Data Integrity Protection in Cloud Computing

The document provides an overview of cloud computing and cloud storage. It discusses how cloud storage allows users to store large amounts of data remotely, but also raises security and integrity challenges. The document then describes the scope and objectives of the project, which are to design an environment that allows remote checking of data integrity and protection in cloud storage systems. Finally, the existing system is described as using erasure coding and regenerating codes to provide redundancy and fault tolerance, but a new system is needed to efficiently verify data integrity.


CHAPTER I

1. INTRODUCTION

1.1 Project Overview


Cloud computing is a next-generation platform that provides the entire
computational stack for applications: the applications themselves, the
infrastructure, resource management, and runtimes. These cloud services are
provided based on the demand of the user.

A user who wants to access cloud services needs to connect to the
internet. Once connected, the user can access almost everything from the cloud,
i.e. ubiquitous computing.

The ultimate task of the cloud is to provide a shared network between the
owner and the user. The cloud manages the resources and provides them to
requesting clients based on authentication. Cloud storage is one of the important
services whose use has grown in the recent era.

Cloud storage provides storage space for user data. A great advantage is
that it relieves the data owner of the burden of storing large amounts of data on
the local system. Thus, moving large files from the local system to the cloud
improves the performance of the system.

As the data is stored in a remote location, user data is exposed to intruders,
and the confidentiality and integrity of the data can be breached. This reduces
trust in the cloud service provider. CSPs use several measures to secure the data,
but these measures never fully cover every aspect. Even so, more and more owners
are starting to store their data in the cloud.

However, this new paradigm of data hosting also introduces new security
challenges. Owners worry that their data could be lost in the cloud, because data
loss can happen in any infrastructure, no matter what degree of reliability
measures cloud service providers take. Sometimes, cloud service providers might
even be dishonest.

A provider could discard data that has not been accessed, or is rarely
accessed, to save storage space, while claiming that the data are still correctly
stored in the cloud. Therefore, owners need to be convinced that their data are
correctly stored in the cloud.

People can easily work together in a group by sharing and storing data
through cloud services. To protect the integrity of data in the cloud, a number of
mechanisms have been proposed. In these mechanisms, a signature is attached to each
block of data, and the integrity of the data relies on the correctness of all the
signatures.

One of the most significant and common features of these mechanisms is that
they allow a public auditor to efficiently check data integrity in the cloud
without downloading the entire data, which is referred to as public auditing.

This public auditor could be a client who would like to utilize cloud data
for particular purposes (e.g. search, computation, data mining) or a Third Party
Auditor (TPA) who provides verification services on data integrity to users. With
shared data, once a user modifies a block, that user also needs to compute a new
signature for the modified block. Due to the modifications from different users,
different blocks are signed by different users.

The proposed mechanism allows a public auditor to efficiently check the data
integrity in the cloud without downloading the entire data. It preserves the
confidentiality of the shared data by using a proxy re-signature mechanism.

In this mechanism, the blocks that were previously assigned to a revoked
user are re-signed by an existing user. For security, a secret key is provided at
login. Public verification techniques allow users to outsource their data to the
cloud while the consistency of the data is checked by a trusted third party called
the auditor. The objective of the public verification scheme is to avoid external
adversary attacks on the data outsourced by the owner.

1.2 Organization Profile


PET Engineering College (PETEC) is a premier engineering institution
approved by the All India Council for Technical Education, New Delhi, recognized
by the government of Tamil Nadu, and affiliated to Anna University. The College
started functioning in September 1998 on its campus site with spacious and
attractive new buildings. The College is located on the Vallioor-Trichendur State
Highway, about 3.5 km from Vallioor Town.

PET Engineering College is sponsored by the Popular Educational Trust, a
registered charitable trust. The trustees, a number of them employed in Gulf
countries, are generous in funding the college and are devoted to the cause of
providing quality general and technical education to the youth of the country in
general and to educationally backward communities in particular. The trustees also
lay stress on the educational upliftment of Muslims. The trust is funded from
contributions received from trustees, patrons, and donors, and is managed by
office bearers elected by the Executive Committee. PET Engineering College is the
first major educational institution of the Trust.

The College is managed by a managing committee with the guidance and advice
of an efficient Advisory Committee comprising distinguished academics such as
Prof. P. O. J. Lebba and Prof. Dr. S. K. Pillai (Professor Emeritus and former
HOD of IIT, Mumbai), well-experienced veterans in the field of technical
education. The objectives of PETEC are as follows:

 To establish and maintain a conducive campus atmosphere, to enable the
students to pursue their studies with ease, to instill in them creativity and to
enable them to achieve academic excellence.
 To create a pleasant working relationship between the teachers and the taught
to continuously achieve academic excellence.
 To provide the faculty with attractive working conditions for excellent
performance and rewards for achievements.
 To create awareness among the students of the relevance of moral values,
integrity in personal life and the importance of Indian heritage.
 To kindle in students the spirit to serve the less privileged communities and
the Indian nation as a whole.
 To conduct job-oriented courses.
 To encourage research and offer solutions and technical know-how on
various problems associated with industries.
 To get NBA accreditation for all the courses of study.
 To get the approval of the affiliating University as a Research Center.

1.3 Scope and Objective of Project


 In cloud computing, integrity of data and access control are challenging
issues.
 Protection of outsourced data in cloud storage becomes critical.
 Regenerating codes provide fault tolerance for the stored data.
 Therefore, remotely checking the integrity of data against corruption and
other issues in a real-time cloud storage setting is our problem of study.
 We practically design and implement a Data Integrity Protection (DIP)
environment.

1.4 System Analysis


System analysis is the process of collecting information, understanding the
processes involved, identifying problems, and recommending feasible solutions for
improving the functioning of the system.

This involves studying the business processes, gathering operational data,
understanding the information flow, and finding solutions for overcoming the
weaknesses of the system so as to achieve the organizational goals. System
analysis also includes subdividing the complex processes involving the entire
system and identifying data stores and manual processes.

System analysis is an iterative process that continues until a preferred,
acceptable solution emerges. Organizations are complex systems that consist of
interrelated and interlocking subsystems. Changes in one part of the system have
both anticipated and unanticipated consequences in other parts of the system.
Systems analysis is the process of observing systems for troubleshooting or
development purposes.

The systems approach is a way of thinking about the analysis and design of
computer-based applications. It provides a framework for visualizing the
organizational and environmental factors that operate on a system. When a computer
is introduced into an organization, its functions affect the user as well as the
organization.

1.4.1 Existing System


The existing system is storage based on erasure coding, which simplifies the
content placement and recovery problem at the cost of longer data retrieval. The
rateless property of the codes provides redundancy.

The existing work also studies the data security requirements of cloud
computing and sets up a mathematical data model for cloud computing. Some of the
features of the existing system are given below.

 Regenerating codes are implemented to minimize repair traffic in the
network.
 The system does not read and reconstruct the whole file during repair.
 It reads a set of chunks smaller than the original file from the other servers
and reconstructs only the lost contents.
 Functional minimum-storage regenerating (FMSR) codes allow clients to
remotely verify the integrity of the data on the servers.
 The servers only need to support standard read and write functionalities.
 It enables integrity protection, fault tolerance, and efficient recovery for
cloud storage.
 A MAC algorithm is used to check data integrity.

Disadvantages of Existing System


 The servers cannot communicate with each other in case of a status failure.
 The system results in high delay when selecting an active server for
communication.
 The data are stored in a main server and a replica server, so when data are
lost it takes a long time to recover them from the replica server.
 It is not reliable.
 It is hard to develop using erasure coding.
 Different types of algorithms are used, so it is more complex.
1.4.2 Proposed System
 The system uses a centralized TPA to analyze the status of each server at
regular intervals.
 The system provides selection of a server in case of a metadata download
request.
 The auditing system indexes the files stored on the servers.
 The delay is much lower than in the existing system.

Advantages of Proposed System

 The proposed system results in higher accuracy of data selection.
 Also, analyzing the selection of a server before the download request
improves the efficiency of the system.
 It is fast and reliable.

1.5 Feasibility Study


The feasibility study of the project is analysed in this phase, and a
business proposal is put forth with a very general plan for the project and some
cost estimates. During system analysis, the feasibility study of the proposed
system is carried out.

This is to ensure that the proposed system is not a burden to the company.
For feasibility analysis, some understanding of the major requirements for the
system is essential.

Feasibility is the best test of a system. It helps in deciding whether it
is viable to go through with the project or not. This document provides the
feasibility of the project being designed and lists the various areas that were
considered very carefully during the feasibility study of this project.

The objective of the feasibility study is to determine whether the system
is feasible or not. Feasibility and risk analysis are related in many ways: if the
project risk is great, the feasibility considerations listed below become equally
important. The following feasibility techniques have been used in the project:

 Technical feasibility
 Economical feasibility
 Operational feasibility

1.5.1 Technical Feasibility


A technical feasibility study is carried out to check the technical
requirements of the system. Any system developed must not place a high demand on
the available technical resources, since this would lead to high demands on those
resources and on the client. The developed system must have modest requirements,
so that only minimal or no changes are required to implement it. Technical
feasibility is a study of the functions, performance, and constraints that may
affect the ability to achieve an acceptable system.

In the technical study, one must find out whether the current technical
resources available in the organization are capable of handling the user
requirements. For example, particular software may work only on a computer with a
higher configuration, so that additional hardware is required. This involves a
financial consideration, and if the budget is under serious constraints, the
proposal will not be considered feasible.

The system requires a machine running Windows 8.1 or a later version of the
operating system, already installed, with an internet or intranet connection. To
develop the project, the system needs a front end such as NetBeans and a back end
such as MySQL; here, NetBeans and MySQL are installed as the front end and back
end.

The hardware and software requirements of the proposed system are already
available to the user. The system is technically feasible, since the development
and implementation of this project is technically possible and requires nothing
beyond what is available in the organisation.

1.5.2 Economical Feasibility


Economic feasibility is the most frequently used method for evaluating the
effectiveness of a candidate system. More commonly known as cost/benefit analysis,
the procedure determines the benefits and savings expected from the candidate
system and compares them with the costs. If the benefits outweigh the costs, the
decision is made to design and implement the system.

The project is economically feasible, as the only cost involved is a
computer with the minimum requirements mentioned earlier. For the user to access
the application, the only cost involved is getting access to the NetBeans and
MySQL software. An evaluation of the development cost is weighed against the
ultimate income or benefit derived from the developed system. There was no need
for extra hardware and software for the development of this project. Hence the
economic feasibility of development in this organization is justified.

The existing setup is sufficient for implementing the system, and in terms
of manpower for software development it is adequate; no extra personnel are
required. The system is developed at a cost lower than the gain obtained from it.

1.5.3 Operational Feasibility


Operational feasibility is a measure of how well a proposed system solves
the problem, takes advantage of the opportunities identified during scope
definition, and satisfies the requirements identified in the analysis phase of
system development.

This aspect of the study checks the level of acceptance of the system by
the users. It includes the process of training users to use the system
effectively. Users must not feel threatened by the system; instead they must
accept it as a necessity. The level of acceptance by the users solely depends on
the methods employed to educate them about the system and to make them familiar
with it.

The users' level of confidence must be raised so that they are also able to
offer constructive criticism, which is welcomed, as they are the final users of
the system. Reasonable acceptance from the user side is necessary for the success
of the system. Since the users welcome the system, it is found to be operationally
feasible.

Here, this system gives the user easy access to store data on the cloud, and
it also ensures the integrity of the data being uploaded. The user's data is thus
secured and remains easily accessible, so this project is operationally feasible.

1.6 System Design
System design is the most creative and challenging part of the system life
cycle. Design commences once the logical model of the existing system is
available. Design begins by using the identified system problems as the basis for
developing objectives for the new system.

System design is the process of defining the architecture, components,
modules, interfaces, and data for a system to satisfy specified requirements.
System design can be seen as the application of systems theory to product
development.

The design phase proceeds in two steps: logical design and physical design.
The logical design of the system pertains to an abstract representation of the
data flows, inputs, and outputs of the system.

The primary objective of the design phase is always to design a system that
delivers the functions required by the client to support the business objectives
of the organization. There are a number of objectives that must be considered if a
good design is to be produced. During logical design, a new logical model is
developed.

The new logical model includes any new processes, or changes to existing
processes, necessary to meet the system objectives. During physical design,
decisions are made about which processes will remain manual and which are to be
computerized. The design covers:
 Architecture Design
 Input Design
 Output Design
 Database Design
 UML Diagram

1.6.1 Architecture Diagram


The software architecture of a program or computing system is the structure
of the system, comprising the software elements, the externally visible properties
of those elements, and the relationships among them.

The requirements of the software should be transformed into an architecture
that describes the software's top-level structure and identifies its components.
Architecture design is the process of defining a collection of hardware and
software components, and the interfaces between them, to establish the framework
for the development of the computer system.

This framework is established by examining the software requirements
document and designing a model that provides implementation details. These details
are used to specify the components of the system along with their inputs, outputs,
functions, and the interactions between them.

Fig 1.1 shows the relationships between the major structural elements of
the software and the design patterns that can be used to achieve the requirements
defined for the system.

The diagram is divided into three parts: the user, the trusted authority,
and the third party auditor. The data are stored in the cloud and in the database.
There are five modules. In data segmentation, the system splits the data into four
parts.

Then a message authentication code is generated for each segment. Data
regeneration occurs when some data are missing, and data integrity protection
verifies the integrity of the data.

Fig 1.1 Architecture Diagram

1.6.2 Input Design


Input design converts user-oriented inputs into computer-oriented formats.
This requires careful attention. The collection of input data is the most
expensive part of the system, in terms of both the equipment used and the number
of people involved. In input design, data are accepted for computer processing,
and input to the system is done through maps created using the basic mapping
support facility.

Inaccurate input data are a common cause of errors in data processing. The
input screens are therefore carefully and logically designed. Different fields are
used for data entry and data access, which makes data entry as easy as possible.
While entering data, validation checks are performed, and messages are generated
by the system in case of incorrect data entry. Some of the features are:

 The input data are validated to minimize data entry errors. Here, when the
user uploads an image file, the system shows a message asking for a text
file only.
 Appropriate messages inform the user about invalid entries. When the
upload button is clicked before a file is selected, the system shows a
message such as 'please select the file'.
 A fixed format is used for displaying titles and messages, and the same
text styles and formats are used throughout the project.
 The title of each form clearly states its purpose; each form gets a title
such as Upload, TPA, etc.
 Headings for all data items are clearly given.
 Adequate space is given for each data item.
 Forms are not crowded, as crowded forms are difficult to read and validate;
the forms are clearly designed. A sketch of these validation checks follows.

1.6.3 Output Design


In any system, the results of processing are communicated to the users and
to other systems through outputs. Output design determines how the information is
to be displayed for immediate need and for hard-copy output. It is the most
important direct source of information for the user.

Every time the user interacts with the system, the output should be easily
accessible. Each output design clearly describes the process. Efficient and
intelligent output design improves the user's relationship with the system and
helps in decision making.

Output testing includes reports in a specific format, displays for
inquiries, and a simple profile of the database. When the user uploads a file, it
is uploaded to the cloud: the user only needs to specify the file, and the system
itself splits the file, generates the MAC, and stores it.

1.6.4 Database Design


Database design is a collection of processes that facilitate the design,
development, implementation, and maintenance of enterprise data management
systems. It helps produce database systems that meet the requirements of the users
and have high performance.

The main objectives of database design are to produce the logical and
physical design models of the proposed database system. The logical model
concentrates on the data requirements and on the data to be stored, independent of
physical considerations. It does not concern itself with how or where the data
will be stored physically. The physical design model involves translating the
logical design of the database onto physical media, using hardware resources and
software systems such as a database management system (DBMS).

Table 1.1 MAC Table

Field Name   Data Type     Constraints
Server ID    Int(5)        Not Null
MAC          Varchar(50)   Not Null

This table is used to store the MAC value of each server. The MAC value is
used to check the integrity of the data: when the system regenerates the MAC for a
failed server, the regenerated value must match the MAC stored in this table. A
small sketch of how the application might read and write this table follows.

1.6.5 UML Diagram


Unified Modeling Language (UML) is a general-purpose modelling language.
The main aim of UML is to define a standard way to visualize how a system has been
designed. It is quite similar to blueprints used in other fields of engineering.

UML is not a programming language; it is a visual language. UML diagrams
are used to portray the behavior and structure of a system. UML helps software
engineers, businessmen, and system architects with modeling, design, and analysis.
The Object Management Group (OMG) adopted the Unified Modeling Language as a
standard in 1997, and it has been managed by the OMG ever since. The International
Organization for Standardization (ISO) published UML as an approved standard in
2005. UML has been revised over the years and is reviewed periodically. Some
reasons why we need UML are given below.
 Complex applications need collaboration and planning across multiple teams
and hence require a clear and concise way to communicate among them.
 Businessmen do not understand code, so UML becomes essential for
communicating the essential requirements, functionalities, and processes of
the system to non-programmers.
 A lot of time is saved down the line when teams are able to visualize
processes, user interactions, and the static structure of the system.

UML is linked with object-oriented design and analysis. UML makes use of
elements and forms associations between them to create diagrams. Diagrams in UML
can be broadly classified as:
 Structural Diagrams – capture the static aspects or structure of a system.
Structural diagrams include Component Diagrams, Object Diagrams, Class
Diagrams, and Deployment Diagrams.
 Behavior Diagrams – capture the dynamic aspects or behavior of the system.
Behavior diagrams include Use Case Diagrams, State Diagrams, Activity
Diagrams, and Interaction Diagrams; they specify the behavior of the
system.

Object Oriented Concepts Used in UML


 Class – A class defines the blueprint, i.e. the structure and functions of
an object.
 Objects – Objects help us to decompose large systems and help us to
modularize our system. Modularity helps to divide our system into
understandable components so that we can build our system piece by piece.
An object is the fundamental unit (building block) of a system which is used
to depict an entity.
 Inheritance – Inheritance is a mechanism by which child classes inherit the
properties of their parent classes.
 Abstraction – Mechanism by which implementation details are hidden from
user.
 Encapsulation – Binding data together and protecting it from the outer
world is referred to as encapsulation.
 Polymorphism – Mechanism by which functions or entities are able to exist
in different forms.

Use Case Diagram


Use case diagrams are used to depict the functionality of a system or a
part of a system. They are widely used to illustrate the functional requirements
of the system and its interaction with external agents (actors).

A use case diagram basically represents the different scenarios in which
the system can be used. It gives us a high-level view of what the system, or a
part of the system, does without going into implementation details.

In figure 1.2 there are three actors: client, TPA, and server. The client
is only able to upload data and to contact the trusted authority to download data.
The server contacts the trusted authority and the public auditor. The TPA is
responsible for data integrity and data recovery.

Fig 1.2 Use Case Diagram

Class Diagram
A class diagram is a static diagram. It represents the static view of an
application. Class diagrams are used not only for visualizing, describing, and
documenting different aspects of a system but also for constructing executable
code of the software application.

A class diagram in the Unified Modeling Language (UML) is a type of static
structure diagram that describes the structure of a system by showing the
system's classes, their attributes, operations (or methods), and the relationships
among objects.

A class diagram describes the attributes and operations of a class and also
the constraints imposed on the system. Class diagrams are widely used in the
modeling of object-oriented systems because they are the only UML diagrams that
can be mapped directly to object-oriented languages.

A class diagram shows a collection of classes, interfaces, associations,
collaborations, and constraints. It is also known as a structural diagram. It also
shows the various operations performed by the system.

Fig 1.3 Class diagram

There are four classes in the project: Client, Trusted Authority, TPA, and
File Storage. These classes are interrelated. Client has two attributes, the user
file and the file attributes (the name and the size of the file), and one
operation, upload file. Trusted Authority has two variables, user file and client
connection.

The trusted authority has three operations: receive files, contact the TPA,
and save user files on the cloud. TPA has one variable, the access key, and two
operations: check the status of each server and respond back. File Storage has one
variable, server details, and four operations: data index, key index, recovery
data, and recovery index.

Interaction Diagram
From the term interaction, it is clear that this diagram is used to describe
the interactions among the different elements in the model. This interaction is
part of the dynamic behavior of the system.

This interactive behavior is represented in UML by two diagrams, known as
the sequence diagram and the collaboration diagram; the basic purpose of both
diagrams is similar.

A sequence diagram emphasizes the time sequence of messages, while a
collaboration diagram emphasizes the structural organization of the objects that
send and receive messages.

Fig 1.4 Sequence diagram

Steps involved in fig 1.4

 The user uploads a file to the trusted authority.
 The files are stored on the servers.
 The user requests a file download.
 The TPA checks the status of all the servers.
 It gets a failed status for one server.
 The TPA starts data recovery on the servers.
 Data integrity is confirmed.
 Then the requested files are downloaded.
Activity Diagram
Activity diagrams are used to illustrate the flow of control in a system.
An activity diagram is also used to show the steps involved in the execution of a
use case. We model sequential and concurrent activities using activity diagrams;
they basically depict workflows visually.

An activity diagram focuses on the conditions of flow and the sequence in
which things happen. It describes or depicts what causes a particular event.

An activity is a particular operation of the system. Activity diagrams are
not only used for visualizing the dynamic nature of a system; they are also used
to construct the executable system by using forward and reverse engineering
techniques. The only thing missing in an activity diagram is the message part.

It does not show any message flow from one activity to another. An activity
diagram is sometimes considered a flowchart; although the diagrams look like
flowcharts, they are not. An activity diagram shows different flows, such as
parallel, branched, concurrent, and single flows.

The purpose of an activity diagram can be described as

 Draw the activity flow of a system.

 Describe the sequence from one activity to another.

 Describe the parallel, branched and concurrent flow of the system.



In fig 1.5, the user first uploads the file via the trusted authority, and
the files are stored on the servers. The TPA keeps checking with the trusted
authority whether the servers are active or not. The uploaded files are split into
several parts and stored on the servers. When the files are downloaded, the
integrity of the data is checked by the TPA.

Fig 1.5 Activity Diagram

1.7 System Specification


To be used efficiently, all computer software needs certain hardware
components or other software resources to be present on a computer. These
prerequisites are known as system requirements and are often used as a guideline,
as opposed to an absolute rule. Most software defines two sets of system
requirements: minimum and recommended.

Industry analysts suggest that this trend plays a bigger part in driving
upgrades to existing computer systems than technological advancements. A second
meaning of the term system requirements is a generalisation of this first
definition, giving the requirements to be met in the design of a system or
subsystem. In the specifications, the latest hardware and software must be
proposed to enable faster retrieval of the information. This involves two
concepts, as follows:
 Hardware Specification
 Software Specification

Hardware Requirements
A good hardware selection plays a vital role in the development of an
application. The most common set of requirements defined by any operating system
or software application is for physical computer resources, also known as
hardware.

A hardware requirements list is often accompanied by a hardware
compatibility list, especially in the case of operating systems.

CPU : Intel Dual Core 2.4 GHz or later
RAM : 2 GB DDR2
Hard Disk : 160 GB
Display : Wide VGA (Video Graphics Array)
Input : Keyboard and Mouse

Software Requirements
Software requirements deal with defining software resource requirements and
prerequisites that need to be installed on a computer to provide optimal functioning
of an application.

These requirements or prerequisites are generally not included in the
software installation package and need to be installed separately before the
software is installed.

Front End : Java 7
Back End : MySQL (XAMPP)
IDE : NetBeans 8.2
Platform : Windows 7 or later

1.8 Software Description


NETBEANS
NetBeans IDE is the official IDE for Java 8. With its editors, code analyzers,
and converters, you can quickly and smoothly upgrade your applications to use new
Java 8 language constructs, such as lambdas, functional operations, and method
references.

NetBeans is coded in Java and runs on most operating systems with a Java
Virtual Machine (JVM), including Solaris, Mac OS, and Linux.

Batch analyzers and converters are provided to search through multiple
applications at the same time, matching patterns for conversion to new Java 8
language constructs. With its constantly improving Java editor, many rich features
and an extensive range of tools, templates and samples, NetBeans IDE sets the
standard for developing with cutting-edge technologies out of the box.

An IDE is much more than a text editor. The NetBeans Editor indents lines,
matches words and brackets, and highlights source code syntactically and
semantically. It lets you easily refactor code, with a range of handy and powerful
tools, while it also provides code templates, coding tips, and code generators.
The editor supports many languages from Java, C/C++, XML and HTML, to PHP,
Groovy, Javadoc, JavaScript and JSP. Because the editor is extensible, you can plug
in support for many other languages.

The NetBeans Profiler provides expert assistance for optimizing your
application's speed and memory usage, and makes it easier to build reliable and
scalable Java SE, JavaFX and Java EE applications.

NetBeans IDE includes a visual debugger for Java SE applications, letting
you debug user interfaces without looking into source code. Take GUI snapshots of
your applications and click on user interface elements to jump back into the
related source code.

NetBeans IDE can be installed on all operating systems that support Java,
from Windows to Linux to Mac OS X systems. Write Once, Run Anywhere is as true for
NetBeans IDE as it is for your own applications, because NetBeans IDE itself is
written in Java, too.

NetBeans uses components, also known as modules, to enable software
development. NetBeans dynamically installs modules and allows users to download
updated features and digitally authenticated upgrades. NetBeans IDE modules
include NetBeans Profiler, a Graphical User Interface (GUI) design tool, and
NetBeans JavaScript Editor.

NetBeans framework reusability simplifies Java Swing desktop application
development, which provides platform extension capabilities to third-party
developers.

JAVA
The Java programming language was originally developed by Sun Microsystems,
initiated by James Gosling, and released in 1995 as a core component of Sun
Microsystems' Java platform.

The latest release of the Java Standard Edition is Java SE 8. With the
advancement of Java and its widespread popularity, multiple configurations were
built to suit various types of platforms.

For example: J2EE for Enterprise Applications, J2ME for Mobile
Applications. The new J2 versions were renamed as Java SE, Java EE, and Java ME
respectively. Java is guaranteed to be Write Once, Run Anywhere.

Java is
 Object Oriented − In Java, everything is an Object. Java can be easily
extended since it is based on the Object model.
 Platform Independent − Unlike many other programming languages
including C and C++, when Java is compiled, it is not compiled into
platform specific machine, rather into platform independent byte code. This
byte code is distributed over the web and interpreted by the Virtual Machine
(JVM) on whichever platform it is being run on.
 Simple − Java is designed to be easy to learn. If you understand the basic
concept of OOP Java, it would be easy to master.
 Secure − With Java's secure feature it enables to develop virus-free, tamper-
free systems. Authentication techniques are based on public-key encryption.
 Architecture-neutral − Java compiler generates an architecture-neutral
object file format, which makes the compiled code executable on many
processors, with the presence of Java runtime system.
 Portable − Being architecture-neutral and having no implementation
dependent aspects of the specification makes Java portable. Compiler in Java
is written in ANSI C with a clean portability boundary, which is a POSIX
subset.
 Robust − Java makes an effort to eliminate error prone situations by
emphasizing mainly on compile time error checking and runtime checking.
 Multithreaded − With Java's multithreaded feature it is possible to write
programs that can perform many tasks simultaneously. This design feature
allows the developers to construct interactive applications that can run
smoothly.
 Interpreted − Java byte code is translated on the fly to native machine
instructions and is not stored anywhere. The development process is more
rapid and analytical since the linking is an incremental and light-weight
process.
 High Performance − With the use of Just-In-Time compilers, Java enables
high performance.

MySQL Database
MySQL is a fast, easy-to-use RDBMS used by many small and big businesses.
MySQL is developed, marketed, and supported by MySQL AB, a Swedish company. MySQL
is becoming popular for many good reasons:

 MySQL is released under an open-source license, so you have nothing to
pay to use it.
 MySQL is a very powerful program in its own right. It handles a large
subset of the functionality of the most expensive and powerful database
packages.
 MySQL uses a standard form of the well-known SQL data language.
 MySQL works on many operating systems and with many languages
including PHP, PERL, C, C++, JAVA, etc.
 MySQL works very quickly and works well even with large data sets.
 MySQL is very friendly to PHP, the most appreciated language for web
development.
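Since the project pairs Java (NetBeans) with a MySQL back end, a minimal
connectivity smoke test might look like the sketch below. The XAMPP defaults
(localhost:3306, user "root", empty password) and the database name "dip" are
assumptions, not taken from the project.

import java.sql.Connection;
import java.sql.DriverManager;

public class MySqlConnectionTest {
    public static void main(String[] args) {
        // Assumed XAMPP defaults; the database name "dip" is hypothetical.
        String url = "jdbc:mysql://localhost:3306/dip";
        try (Connection con = DriverManager.getConnection(url, "root", "")) {
            System.out.println("Connected: " + !con.isClosed());
        } catch (Exception e) {
            System.err.println("Connection failed: " + e.getMessage());
        }
    }
}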

CHAPTER II

2. PROJECT DESCRIPTION

2.1 Overview of the Project


Data integrity protection in cloud computing is designed as a secure
framework that integrates different tools to protect integrity on the cloud. This
project is used to check the integrity of the data uploaded to the cloud.

Integrity, in terms of data security, is the guarantee that data can only be
accessed or modified by those authorized to do so; in simple words, it is the
process of verifying data. Data integrity is very important among the other cloud
challenges, as it gives the guarantee that data is of high quality, correct, and
unmodified.

After storing data in the cloud, users depend on the cloud to provide
reliable services and hope that their data and applications are secure. That hope
may fail: sometimes a user's data may be altered or deleted. The cloud service
providers may be dishonest, discarding data that has not been accessed, or is
rarely accessed, to save storage space, or keeping fewer replicas than promised.

Moreover, the cloud service providers may choose to hide data loss and claim
that the data are still correctly stored in the cloud. As a result, data owners
need to be convinced that their data are correctly stored. One of the biggest
concerns with cloud data storage is therefore data integrity verification at
untrusted servers. This project gives a way to solve the above problems.

In this project, the user first selects the file to upload to the cloud.
The project uses four servers to store the data. The user's data is split into
four different files and uploaded to the four servers. The data on each server is
then split again into three parts and stored on the servers other than its own.
The uploaded data are divided based on their binary size value. A MAC is then
created for each server based on its content, and after that the data are uploaded
to storage. The user's data are now stored on different servers with different
contents.

When the user wants to download the data, the user sends a download request
to the trusted authority, which retrieves the data from the servers and gives it
to the user. If some data are missing, or any one of the four servers has crashed,
the system starts the data recovery process.

In data recovery, the data of the lost server is fetched from the remaining
servers. After collecting the data, the system regenerates the MAC for the lost
data. If the newly generated MAC matches the old one, data integrity is verified
and the lost data is recovered.

The recovery process is done by the TPA (Third Party Auditor), whose job is
to check all the servers at a particular time interval. If any server is lost or
crashes, the TPA starts the recovery process. In the existing system, the data are
stored on only two servers, an original server and a replica server; if the data
are lost, it is very difficult to recover them in a short time.

The proposed mechanism, in contrast, allows a public auditor to efficiently
check the data integrity in the cloud without downloading the entire data, and it
preserves the confidentiality of the shared data by using a proxy re-signature
mechanism.

In this mechanism, the blocks that were previously assigned to a revoked
user are re-signed by an existing user. For security, a secret key is provided at
login. The project's database stores only the MAC values of the servers. The
project is developed using NetBeans.

This project covers the problems of security and reliability. A multi-cloud
implementation is used, where the data is stored in two different clouds;
therefore, even if a single cloud fails entirely, the user does not suffer data
loss, since the data can be retrieved from the other cloud.

Program Design
If upload is requested
 Generate the per-file secrets.
 Split the file into four parts according to size.
 Encode each chunk with Blowfish (a Blowfish sketch follows this outline).
 Store the chunks on the four respective cloud servers.
 Update the metadata file and upload it.

Else if download is requested
 Check the metadata file.
 Decode the encoded chunks for file F.
 Merge and download the decoded chunks for file F.

Else if the TPA process is to be checked
 Periodically update the server status.
 Update the fail index for download selection.
 Data index validation prepares the selection process with higher accuracy.

Else if recovery is requested (repair operation)
 Check the metadata file.
 Regenerate the file. In particular, if there is only one failed server, then
instead of trying to download k(n-k) chunks from any k servers, download
one chunk from the backend server.
 Decode the encoded chunks for file F.
 Merge and download the decoded chunks for file F.
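The outline above mentions encoding each chunk with Blowfish. The following
minimal sketch shows a Blowfish encrypt/decrypt round trip using the standard
javax.crypto API; key management is simplified, and the project's per-file secrets
are not modeled here.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class ChunkCipher {
    // Sketch of the "encode each chunk with Blowfish" step above.
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("Blowfish").generateKey();

        byte[] chunk = "example chunk contents".getBytes("UTF-8");

        Cipher enc = Cipher.getInstance("Blowfish");
        enc.init(Cipher.ENCRYPT_MODE, key);
        byte[] sealed = enc.doFinal(chunk);

        Cipher dec = Cipher.getInstance("Blowfish");
        dec.init(Cipher.DECRYPT_MODE, key);
        byte[] opened = dec.doFinal(sealed);

        System.out.println(new String(opened, "UTF-8")); // original chunk restored
    }
}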

2.2 Modules
 Data Processing
 Indexing
 Third Party Auditor
 Regeneration Code
 Integrity Verification

2.2.1 Data Processing


In this module, metadata processing suitable for data-intensive and
computation-intensive applications is applied. There is a serious requirement to
deal with data security issues, preserving data integrity, privacy, and trust in
the security environment.

Security concerns are keeping some organizations from adopting cloud
computing at all. In this module, data owners first encode the metadata files
using a regenerating code and then store the coded file across multiple cloud
servers. The multiple cloud web servers may be located at the same provider or at
different service providers. Data owners may perform block-level active functions
on the outsourced data.

Fig 2.1 Data Processing (figure: data, file upload, check attributes, file
sender, storage server)

In fig 2.1:
 Data represents the data to be uploaded.
 File upload represents the file upload mechanism.
 Check attributes represents the attributes of the file, such as name and size.
 Storage server represents the storage area of the data.

2.2.2 Meta Indexing
In this module, meta indexing is proposed using a data structure that
supports dynamic data update operations, in which the data owner needs to store
the block index and the logical block location for each block of the outsourced
file.

The main advantage of this method is that it is able to support dynamic
update operations efficiently while avoiding the node re-balancing problem. A
minimal sketch of such an index record follows.
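The sketch below models the index record described above with a simple in-memory
map. All class, field, and method names are illustrative assumptions, not the
project's actual structures.

import java.util.HashMap;
import java.util.Map;

public class BlockIndex {
    // For each block of an outsourced file, the owner keeps the block index
    // and its logical location (here, which server stores it and at what offset).
    public static class Location {
        final int serverId;
        final long offset;
        Location(int serverId, long offset) {
            this.serverId = serverId;
            this.offset = offset;
        }
    }

    private final Map<Integer, Location> index = new HashMap<>();

    public void put(int blockNo, int serverId, long offset) {
        index.put(blockNo, new Location(serverId, offset));
    }

    public Location lookup(int blockNo) {
        return index.get(blockNo);
    }
}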

Figure 2.2 Data Indexing (figure: data, index service, data index, uploaded
list, file index)


In fig 2.2:
 Data represents the user's data to be updated.
 Index service represents the MAC indexing service.
 Data index represents the index of the data on the server.
 Fail index represents the failed server index.
 When any change occurs, it is uploaded.



2.2.3 Third Party Auditor


In this module, a third party auditor, specifically a sole third party
auditor, is used for data integrity confirmation. The TPA helps an end user verify
the metadata. Access control should be applied to identify legitimate users and
minimize the possibility of unauthorized users.

The communication and computation expense should be reduced. Information
integrity with high security can be ensured when blocks of information are
distributed between multiple auditors for verification.

Figure 2.3 Third Party Auditor (figure: data, index service, data index,
file index, TPA, authorized access)

In fig 2.3:
 Data represents the user's data to be updated.
 Index service represents the MAC indexing service.
 Data index represents the index of the data on the server.
 Fail index represents the failed server index.
 The TPA manages the entire indexing service, periodically updates the list,
and performs recovery (see the sketch after this list).
 Authorized access means access for the trusted authority cloud.
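As a rough sketch of the periodic status check described above, the following uses
a scheduled task that probes each server. The host names, the port, and the
30-second interval are assumptions for illustration.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TpaStatusChecker {
    // Hypothetical probe: a server counts as alive if a TCP connection succeeds.
    static boolean isServerAlive(String host, int port) {
        try (java.net.Socket s = new java.net.Socket(host, port)) {
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String[] servers = {"server1", "server2", "server3", "server4"};
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        // Poll every 30 seconds; the interval is an assumption.
        scheduler.scheduleAtFixedRate(() -> {
            for (int i = 0; i < servers.length; i++) {
                boolean up = isServerAlive(servers[i], 8080);
                System.out.println("Server " + (i + 1) + ": "
                        + (up ? "ACTIVE" : "FAILED"));
                // A failed server would be written to the fail index here.
            }
        }, 0, 30, TimeUnit.SECONDS);
    }
}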

2.2.4 Regeneration Codes


In this module, the storage that holds data and information on the cloud is
bound by data integrity obligations. Data integrity rests on the assurance, sought
by the user, that data are unaltered on the provider's infrastructure. Data
integrity threats involve both malicious third-party activity and weaknesses of
the hosting infrastructure.

Protecting data from loss and leakage involves the integrity of the many
parties involved in providing the resources. Schemes and mechanisms are needed to
ensure that the data and information kept on the cloud are not altered or removed.
It is suggested to practice auditing techniques such as proof-of-retrievability
and proof-of-data-possession to enable verification.

Fig 2.4 Regeneration Codes (figure: data, data index, file index, TPA, POR,
PODP)

This figure explains the regeneration of the lost server's data. The TPA
performs the recovery process. After recovery, the regenerated MAC is matched
against the previous MAC; if they match, data integrity is confirmed and the lost
data is recovered, as sketched below.
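The following minimal sketch shows this recovery-and-verify step. The document
does not name a concrete MAC algorithm, so HMAC-SHA256 stands in for it; all
method and parameter names are illustrative.

import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class RecoveryCheck {
    // The lost server's three sub-parts are fetched from the surviving servers,
    // concatenated, and the MAC is recomputed and compared with the stored value.
    public static boolean recoverAndVerify(byte[][] partsFromOtherServers,
                                           byte[] key,
                                           byte[] storedMac) throws Exception {
        java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
        for (byte[] part : partsFromOtherServers) {
            out.write(part); // reassemble the lost server's content
        }
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] freshMac = mac.doFinal(out.toByteArray());
        // Constant-time comparison; a match confirms the recovered data's integrity.
        return MessageDigest.isEqual(freshMac, storedMac);
    }
}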

2.2.5 Integrity Verification

In this module, integrity verification provides a guarantee that the data
will always be available, regardless of hardware failures, corrupted physical
disks, or downtime.

Hardware failures can happen at any time, including failures caused by
environmental events such as a natural disaster, flood, or fire. A hardware design
should be built on a basis of redundancy and minimal single points of failure. At
the design phase, the analyst creates a physical hardware map that shows all the
connection points for servers, storage, network, and software.

Figure 2.5 Integrity Verification (figure: data, data index, index
selection, TPA, download, failed index, failed server, regular interval, server
selection)

2.3 Algorithm
MAC
The MAC algorithm is a symmetric-key cryptographic technique for providing
message authentication. To establish the MAC process, the sender and receiver
share a symmetric key K. Some of its features are:

 The MAC is generated by an algorithm that creates a small, fixed-size block.
 It depends on both the message and a secret key.
 It is like encryption, though it need not be reversible.
 It is appended to the message as a signature.
 The receiver performs the same computation on the message and checks that
it matches the MAC.
 This provides assurance that the message is unaltered and comes from the
sender.

Uses of MAC
A MAC provides authentication; encryption can also be used for secrecy.
Generally, separate keys are used for each, and the MAC can be computed either
before or after encryption, although computing it before encryption is generally
regarded as better.

A MAC is used when only authentication is needed, or when authentication
needs to persist longer than the encryption (e.g. archival use). Note that a MAC
is not a digital signature.

MAC Properties
A MAC is a cryptographic checksum:

MAC = C_K(M)

It condenses a variable-length message M, using a secret key K, into a
fixed-size authenticator.

A MAC is a many-to-one function: many messages have the same MAC, but
finding such collisions must be very difficult.

Requirements for MACs

Taking into account the types of attacks, the requirements are:
 Knowing a message and its MAC, it must be infeasible to find another
message with the same MAC.
 MACs should be uniformly distributed.
 The MAC should depend equally on all bits of the message.

Using Symmetric Ciphers for MACs

Any block cipher chaining mode can be used, taking the final block as the
MAC. The Data Authentication Algorithm (DAA) is a widely used MAC based on
DES-CBC, using IV = 0 and zero-padding of the final block.

Symmetric encryption is a form of computerized cryptography that uses a
single encryption key to disguise an electronic message. Its data conversion uses
a mathematical algorithm along with a secret key, which makes the message
unintelligible without the key.

The message is encrypted using DES in CBC mode, and just the final block,
or the leftmost M bits (16 ≤ M ≤ 64) of the final block, is sent as the MAC. A
sketch of this DAA-style construction follows.
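Under the assumptions just stated (IV = 0, zero padding, final block as MAC), a
DAA-style CBC-MAC can be sketched as below. DES and raw CBC-MAC are shown purely
to illustrate the construction, not as a recommendation for modern use.

import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class CbcMacSketch {
    // Zero-pad the message to a multiple of the 8-byte DES block, encrypt in
    // CBC mode with IV = 0, and keep only the final ciphertext block as the MAC.
    public static byte[] cbcMac(byte[] key8, byte[] message) throws Exception {
        int padded = ((message.length + 7) / 8) * 8;
        byte[] data = Arrays.copyOf(message, Math.max(padded, 8)); // zero-pad

        Cipher des = Cipher.getInstance("DES/CBC/NoPadding");
        des.init(Cipher.ENCRYPT_MODE,
                 new SecretKeySpec(key8, "DES"),
                 new IvParameterSpec(new byte[8])); // IV = 0
        byte[] ct = des.doFinal(data);
        return Arrays.copyOfRange(ct, ct.length - 8, ct.length); // final block
    }
}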

Essentially, a MAC is an encrypted checksum generated on the underlying
message that is sent along with the message to ensure message authentication.

The process of using a MAC for authentication is depicted in the following
illustration.

Fig 2.6 MAC Algorithm


Let us now try to understand the entire process in detail −

 The sender uses some publicly known MAC algorithm, inputs the message
and the secret key K and produces a MAC value.

 Similar to hash, MAC function also compresses an arbitrary long input into a
fixed length output. The major difference between hash and MAC is that
MAC uses secret key during the compression.

 The sender forwards the message along with the MAC. Here, we assume that
the message is sent in the clear, as we are concerned with providing message
origin authentication, not confidentiality. If confidentiality is required,
then the message needs encryption.

 On receipt of the message and the MAC, the receiver feeds the received
message and the shared secret key K into the MAC algorithm and re-
computes the MAC value.

 The receiver now checks equality of freshly computed MAC with the MAC
received from the sender. If they match, then the receiver accepts the
message and assures himself that the message has been sent by the intended
sender.

 If the computed MAC does not match the MAC sent by the sender, the
receiver cannot determine whether it is the message that has been altered or
whether the origin has been falsified. The round trip is sketched below.
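The steps above can be traced in a short sketch. Since the text does not fix a
concrete MAC algorithm, HMAC-SHA256 from the standard javax.crypto API is used
here as a stand-in; the key and message values are illustrative.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class MacDemo {
    public static void main(String[] args) throws Exception {
        byte[] sharedKey = "a-shared-secret-key".getBytes(StandardCharsets.UTF_8);
        byte[] message = "file segment contents".getBytes(StandardCharsets.UTF_8);

        // Sender: compute the MAC over the message with the shared key K.
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedKey, "HmacSHA256"));
        byte[] sent = mac.doFinal(message);

        // Receiver: recompute the MAC over the received message and compare.
        Mac verify = Mac.getInstance("HmacSHA256");
        verify.init(new SecretKeySpec(sharedKey, "HmacSHA256"));
        byte[] recomputed = verify.doFinal(message);

        // MessageDigest.isEqual compares the two values in constant time.
        System.out.println("Integrity confirmed: "
                + MessageDigest.isEqual(sent, recomputed));
    }
}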

2.4 Framework
The application is developed in the NetBeans framework, using a WAMP server
and MySQL as the backend for connectivity. The framework of this system involves:

 View Files
 Upload Files
 Split Files
 Share Files
 Generate MAC
 Server Status
 Failed Server
 Data Integrity Checking

2.4.1 View Files

This is the first form in the application, and two options are available
here. It allows us to view the file being uploaded: when the user clicks the view
button, the content of the file appears in the right-side panel. Refer to app. 1
for details.

2.4.2 Upload Files

The upload file form is used to upload data to the server. When the user
clicks the upload button, it redirects to another page where the data sent by the
user are received. The upload form is viewed on the client side and the
file-receive form on the server side, because the files are received only by the
server. Refer to app. 2 for a detailed view.

2.4.3 Split Files

This is a server-side operation, performed once the file sent by the user
has been received. When the split button is clicked, the third party auditor
splits the received file and uploads the parts to the servers. The file is split
based on its size: the size of the file is first converted to binary, and that
value is divided by four. This is how the data are split; a sketch follows. Refer
to app 1.3, where the file is split into four different files and a success
message box appears.
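The exact splitting rule is only described loosely above, so the sketch below
assumes equal-sized quarters with the remainder in the last part; the input file
name is hypothetical.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;

public class FileSplitter {
    // Read the file and cut it into four nearly equal parts.
    public static void splitIntoFour(Path source) throws IOException {
        byte[] all = Files.readAllBytes(source);
        int base = all.length / 4;
        int offset = 0;
        for (int i = 0; i < 4; i++) {
            int len = (i == 3) ? all.length - offset : base;
            byte[] part = Arrays.copyOfRange(all, offset, offset + len);
            Files.write(Paths.get(source.toString() + ".part" + (i + 1)), part);
            offset += len;
        }
    }

    public static void main(String[] args) throws IOException {
        splitIntoFour(Paths.get("upload.txt")); // hypothetical input file
    }
}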

2.4.4 Share Files

This is a server-side operation. Refer to app 1.4: when the share button is
clicked, the files are shared to the four available servers. Each part of the file
is again split into three parts, and these three parts are stored on the three
servers other than its own.

2.4.5 Generate MAC

In this phase, the TPA generates the message authentication code for each
server based on its data. After generating the MAC, the save button is clicked to
save the MAC value. This MAC value is used for data integrity verification and is
stored in the MySQL database. Refer to app 1.5 for details.

2.4.6 Server Status

App 1.5 shows the status of each server. The server status is checked by
the TPA at particular time intervals. In app 1.5, server 3 is shown as failed.
Clicking next moves on to the next form.

2.4.7 Failed Server

App 1.5 shows the name of each server and its status; there, the server
named server three has the status failed. The right side does not show any of its
own or other servers' data.

This clearly tells that server three has crashed and recovery needs to be
performed, because a file cannot be downloaded when any one of its parts is
missing. Clicking the load button moves to the next form, failed server content.

2.4.8 Failed Server Content

When the load button is clicked on the form, it moves to the next page,
which shows the content of the crashed server as stored on the other servers.
Clicking the next button moves to the next form, data integrity checking.

2.4.9 Data Integrity Check


Data integrity is a fundamental component of information security. In its
broadest use, “data integrity” refers to the accuracy and consistency of data stored in
a database, data warehouse, data mart or other construct.

The term data integrity can be used to describe a state, a process or a
function, and is often used as a proxy for “data quality”. Data with “integrity”
is said to have a complete or whole structure. Data values are standardized
according to a data model and/or data type.

All characteristics of the data must be correct (including business rules,
relations, dates, definitions and lineage) for data to be complete. Data integrity
is imposed within a database when it is designed and is authenticated through the
ongoing use of error checking and validation routines.

In this form, the data recovered from the other servers is combined. Clicking the Recovery MAC button then regenerates the MAC for the recovered content.

When Data Integrity Checking is clicked, the recovery MAC is compared with the old MAC; if the two match, data integrity is confirmed and the lost data has been recovered. Refer to App 1.9 for details.
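Continuing the HMAC-SHA256 assumption from the Generate MAC step, the verification can be sketched as follows; hmac.compare_digest gives a constant-time comparison of the recovery MAC against the MAC saved before the failure.

import hashlib
import hmac

TPA_KEY = b'tpa-secret-key'  # placeholder; same key used at upload time

def verify_integrity(recovered, stored_mac):
    # Recompute the MAC over the recovered content and compare.
    recovery_mac = hmac.new(TPA_KEY, recovered, hashlib.sha256).hexdigest()
    return hmac.compare_digest(recovery_mac, stored_mac)

stored = hmac.new(TPA_KEY, b'ghi', hashlib.sha256).hexdigest()  # saved earlier
print(verify_integrity(b'ghi', stored))  # True: lost data recovered intact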

Fig 2.7 Work Flow of DIP

The above figure explains the structure of the project.

 First the client uploads the data, for example ‘abcdefghijkl’.
 Next the data is split into four parts and shared among four servers, S1, S2, S3 and S4.
 Now S1 holds ‘abc’, S2 holds ‘def’, S3 holds ‘ghi’ and S4 holds ‘jkl’.
 Next each part is again split and distributed among the other three servers.
 For example, S1’s part is split into three sub-parts and stored on S2, S3 and S4.
 Then S2’s part is split into three sub-parts and stored on S1, S3 and S4.
 The remaining parts are stored in the same way. The figure shows how the data is stored across the four servers.

CHAPTER III
3. TESTING METHODOLOGY

Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. It also provides an objective, independent view of the software that allows the business to appreciate and understand the risks of implementing the software. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs. The purpose of testing is to discover errors.

Testing is the process of verifying and trying to discover every conceivable fault or weakness in the product. It provides a way to check the functionality of components, sub-assemblies and the finished product.

It is the process of exercising software with the intent of ensuring that the software meets the system requirements and user expectations and does not fail in an unacceptable manner. Testing is the process of executing a program with the intent of finding errors, and it presents an interesting anomaly for the software engineer.

3.1 System Testing
System testing is a stage of implementation aimed at ensuring that the system works accurately and efficiently as expected before live operation commences. It verifies that the whole set of programs hangs together. System testing requires test plans consisting of several key elements. Implementing the newly designed package properly is important for the successful adoption of the new system.

Testing is an important stage in software development. System testing during implementation should confirm that the system works as expected. Testing accounts for the largest percentage of technical effort in the software development process.

The objective of this testing is to discover errors. To fulfil this objective, a series of test steps – unit, integration, validation and output testing – was planned and executed. Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding.

3.2 Types of Testing
The goal of testing is to improve the program's quality. Quality is assured primarily through software testing, whose history goes back to the beginning of the computing field.

Testing is done at two levels: testing of individual modules and testing of the entire system. During system testing, the system is exercised experimentally to ensure that the software runs according to its specification and in the way the user expects. Testing is tedious and time consuming. Each test case is designed with the intent of finding errors in the way the system processes its input.

Testing objectives
 Testing is a process of executing a program with the intent of finding errors.
 A good test case is one that has a high probability of finding an as yet undiscovered error.
 A successful test is one that uncovers an as yet undiscovered error.

Test strategy
The purpose of testing is to find defects. A test strategy tells which types of testing seem best to do, the order in which to perform them, the proposed sequence of execution, and the optimum amount of effort to put into each test objective to make the testing most effective.

The test strategy is based on the prioritized requirements and any other available information about what is important to the customer. Faced with time and resource constraints, a test strategy accepts this reality and tells how to make the best use of the available resources to locate most of the worst defects. Without a test strategy, one is apt to waste time on less fruitful testing and to miss out on some of the most powerful testing options. The test strategy should be created at about the middle of the design phase, as soon as the requirements have settled down.

Testing Plan
A testing plan is simply the part of the project plan that deals with the testing tasks. It details who will do which task, starting when, ending when, taking how much effort, and depending on which other tasks.

The testing plan provides a complete list of all the things that need to be done for testing, including all the preparation work during the phases before testing. It shows the dependencies among the tasks so as to clearly establish a critical path without surprises. The details of the testing plan should be filled in as soon as the test strategy is completed. Both the test strategy and the testing plan are subject to change as the project evolves.
Test Cases
Test cases are prepared based on the strategy, which tells how much of each type of testing to do. They are developed from the prioritized requirements and acceptance criteria for the software, keeping in mind the customer's emphasis on quality dimensions and the project's latest risk assessment of what could go wrong. Except for a small amount of ad hoc testing, all of the test cases should be prepared in advance of the start of testing.

There are many different approaches to developing test cases. Test case development is an activity performed in parallel with software development. It is just as difficult to do a good job of coming up with test cases as it is to program the system itself.

Levels of testing
 Unit Testing
 Integration Testing
 Functional Testing
 Navigation Testing
 Interface Mechanism Testing
 Form Testing

Unit Testing
In unit testing, each module is tested before the integration of the overall system is done. Unit testing is the verification effort on the smallest unit of software design, the module.
It is also known as module testing. The modules of the system are tested separately, and this testing is carried out during programming itself. A sketch of a unit test for the split module is given after the feature list below.

Table 3.1 Unit Testing

S. No. | Test Condition | Input | Expected Output | Result
1 | Check data processing module working properly | Upload the file | File is split into four files and uploaded across the servers | Success
2 | Check index module working properly | Split file | Generate the MAC for each file and store it in the database | Success
3 | Check the third party module is working properly | Specify the server | Check all the servers and return each server's status, whether it is active or not | Success
4 | Check the regeneration module is working properly | File | Regenerate the MAC for the file based on its content | Success

Test objectives
 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.
Features to be tested
 Verify that the entries are of the correct format.
 No duplicate entries should be allowed.
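As an illustration of the first test condition in Table 3.1, a unit test for a toy split function might look like the following; the function under test is a simplified stand-in, not the application's actual code.

import unittest

def split_into_four(data):
    # Four roughly equal parts; the last part absorbs any remainder.
    n = len(data) // 4
    return [data[:n], data[n:2 * n], data[2 * n:3 * n], data[3 * n:]]

class SplitModuleTest(unittest.TestCase):
    def test_produces_four_parts(self):
        self.assertEqual(len(split_into_four(b'abcdefghijkl')), 4)

    def test_split_is_lossless(self):
        data = b'abcdefghijkl'
        self.assertEqual(b''.join(split_into_four(data)), data)

if __name__ == '__main__':
    unittest.main()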
Integration testing
Integration testing is the phase in software testing in which individual software modules are combined and tested as a group. It occurs after unit testing and before validation testing.

Integration testing is testing in which a group of components is combined to produce output. If software and hardware components are related in any way, the interaction between them is also tested. In simple words, integration testing is the testing of all integrated modules to verify their combined functionality after integration.

It may fall under both white-box testing and black-box testing. Modules are typically code modules, individual applications, client and server applications on a network, and so on.

Here, integration is the process of assembling unit-tested modules. We need to test the following aspects, which were not addressed previously while independently testing the modules:

Interfaces: To ensure “interface integrity,” the transfer of data between modules is tested. When data is passed to another module by way of a call, there should not be any loss or corruption of data. Loss or corruption of data can happen due to a mismatch or difference in the number or order of the calling and receiving parameters.

Module combinations may also produce different behaviour due to combinations of data that are not exercised during unit testing.

Table 3.2 Integration Testing

S. No. | Test Condition | Input | Expected Output | Result
1 | Check the upload form correctly passes through to the next form | File | Upload form successfully running | Success
2 | Check the third party auditor module is passed to another page | Select the file | Third party auditor module successfully running | Success
3 | Check the indexing module is passed to another page | Split files | Indexing module running successfully | Success
4 | Check the regeneration code module is passed to another page | File content | Regeneration module running successfully | Success
5 | Check the data integrity module is running | File content | Data integrity checked | Success

Functional Testing
Functional testing has to be performed to make sure that the offering provides the services the user is paying for. Functional tests ensure that the business requirements are being met. The following functional tests were performed on this project:
 System Verification Testing: This ensures that the various modules function correctly with one another, thus making sure that their behaviour is as expected. This testing was successfully completed on this project.
 Acceptance Testing: Here the cloud-based solution is handed over to the users to make sure it meets their expectations. This testing was successfully completed on this project.
 Interoperability Testing: Any application must have the flexibility to work without issues not only on different platforms, but also when moving from one cloud infrastructure to another. This testing was successfully completed on this project.
Interface Mechanism Testing
Interface testing is a type of software testing which verifies whether the communication between two different software systems is done correctly. A connection that integrates two components is called an interface.

In the computing world, this interface could be anything like APIs, web services, etc. Testing of these connecting services or interfaces is referred to as interface testing.

An interface is actually software that consists of sets of commands, messages, and other attributes that enable communication between a device and a user. When a user interacts with a web application, the interaction occurs through one or more mechanisms, which are called interface mechanisms. Testing done within these mechanisms is interface mechanism testing.

Form Testing
Testing of forms has been done at two different levels, i.e. at a minimum level and at a more targeted level. At the minimum level I have tested:

 Whether labels have been correctly defined for the fields.
 Whether the server receives all the information contained in the form and no data is lost in transmission between client and server.
 Whether appropriate default values are available when the user does not select any item in a selection box.
 Whether the scripts that perform data validation on the client side are working properly.

At the more targeted level I have tested:

 Whether text fields have the proper width to enter data.
 Whether text fields allow string lengths greater than the specified length.
 Whether the tab order among the different controls is in the required order.

Navigation Testing
The job of navigation testing is to ensure that the navigation mechanisms are functional, and to validate that each navigation semantic unit can be achieved by the appropriate user category. In this project, navigation testing was done in the following areas:
 Navigation links were thoroughly tested.
 Redirects were properly checked.
 Whether the target page of each navigation link is correct.

CHAPTER IV

4. SYSTEM IMPLEMENTATION

Implementation of software refers to the final installation of the package in the real environment, to the satisfaction of the intended users and the operation of the system. Users are often not sure that the software is meant to make their job easier, so:

 The active user must be made aware of the benefits of using the system.
 Their confidence in the software must be built up.
 Proper guidance must be imparted to the users so that they are comfortable using the application.

4.1 Introduction
The implementation plan is updated throughout the development phase. It includes the test plan, the training plan, the equipment installation plan and the conversion plan. Implementation is the process of bringing the developed system into operational use and turning it over to the user; implementation activities extend from planning through conversion from the old system to the new system.

The test plan covers how to test the performance of the system using input data whose expected output is already known. For this purpose, sample data is prepared and tested for output performance.

The implementation phase is less creative than system design and should not disturb the functioning of the organization. The training plan covers how to train the user personnel so that they find the system easy to handle and are confident with its working; in the modules developed, all of the data entry screens are presented in a highly legible form.

The user can easily understand the working procedure after a single training session. Output can also be generated just as easily, so no extra training is needed. The equipment installation plan consists of planning how to install the required computers and peripherals properly and at the lowest possible cost.

The software requirement analysis should focus on what the software is to accomplish, rather than on how processing will be implemented. The implementation view should not necessarily be interpreted as a representation of how; rather, the implementation model represents the current mode of operation, that is, the existing or proposed allocation of all system elements.

4.2 Pre Implementation

Implementation is the most crucial stage in achieving a successful system and giving the users confidence that the new system is workable and effective. Here, a modified application is implemented to replace an existing one. This type of conversion is relatively easy to handle, provided there are no major changes in the system.

Each program is tested individually at the time of development using sample data, and it is verified that the programs link together in the way specified in the program specification. The computer system and its environment are then tested to the satisfaction of the user.

Process of Coding, Testing and Implementation

Implementation is the most crucial stage in achieving a successful system and giving the user confidence that the new system is workable and effective. In the coding process, the physical design specifications are turned into working computer code, which is then exercised in the testing process.

Once the code is done, testing is performed using the various strategies. The code may also be tested by parallel operation, which means that testing can proceed alongside coding without affecting it. The installation process requires the software and a database; these are the primary requirements for installation.

4.3 Post Implementation

Taking the above mentioned factors into consideration, the proposed system has been judged feasible and recommended for implementation. After careful study and analysis of the system, the major functionalities were identified and the system was organized into modules accordingly.

The system that has been developed has been accepted and proved satisfactory for the user, and so it is going to be implemented very soon. A simple operating procedure is included so that the user can understand the different functions clearly and quickly.

4.4 Performance Implementation

Performance is taken as part of the implementation. Performance is perceived as the response time for queries, report generation and process-related activities. This project performed well at all levels. The modules of this application produce good and accurate results, and the tasks do not introduce errors during processing.

An implementation case describes an input and compares the observed output with the expected output to determine the outcome of the test case. If they differ, there is a failure and it must be identified. Software implementation must also consider the impact of hardware failure on software processing.

4.5 Project Maintenance

Project maintenance is a matter of practicing some very simple values throughout the course of the project.

It means being very intentional about tracking progress towards milestones and goals, instead of assuming everything will happen as planned. There are four types of maintenance:

 Corrective maintenance
 Adaptive maintenance
 Perfective maintenance
 Preventive maintenance

Corrective maintenance
Corrective maintenance is concerned with fixing errors that are observed when the software is in use. It deals with the repair of faults or defects found in day-to-day system functioning.

A defect can result from errors in software design, logic or coding. The need for corrective maintenance is usually initiated by bug reports drawn up by the users.

Adaptive Maintenance
Adaptive maintenance is the implementation of changes in a part of the system which has been affected by a change that occurred in some other part of the system. It consists of adapting the software to changes in the environment, such as the hardware or the operating system.

It is concerned with changes made to the software so that it can operate in a new environment, such as running on a new operating system.

Perfective Maintenance
Perfective maintenance is concerned with changes to the software that occur while new functionality is added. It deals with implementing new or changed user requirements. It involves making functional enhancements to the system, as well as activities that increase the system's performance even when the changes have not been prompted by faults. This includes enhancing both the functionality and the efficiency of the code, and changing the functionality of the system.

Preventive Maintenance
Preventive maintenance involves implementing changes to prevent the occurrence of errors. It tends to reduce software complexity, thereby improving program understandability and increasing software maintainability. It comprises documentation updating, code optimization and code restructuring.

CHAPTER V
5. RESULTS
5.1 Conclusion
In this project, a TPA-based integrity verification and data recovery scheme has been proposed, which helps reduce computation time delay and traffic mismatch errors. The system mainly depends on a Third Party Auditor (TPA), which verifies the status of the servers at regular intervals for lost connections or data. The system is more efficient, analyses data records better, and takes less time; it gives better results in time consumption and reduced computation overhead compared to previous results.

This DIP scheme maintains transparency between the end user and the cloud service provider by enforcing tight security on the client side, so that users can be satisfied about the security of their data. Given the popularity of outsourcing storage to cloud servers, it is necessary to enable clients to verify the integrity of their data in the cloud. Our DIP scheme preserves fault tolerance and saves repair traffic.

5.2 Future Enhancement

In future work, backup or replication of the TPA can provide faster data retrieval and indexing. Additional security can be added to the system to better protect the privacy of user data and files. An efficient machine learning algorithm such as AdaBoost can be implemented, which will help reduce time consumption and increase the accuracy of retrieval.

In the future we will also focus on making the storage and retrieval of data automatic, that is, auto backup with tighter data security. The system is designed for text files, so in the future we will extend it to audio and video files as well.

APPENDIX

Screenshots

App 1.1 View File

App 1.2 Upload File

App 1.3 Split File

App 1.4 Share Files

App 1.5 Generate MAC

App 1.6 Server Status

App 1.7 Failed Server

App 1.8 Failed Server Content

App 1.9 Data Integrity Checking


