Data Integrity Protection in Cloud Computing
CHAPTER I
1. INTRODUCTION
A user who wants to access cloud services needs to connect to the internet. Once
connected, the user can access almost everything from the cloud, i.e. ubiquitous
computing.
The ultimate task of the cloud is to provide a shared network between the
owner and the user. The cloud manages resources and provides them to requesting
clients based on authentication. Cloud storage is one of the important services
whose use has increased greatly in recent years.
Cloud storage provides storage space for user data. A great advantage is that it
reduces the burden on the data owner of storing large amounts of data on the local
system. Thus, moving large files from the local system to the cloud improves the
performance of the local system.
Because the data is stored at a remote location, it is exposed to intruders, and the
confidentiality and integrity of the data can be breached. This reduces trust in the
cloud service provider. CSPs apply several measures to secure the data, but these
never cover every aspect, even as more and more owners start to store their data in
the cloud.
However, this new paradigm of data hosting also introduces new security
challenges. Owners worry that their data could be lost in the cloud, because data
loss can happen in any infrastructure, no matter what degree of reliability measures
cloud service providers take. Sometimes, cloud service providers might even be
dishonest.
A provider could discard data that has not been accessed, or is rarely accessed, to
save storage space, and still claim that the data is correctly stored in the cloud.
Therefore, owners need to be convinced that their data is correctly stored in the
cloud.
People can easily work together in a group by sharing and storing data in the
cloud. To protect the integrity of data in the cloud, a number of mechanisms have
been proposed. In these mechanisms, a signature is attached to each block of data,
and the integrity of the data relies on the correctness of all the signatures.
The public auditor could be a client who would like to utilize cloud data for
particular purposes (e.g. search, computation, data mining, etc.) or a Third Party
Auditor (TPA) who provides verification services on data integrity to users.
With shared data, once a user modifies a block, that user also needs to compute a
new signature for the modified block. Due to modifications by different users,
different blocks end up signed by different users.
The proposed mechanism allows a public auditor to efficiently check data
integrity in the cloud without downloading the entire data. It preserves the
confidentiality of the shared data by using a proxy re-signature mechanism.
In this mechanism, the blocks that were previously signed by a revoked user
are re-signed by an existing user. For security, a secret key is provided at login.
Public verification techniques allow users to outsource their data to the cloud while
the consistency of the data is checked by a trusted third party called the auditor.
The objective of the public verification scheme is to prevent external adversaries
from attacking the data outsourced by the owner.
The systems approach is a way of thinking about the analysis and design of
computer-based applications. It provides a framework for visualizing the
organizational and environmental factors that operate on a system. When a computer
is introduced into an organization, its functions affect the users as well as the
organization. To ensure that the proposed system is not a burden to the company, a
feasibility analysis requires some understanding of the major requirements of the
system.
Technical feasibility
Economical feasibility
Operational feasibility
The technical study determines whether an acceptable system can be achieved with
the current technical resources, i.e. whether the resources available in the
organization are capable of handling the user requirements. For example, if
particular software works only on a computer with a higher configuration, additional
hardware is required. This involves a financial consideration, and if the budget is a
serious constraint, the proposal will not be considered feasible.
The system requires a machine running the Windows 8.1 operating system, which has
already been installed, together with an internet or intranet connection. Developing
the project also needs a frontend such as NetBeans and a backend such as MySQL,
both of which are already installed.
The hardware and software requirements of the proposed system are already
available to the user. The system is technically feasible, since the development and
implementation of this project are technically possible and require nothing beyond
what is already available in the organisation.
The existing setup is sufficient for implementing the system, and in terms of
manpower for software development it is adequate; no extra personnel are required.
The system is developed at a cost lower than the gain expected from it.
This aspect of the study checks the level of acceptance of the system by the
user. It includes the process of training users to use the system effectively. The
user must not feel threatened by the system but must accept it as a necessity. The
level of acceptance by the users depends solely on the methods employed to educate
them about the system and to make them familiar with it.
Their level of confidence must be raised so that they are also able to offer
constructive criticism, which is welcomed from the final users of the system.
Reasonable acceptance from the user side is necessary for the success of the
system. Since users welcome such a system, it is found to be operationally feasible.
This system provides users with easy access to store data on the cloud and also
ensures the integrity of the uploaded data. The users' data is therefore secured and
remains easy to access, so the project is operationally feasible.
1.6 System Design
System design is the most creative and challenging part of the system life cycle.
Design commences once the logical model of the existing system is available, and
begins by using the identified system problems as the basis for developing
objectives for the new system.
The design phase proceeds in two steps: logical design and physical design. The
logical design of the system is an abstract representation of the data flows, inputs
and outputs of the system.
The primary objective of the design phase is to design a system which delivers the
functions required by the client to support the business objectives of the
organization. There are a number of objectives that must be considered if a good
design is to be produced. During logical design, a new logical model is developed.
The new logical model includes any processes, or changes to existing processes,
that are necessary to meet the system objectives. During physical design, decisions
are made on which processes remain manual and which are to be computerized.
Architecture Design
Input Design
Output Design
Database Design
UML Diagram
Fig 1.1 defines the relationships between the major structural elements of the
software and the design patterns that can be used to achieve the requirements
defined for the system.
The diagram is divided into three parts: the user, the trusted authority and the
third party auditor. The data is stored in the cloud and in the database. There are
five modules. In data segmentation, the system splits the data into four parts and
then generates a message authentication code (MAC) for each segment. Data
regeneration occurs when some data is missing, and data integrity protection
verifies the integrity of the data.
Inaccurate input data is a common cause of errors in data processing, so the input
screens are carefully and logically designed. Different fields are used for data
entry and data access, which makes data entry as easy as possible. While entering
data, validation checks are performed and a message is generated by the system in
case of incorrect data entry. Some of the features are:
The input data is validated to minimize data entry errors. For example, when the
user tries to upload an image file, a message is shown asking for a text file
only.
Appropriate messages are provided to inform the user about invalid entries. When
the upload button is clicked before a file is selected, a message such as
'please select the file' is displayed.
A fixed format is used for displaying titles and messages, and the same text
style and format are used throughout the project.
The title of each form clearly states its purpose; each form has a title such as
Upload, TPA, etc.
A heading for each data item is clearly given.
Adequate space is given for each data item.
Forms are not crowded, since crowded forms are difficult to read and validate;
the forms are clearly designed.
Every time the user interacts with the system, it is easy to access. Each output
design clearly describes its process. Efficient and intelligent output design
improves the user's relationship with the system and helps in decision making.
Output testing includes reports in specific formats, displays of enquiries, and a
simple profile of the database. When the user uploads a file, it is uploaded to the
cloud; the user only needs to specify the file, and the system itself splits the
file, generates the MAC values and stores them.
Database design helps produce database systems that meet the requirements of the
users and have high performance.
The main objective of database design is to produce the logical and physical design
models of the proposed database system. The logical model concentrates on the data
requirements and the data to be stored, independent of physical considerations.
It does not concern itself with how or where the data will be stored physically.
The physical design model involves translating the logical design of the database
onto physical media using hardware resources and software systems such as a
database management system (DBMS).
This table is used to store the MAC value of each server. The MAC value is used to
check the integrity of the data: when the system regenerates the MAC of a failed
server, the regenerated value must match the value stored in this table.
UML is linked with object oriented design and analysis. UML uses elements and forms
associations between them to build diagrams. Diagrams in UML can be broadly
classified as:
Structural Diagrams – capture static aspects or the structure of a system.
Structural diagrams include component diagrams, object diagrams, class
diagrams and deployment diagrams.
Behavior Diagrams – capture dynamic aspects or the behavior of the system.
Behavior diagrams include use case diagrams, state diagrams, activity
diagrams, interaction diagrams and block diagrams; they specify the behavior
of the system.
Figure 1.2 shows three actors: the client, the TPA and the server. The client can
only upload data and contact the trusted authority to download it. The server
contacts the trusted authority and the public auditor, and the TPA is responsible
for data integrity and data recovery.
Class Diagram
Class diagram is a static diagram. It represents the static view of an
application. Class diagram is not only used for visualizing, describing, and
documenting different aspects of a system but also for constructing executable code
of the software application.
Class diagram describes the attributes and operations of a class and also the
constraints imposed on the system. Class diagrams are widely used in the modeling
of object oriented systems because they are the only UML diagrams which can be
mapped directly to object-oriented languages.
There are four classes in the project: Client, Trusted Authority, TPA and File
Storage. These classes are interrelated. The Client class has two attributes, the
user file and the file attributes (the name and size of the file), and one
operation, upload file. The Trusted Authority class has two variables, the user
file and the client connection.
The Trusted Authority has three operations: receive files, contact the TPA, and
save user files on the cloud. The TPA class has one variable, the access key, and
two operations: check the status of each server and respond back. The File Storage
class has one variable, the server details, and four operations: data index, key
index, recovery data and recovery index.
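The class structure described above can be sketched in Java roughly as follows. The class and member names mirror the description in this section, but the exact signatures in the project may differ, so this is an illustrative outline only, not the project's actual code.

import java.util.List;

class Client {
    byte[] userFile;          // contents of the file to upload
    String fileName;          // file attribute: name of the file
    long fileSize;            // file attribute: size of the file

    void uploadFile(TrustedAuthority ta) {
        ta.receiveFile(fileName, userFile);   // the client's only operation
    }
}

class TrustedAuthority {
    byte[] userFile;          // file received from the client
    Client clientConnection;  // connection to the requesting client

    void receiveFile(String name, byte[] data) { this.userFile = data; }
    void contactTPA(TPA tpa) { tpa.checkServerStatus(); }
    void saveUserFile(FileStorage storage) { /* split and store on the cloud */ }
}

class TPA {
    String accessKey;         // key used to audit the servers

    boolean checkServerStatus() { return true; }   // poll each server
    void responseBack(TrustedAuthority ta) { }      // report the result
}

class FileStorage {
    List<String> serverDetails;  // details of the four storage servers

    int dataIndex(String file) { return 0; }                   // index of a data block
    int keyIndex(String file) { return 0; }                    // index of its MAC entry
    byte[] recoveryData(int server) { return new byte[0]; }    // data used for recovery
    int recoveryIndex(int server) { return 0; }                // index used for recovery
}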
Interaction Diagram
As the term interaction suggests, this diagram is used to describe some type of
interaction among the different elements in the model. This interaction is part of
the dynamic behavior of the system.
An activity diagram, in contrast, does not show message flow from one activity to
another. The activity diagram is sometimes considered a flowchart; although the
diagrams look like flowcharts, they are not. It shows different flows such as
parallel, branched, concurrent and single flows.
In fig 1.5, first the user uploads the file via the trusted authority, and the
files are stored on the server. The TPA keeps checking with the trusted authority
whether the servers are active or not. The uploaded files are then split into
several parts and stored on the servers. When downloading the files, the integrity
of the data is checked using the TPA.
System requirements are often used as a guideline as opposed to an absolute rule.
Most software defines two sets of system requirements: minimum and recommended.
Industry analysts suggest that this trend plays a bigger part in driving
upgrades to existing computer systems than technological advancements. A second
meaning of the term system requirements is a generalisation of this first
definition, giving the requirements to be met in the design of a system or
sub-system. In the specifications, the latest hardware and software must be
proposed to enable faster retrieval of information. This involves two concepts,
as follows:
Hardware Specification
Software Specification
Hardware Requirements
A good hardware selection plays a vital role in the development of an
application. The most common set of requirements defined by any operating system
or software application is the physical computer resources, also known as hardware.
Software Requirements
Software requirements deal with defining software resource requirements and
prerequisites that need to be installed on a computer to provide optimal functioning
of an application.
NetBeans is coded in Java and runs on most operating systems with a Java
Virtual Machine (JVM), including Solaris, Mac OS, and Linux.
An IDE is much more than a text editor. The NetBeans Editor indents lines,
matches words and brackets, and highlights source code syntactically and
semantically. It lets you easily refactor code, with a range of handy and powerful
tools, while it also provides code templates, coding tips, and code generators.
The editor supports many languages from Java, C/C++, XML and HTML, to PHP,
Groovy, Javadoc, JavaScript and JSP. Because the editor is extensible, you can plug
in support for many other languages.
NetBeans IDE can be installed on all operating systems that support Java,
from Windows to Linux to Mac OS X. Write Once, Run Anywhere is as true for
NetBeans IDE as it is for your own applications, because NetBeans IDE itself is
written in Java.
JAVA
Java programming language was originally developed by Sun Microsystems
which was initiated by James Gosling and released in 1995 as core component of
Sun Microsystems' Java platform.
The latest release of the Java Standard Edition is Java SE 8. With the
advancement of Java and its widespread popularity, multiple configurations were
built to suit various types of platforms.
Java is
Object Oriented − In Java, everything is an Object. Java can be easily
extended since it is based on the Object model.
Platform Independent − Unlike many other programming languages including C and
C++, when Java is compiled it is not compiled into platform-specific machine
code, but into platform-independent byte code. This byte code is distributed
over the web and interpreted by the Java Virtual Machine (JVM) on whichever
platform it is run.
Simple − Java is designed to be easy to learn. If you understand the basic
concepts of OOP, Java is easy to master.
Secure − Java's security features enable the development of virus-free,
tamper-free systems. Authentication techniques are based on public-key encryption.
Architecture-neutral − Java compiler generates an architecture-neutral
object file format, which makes the compiled code executable on many
processors, with the presence of Java runtime system.
Portable − Being architecture-neutral and having no implementation
dependent aspects of the specification makes Java portable. Compiler in Java
is written in ANSI C with a clean portability boundary, which is a POSIX
subset.
Robust − Java makes an effort to eliminate error prone situations by
emphasizing mainly on compile time error checking and runtime checking.
Multithreaded − With Java's multithreaded feature it is possible to write
programs that can perform many tasks simultaneously. This design feature
allows the developers to construct interactive applications that can run
smoothly.
Interpreted − Java byte code is translated on the fly to native machine
instructions and is not stored anywhere. The development process is more
rapid and analytical since the linking is an incremental and light-weight
process.
MySQL Database
MySQL is a fast, easy-to-use RDBMS being used for many small and big
businesses. MySQL is developed, marketed and supported by MySQL AB, which is
a Swedish company. MySQL is becoming so popular because of many good reasons.
CHAPTER II
2. PROJECT DESCRIPTION
Integrity, in terms of data security, is the guarantee that data can only be
accessed or modified by those authorized to do so; in simple words, it is the
process of verifying data. Data integrity is very important among the other cloud
challenges, as it guarantees that the data is of high quality, correct and
unmodified.
After storing data in the cloud, users depend on the cloud to provide reliable
services and hope that their data and applications are kept secure. That hope may
fail: sometimes the users' data may be altered or deleted. Cloud service providers
may be dishonest and may discard data which has not been accessed, or is rarely
accessed, to save storage space, or keep fewer replicas than promised.
Moreover, the cloud service providers may choose to hide data loss and claim
that the data are still correctly stored in the Cloud. As a result, data owners need to be
convinced that their data are correctly stored in the Cloud. So, one of the biggest
concerns with cloud data storage is that of data integrity verification at untrusted
servers. This project gives a way to solve the above problems.
In this project, the user first selects the file to be uploaded to the cloud. The
project uses four servers to store the data: the user's data is split into four
different parts and uploaded to the four servers. The data on each server is then
split again into three parts and stored on the other servers. The uploaded data is
divided based on its binary size. A MAC value is then created for each server based
on its content, and after that the data is uploaded to storage. The user's data is
now stored on different servers with different content.
When the user wants to download the data, the user requests the trusted
authority for the download. The trusted authority then takes the data from the
servers and gives it to the user. If some data is missing, or any one of the four
servers has crashed, the system starts the data recovery process.
In data recovery, the data of the lost server is fetched from the remaining
servers. After collecting the data, the system again generates the MAC value for
the lost data. If the newly generated MAC matches the old one, the data integrity
is verified and the lost data is recovered.
The recovery process is performed by the TPA (Third Party Auditor). The third party
auditor checks all the servers at a particular time interval; if any server is lost
or crashes, the TPA starts the recovery process. In the existing system, the data is
stored on only two servers, one original server and one replica server, so if the
data is lost it is very difficult to recover it in a short time.
In this mechanism, the blocks which were previously signed by a revoked user are
re-signed by an existing user. For security, a secret key is provided at login. The
project uses a database only to store the MAC value of each server, and is developed
using NetBeans.
This project addresses the problems of security and reliability. A multi-cloud
implementation is used, where the data is stored in two different clouds; therefore,
even when a single cloud drops entirely, the user does not suffer data loss, because
the data can be retrieved from the other cloud.
Program Design
If upload is requested:
Generate the per-file secrets.
Split the file into four parts according to size.
Encrypt each chunk with Blowfish.
Store the chunks on the four respective cloud servers.
Update the metadata file and upload it.
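As a rough illustration of these steps, the following Java sketch splits a file into four parts, encrypts each chunk with Blowfish and computes a MAC for it before handing it to a placeholder storage routine. The algorithm choices (Blowfish via javax.crypto, HmacSHA256 for the MAC), the file path and the storeOnServer helper are assumptions made for the example, not the project's actual code.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.Mac;
import javax.crypto.SecretKey;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;

public class UploadPipeline {

    public static void main(String[] args) throws Exception {
        byte[] file = Files.readAllBytes(Paths.get("input.txt")); // file chosen by the user

        // Generate the per-file secrets: one Blowfish key and one MAC key.
        SecretKey blowfishKey = KeyGenerator.getInstance("Blowfish").generateKey();
        SecretKey macKey = KeyGenerator.getInstance("HmacSHA256").generateKey();

        // Split the file into four parts according to size.
        byte[][] parts = split(file, 4);

        for (int i = 0; i < parts.length; i++) {
            // Encrypt each chunk with Blowfish.
            Cipher cipher = Cipher.getInstance("Blowfish");
            cipher.init(Cipher.ENCRYPT_MODE, blowfishKey);
            byte[] encrypted = cipher.doFinal(parts[i]);

            // Generate the MAC value for the chunk.
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(macKey);
            byte[] tag = mac.doFinal(encrypted);

            // Store on the i-th cloud server and update the metadata.
            storeOnServer(i, encrypted, tag);
        }
    }

    // Split data into n parts of (size / n) bytes; the last part takes the remainder.
    static byte[][] split(byte[] data, int n) {
        byte[][] parts = new byte[n][];
        int chunk = data.length / n;
        for (int i = 0; i < n; i++) {
            int from = i * chunk;
            int to = (i == n - 1) ? data.length : from + chunk;
            parts[i] = Arrays.copyOfRange(data, from, to);
        }
        return parts;
    }

    static void storeOnServer(int server, byte[] data, byte[] mac) {
        // Placeholder: the real system uploads the chunk to one of the four
        // cloud servers and records the MAC value in the metadata/database.
    }
}

The per-file secrets are generated fresh for each upload, so losing one server never exposes a key that protects other files.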
2.2 Modules
Data Processing
Indexing
Check Attributes
Storage Server
File Sender
Fig 2.1 illustrates these modules.
2.2.2 Meta Indexing
In this module, meta indexing is implemented using a data structure that supports
dynamic data update operations, in which the data owner stores the block index and
the block's logical location for each block of the outsourced file.
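A minimal sketch of such a meta-index structure is shown below, assuming one entry per block that records the block index together with its logical location (server and offset); the class and field names are hypothetical.

import java.util.ArrayList;
import java.util.List;

// One meta-index entry per outsourced block: its block index and the
// logical location (server and offset) where that block is stored.
class MetaIndexEntry {
    int blockIndex;       // index of the block within the file
    int serverId;         // which of the four servers holds the block
    long logicalLocation; // offset or slot of the block on that server

    MetaIndexEntry(int blockIndex, int serverId, long logicalLocation) {
        this.blockIndex = blockIndex;
        this.serverId = serverId;
        this.logicalLocation = logicalLocation;
    }
}

// The data owner keeps one list of entries per outsourced file, so a block
// can be located or updated without downloading the whole file.
class MetaIndex {
    private final List<MetaIndexEntry> entries = new ArrayList<>();

    void add(MetaIndexEntry e) { entries.add(e); }

    MetaIndexEntry find(int blockIndex) {
        for (MetaIndexEntry e : entries)
            if (e.blockIndex == blockIndex) return e;
        return null;
    }
}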
Data Index
Uploaded List
File Index
The data index represents the index of the data on the server.
In fig 4.3, the data index, the file index, the TPA and authorized access are shown.
Protecting data from loss and leakage relies on the integrity of the many parties
involved in providing the resources. Schemes and mechanisms are needed to ensure
that the data and information kept in the cloud are not altered or removed. Auditing
techniques such as proof of retrievability (POR) and proof of data possession (PDP)
are suggested to enable verification.
This figure explains the regeneration of the lost server's data. The TPA performs
the recovery process; after recovery, the regenerated MAC value is matched with the
previously stored MAC. If they match, data integrity is verified and the lost data
is recovered.
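The recovery-and-check step might look roughly like the following sketch, which rebuilds the lost server's content from the sub-chunks held by the remaining servers, regenerates the MAC (HmacSHA256 is assumed here) and compares it with the stored value; the method and parameter names are illustrative only.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.io.ByteArrayOutputStream;
import java.security.MessageDigest;

public class RecoveryCheck {

    // Rebuild the failed server's data from the sub-chunks kept on the
    // remaining servers, regenerate its MAC and compare with the stored one.
    static boolean recoverAndVerify(byte[][] subChunksFromOtherServers,
                                    byte[] macKey, byte[] storedMac) throws Exception {
        ByteArrayOutputStream rebuilt = new ByteArrayOutputStream();
        for (byte[] chunk : subChunksFromOtherServers) {
            rebuilt.write(chunk);                       // reassemble the lost content
        }

        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(macKey, "HmacSHA256"));
        byte[] regenerated = mac.doFinal(rebuilt.toByteArray());

        // Compare the regenerated MAC with the stored one.
        return MessageDigest.isEqual(regenerated, storedMac);
    }
}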
Hardware failures can happen at any time, including failures caused by
environmental events such as a natural disaster, flood or fire. A hardware design
should be built with redundancy and a minimum of single points of failure. At the
design phase, the analyst creates a physical hardware map that shows all the
connection points for servers, storage, network and software.
2.3 Algorithm
MAC
The MAC algorithm is a symmetric key cryptographic technique used to provide
message authentication. To establish the MAC process, the sender and receiver share
a symmetric key K. Some of its features are described below.
Uses of MAC
A MAC provides authentication; encryption can additionally be used for secrecy.
Separate keys should be used for each purpose, and the MAC can be computed either
before or after encryption, although computing it before encryption is generally
regarded as better.
MAC Properties
A MAC is a cryptographic checksum,
MAC = C_K(M),
which condenses a variable-length message M, using a secret key K, into a
fixed-size authenticator. For example, the message can be encrypted using DES in
CBC mode and just the final block (or the leftmost M bits, 16 ≤ M ≤ 64, of the
final block) sent as the MAC.
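The DES-in-CBC-mode construction just described can be illustrated with the short Java sketch below, which encrypts the message with a zero IV and keeps only the final cipher block as the authenticator. The key, message and zero padding are assumptions made for the example; this is a teaching sketch, not a secure production MAC.

import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

public class CbcMacExample {

    // Computes MAC = C_K(M): encrypt the message with DES in CBC mode (zero IV)
    // and keep only the final cipher block as the fixed-size authenticator.
    static byte[] cbcMac(byte[] message, byte[] desKey) throws Exception {
        Cipher cipher = Cipher.getInstance("DES/CBC/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE,
                new SecretKeySpec(desKey, "DES"),
                new IvParameterSpec(new byte[8]));

        // Zero-pad the message to a multiple of the 8-byte DES block size.
        int padded = ((message.length + 7) / 8) * 8;
        byte[] block = Arrays.copyOf(message, padded);

        byte[] ciphertext = cipher.doFinal(block);
        // The last 8-byte block condenses the whole variable-length message.
        return Arrays.copyOfRange(ciphertext, ciphertext.length - 8, ciphertext.length);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "8bytekey".getBytes();          // shared symmetric key K
        byte[] message = "hello integrity".getBytes();

        byte[] senderMac = cbcMac(message, key);     // sender computes the MAC
        byte[] receiverMac = cbcMac(message, key);   // receiver recomputes it

        // If the two values match, the receiver accepts the message.
        System.out.println(Arrays.equals(senderMac, receiverMac));
    }
}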
The sender uses some publicly known MAC algorithm, inputs the message
and the secret key K and produces a MAC value.
Similar to a hash, the MAC function also compresses an arbitrarily long input
into a fixed length output. The major difference between a hash and a MAC is
that the MAC uses a secret key during the compression.
The sender forwards the message along with the MAC. Here, we assume that
the message is sent in the clear, as we are concerned with providing message
origin authentication, not confidentiality. If confidentiality is required, the
message needs encryption.
On receipt of the message and the MAC, the receiver feeds the received
message and the shared secret key K into the MAC algorithm and re-
computes the MAC value.
The receiver now checks equality of freshly computed MAC with the MAC
received from the sender. If they match, then the receiver accepts the
message and assures himself that the message has been sent by the intended
sender.
If the computed MAC does not match the MAC sent by the sender, the
receiver cannot determine whether it is the message that has been altered or
it is the origin that has been falsified.
2.4 Framework
The application is developed with the NetBeans framework, using a WAMP server and
MySQL as the backend for connectivity. The framework of this system involves:
View Files
Upload Files
Split Files
Share Files
Generate MAC
Server Status
Failed Server
Data Integrity Checking
First the size of the file is converted into binary and the value is divided by
four; this is how the data is split. Refer to app 1.3, where the file is split into
four different files and a 'successfully submitted' message box appears.
The system then generates the MAC value for each part. This MAC value is used for
data integrity verification and is stored in the MySQL database. Refer to app 1.5
for details.
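Storing the generated MAC value in the MySQL database could be done with a small JDBC routine like the sketch below; the database name, table name, columns and credentials are hypothetical and would need to match the project's actual schema (the MySQL Connector/J driver is assumed to be on the classpath).

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class MacStore {

    // Store the MAC value computed for one server's content into MySQL.
    static void saveMac(int serverId, String macHex) throws Exception {
        String url = "jdbc:mysql://localhost:3306/dipdb";   // hypothetical database
        try (Connection con = DriverManager.getConnection(url, "root", "password");
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO mac_store (server_id, mac_value) VALUES (?, ?)")) {
            ps.setInt(1, serverId);
            ps.setString(2, macHex);
            ps.executeUpdate();
        }
    }
}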
It clearly shows that server three has crashed and recovery needs to be performed,
because the file cannot be downloaded when any one of its parts is missing.
Clicking the Load button moves to the next form, the failed server content.
In this form, the data from the other servers is combined. Clicking Recovery MAC
then regenerates the MAC for the recovered content.
When Data Integrity Checking is clicked, the recovered MAC is checked against the
old MAC; if they match, data integrity is confirmed and the lost data is recovered.
Refer to app 1.9 for details.
Here the client first uploads data such as 'abcdefghijkl'.
Next the data is split into four parts and shared among four servers S1, S2, S3
and S4.
Now S1 holds the data 'abc', S2 holds 'def', S3 holds 'ghi' and S4 holds 'jkl'.
Next each part is split again and distributed among the other three servers. For
example, S1's data is split into three sub-parts and stored among S2, S3 and S4,
and S2's data is split into three parts and stored among S1, S3 and S4.
The remaining data is stored similarly. This picture shows how the data is stored
on the four servers.
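The walkthrough above can be reproduced with the following small Java sketch, which performs the first split into four primary parts and then distributes each part's sub-pieces to the three other servers (one character per sub-piece is assumed purely for simplicity).

import java.util.Arrays;

public class DistributionExample {

    public static void main(String[] args) {
        String data = "abcdefghijkl";          // uploaded data from the walkthrough
        int servers = 4;

        // First split: one primary part per server (abc, def, ghi, jkl).
        String[] primary = new String[servers];
        int len = data.length() / servers;
        for (int i = 0; i < servers; i++) {
            primary[i] = data.substring(i * len, (i + 1) * len);
        }
        System.out.println("Primary parts: " + Arrays.toString(primary));

        // Second split: each server's part is cut into three sub-parts and
        // stored on the three *other* servers.
        for (int i = 0; i < servers; i++) {
            int sub = 0;
            for (int j = 0; j < servers; j++) {
                if (j == i) continue;          // never store a sub-part on its own server
                String piece = primary[i].substring(sub, sub + 1);
                System.out.println("S" + (j + 1) + " also keeps '" + piece
                        + "' from S" + (i + 1));
                sub++;
            }
        }
    }
}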
CHAPTER III
3. TESTING METHODOLOGY
Testing is the process of exercising software with the intent of ensuring that the
software meets its system requirements and user expectations and does not fail in
an unacceptable manner. Testing is the process of executing a program with the
intent of finding errors, and it presents an interesting anomaly for software
engineering.
3.1 System testing
System testing is a stage of implementation aimed at ensuring that the system works
accurately and efficiently as expected before live operation commences. It verifies
that the whole set of programs works together. System testing requires test plans
consisting of several key steps, and the implementation of the newly designed
package is important in adopting the new system successfully.
The objective of this testing is to discover errors. To fulfil this objective, a
series of test steps — unit, integration, validation and output testing — was
planned and carried out.
3.2 Types of Testing
The goal of testing is to improve the program's quality. Quality is assured
primarily through software testing, whose history goes back to the beginning of the
computing field.
Testing is done at two levels: testing individual modules and testing the entire
system. During system testing, the system is run experimentally to ensure that the
software behaves according to the specification and in the way the user expects.
Testing is tedious and time consuming; each test case is designed with the intent
of finding errors in the way the system processes data.
Testing objectives
Testing is a process of executing a program with the intent of finding errors.
A good test case is one that has a high probability of finding an as yet
undiscovered error.
A successful test is one that uncovers an as yet undiscovered error.
Test strategy
The purpose of testing is to find defects. A test strategy basically tells which
types of testing seem best to do, the order in which to perform them, the proposed
sequence of execution, and the optimum amount of effort to put into each test
objective to make the testing most effective.
Testing Plan
A testing plan is simply that part of the project plan that deals with the testing
tasks. It gives the details of who will do which tasks, starting when, ending when,
taking how much effort, and depending on which other tasks.
The testing plan provides a complete list of all the things that need to be done for
testing, including all the preparation work during the phases before testing. It
shows the dependencies among the tasks so as to clearly create a critical path
without surprises. The details of the testing plan are filled in as soon as the test
strategy is completed; both the test strategy and the testing plan are subject to
change as the project evolves.
Test Cases
Test cases are prepared based on the strategy, which tells how much of each type of
testing to do. Test cases are developed from the prioritized requirements and
acceptance criteria for the software, keeping in mind the customer's emphasis on
quality dimensions and the project's latest risk assessment of what could go wrong.
Except for a small amount of ad hoc testing, all of the test cases should be
prepared in advance of the start of testing.
There are many different approaches to developing test cases. Test case development
is an activity performed in parallel with software development, and it is just as
difficult to do a good job of coming up with test cases as it is to program the
system itself.
Level of testing
Unit Testing
Integration Testing
Functional Testing
Navigation Testing
Interface Mechanism Testing
Form Testing
Unit Testing
Unit testing is the testing of each module before the integration of the overall
system is done. Unit testing is a verification effort on the smallest unit of
software design, the module.
It is also known as module testing. The modules of the system are tested
separately, and this testing is carried out during programming itself.
Test objectives
All field entries must work properly.
Pages must be activated from the identified link.
The entry screen, messages and responses must not be delayed.
Features to be tested
Verify that entries are of the correct format.
No duplicate entries should be allowed.
Integration testing
Integration testing is the phase in software testing in which individual software
modules are combined and tested as a group. It occurs after unit testing and before
validation testing.
It may fall under both white-box testing and black-box testing. Modules are
typically code modules, individual applications, client and server applications on
a network, etc.
Functional Testing
Functional testing has to be performed to make sure that the offering provides the
services that the user is paying for. Functional tests ensure that the business
requirements are being met. Some of the functional tests performed on this project
are:
System Verification Testing: This checks whether the various modules function
correctly with one another, making sure that their behaviour is as expected.
This testing was successfully completed on this project.
Acceptance Testing: Here the cloud-based solution is handed over to the users
to make sure it meets their expectations. This testing was successfully
completed on this project.
Interoperability Testing: Any application must have the flexibility to work
without issues not only on different platforms but also seamlessly when moving
from one cloud infrastructure to another. This testing was successfully
completed on this project.
Interface Mechanism Testing
Interface testing is a type of software testing which verifies whether the
communication between two different software systems is done correctly; a
connection that integrates two components is called an interface.
When a user interacts with a web application, the interaction occurs through one or
more mechanisms which are called interface mechanisms. Testing done within these
mechanisms is interface mechanism testing.
Form Testing
Testing of forms has been done at two different levels, i.e. at a minimum level and
at a more targeted level. At the minimum level, I have tested for:
Navigation Testing
The job of navigation testing is to ensure that the navigation mechanisms are
functional, and to validate that each navigation semantic unit can be achieved by
the appropriate user category. In this project, navigation testing has been done in
the following areas:
Navigation links are thoroughly tested.
Redirects are properly checked.
Whether the target page of a navigation link is correct or not.
CHAPTER IV
4. SYSTEM IMPLEMENTATION
The active user must be aware of the benefits of using the system.
Their confidence in the software builds up.
Proper guidance is imparted to the users so that they are comfortable in using the
application.
4.1 Introduction
The implementation plan is updated throughout the development phase. It includes
the test plan, the training plan, the equipment installation plan and the
conversion plan. Implementation is the process of bringing a developed system into
operational use and turning it over to the user; implementation activities extend
from planning through conversion from the old system to the new system.
The test plan describes how to test the performance of the system using sample
input data whose correct output is already known. For this purpose, sample data is
prepared and the output performance can be tested.
The implementation phase is less creative than design and should not disturb the
functioning of the organization. The training plan describes how to train the users
or personnel so that they can handle the system easily and be confident with its
working; in the models developed, all of the data entry screens are in a highly
legible form.
The user can easily understand the working procedure after a single training
session, and output can be generated just as easily, so this needs no extra
training. The equipment installation plan consists of planning how to install the
required computer and peripherals in a proper way at the lowest possible cost.
Each program is tested individually at the time of development using test data, and
it has been verified that the programs link together in the way specified in the
program specifications; the computer system and its environment are tested to the
satisfaction of the user.
Once the code is complete, testing is performed using the various strategies. The
code may also be tested by parallel operation, which means that testing can be done
alongside the coding without affecting it. The installation process requires the
software and the database; this is the primary requirement for the whole
installation.
Taking the above mentioned factors into consideration, the proposed system has been
found feasible and is recommended for implementation after careful study and
analysis of the system. The major functionalities are identified in the system and
hence in its modules.
The system that has been developed is accepted and has proved satisfactory for the
user, and so the system is going to be implemented very soon. A simple operating
procedure is included so that the user can understand the different functions
clearly and quickly.
Corrective maintenance
Adaptive maintenance
Perfective maintenance
Preventive maintenance
Corrective maintenance
Corrective maintenance is concerned with fixing errors that are observed when the
software is in use. It deals with the repair of faults or defects found in
day-to-day system functioning.
A defect can result from errors in software design, logic or coding. The need for
corrective maintenance is usually initiated by bug reports drawn up by the users.
Adaptive Maintenance
Adaptive maintenance is the implementation of changes in a part of the system which
has been affected by a change that occurred in some other part of the system. It
consists of adapting the software to changes in the environment, such as the
hardware or the operating system.
It is concerned with changes to the software that make it adaptable to a new
environment, such as running the software on a new operating system.
Perfective Maintenance
Perfective maintenance is concerned with changes to the software that occur when
adding new functionality. It deals with implementing new or changed user
requirements and involves making functional enhancements to the system, in addition
to activities that increase the system's performance even when the changes have not
been prompted by faults. This includes enhancing both the functionality and
efficiency of the code and changing the functionality of the system.
Preventive Maintenance
Preventive maintenance involves implementing changes to prevent the occurrence of
errors. It tends to reduce software complexity, thereby improving program
understandability and increasing software maintainability. It comprises
documentation updating, code optimization and code restructuring.
CHAPTER V
5. RESULTS
5.1 Conclusion
In this project, a TPA based integrity verification and data recovery scheme has
been proposed, which helps reduce computation time delay and traffic mismatch
errors. The system mainly depends on a Third Party Auditor (TPA), which verifies
the status of the servers at regular intervals for lost connections or data. The
system achieves more efficient and faster analysis of data records, and it gives
better results in time consumption and reduced computation overhead compared to
previous results.
This DIP scheme maintains transparency between the end user and the cloud service
provider by enforcing tight security on the client side, so that users can be
satisfied about the security of their data. Given the popularity of outsourcing
storage to cloud servers, it is necessary to enable clients to verify the integrity
of their data in the cloud. Our DIP scheme preserves fault tolerance and saves
repair traffic.
In future work, a backup or replication of the TPA can provide faster data
retrieval and indexing. Adding further security to the system will help protect the
privacy of user data and files. An efficient machine learning algorithm such as
AdaBoost could be implemented in the system to improve time consumption and
increase the accuracy of retrieval.
APPENDIX
Screen Shots
REFERENCES