Bonfring International Journal of Software Engineering and Soft Computing, Vol. 9, No. 2, April 2019
Secure Sharing of Sensitive Data on a Big Data
Platform
R. Logeswari and V. Manimaran
Abstract--- Users store vast amounts of sensitive data on a big data
platform. Sharing sensitive data will help enterprises reduce the cost
of providing users with personalized services and offer value-added
data services. However, secure data sharing is problematic. This paper
proposes a framework for secure sensitive data sharing on a big data
platform, including secure data delivery, storage, usage, and
destruction on a semi-trusted big data sharing platform. We present a
proxy re-encryption algorithm based on heterogeneous ciphertext
transformation and a user process protection method based on a virtual
machine monitor, which together support the realization of the system
functions. The framework protects the security of users' sensitive
data effectively and shares these data safely. At the same time, data
owners retain complete control of their own data in a sound
environment for modern Internet information security.
Keywords--- Secure Sharing, Sensitive Data, Big Data,
Proxy Re-encryption, Private Space.
I. INTRODUCTION
With the rapid development of data digitization, huge amounts of
structured, semi-structured, and unstructured data are produced
quickly. By collecting, organizing, analyzing, and mining these data,
an enterprise can obtain a large amount of individual users' sensitive
data. These data not only satisfy the needs of the enterprise itself,
but also provide services to other organizations if the data are
stored on a big data platform. Traditional cloud storage simply stores
plaintext or encrypted data passively. Such data can be considered
"dead", since they are not involved in computation. By contrast, a big
data platform permits the exchange of data (including sensitive data).
It provides computational service operations (for example, data
encoding, transformation, or storage encryption) on data used by
participants, which can animate "dead" data. An example of such an
application is shown in Fig. 1 to illustrate the flow of sensitive
data on such a platform.
In Fig. 1, we treat a user's preferences as sensitive
R. Logeswari, PG Scholar, Department of CSE, Nandha Engineering
College (Autonomous), Erode, India. E-mail: logeswariloki93@gmail.com
V. Manimaran, Asst. Professor, Department of CSE, Nandha Engineering
College (Autonomous), Erode, India. E-mail: manimaran.v@nandhaengg.org
DOI:10.9756/BIJSESC.9028
data. When Alice submits a query (sportswear), the Search Engine
Service Provider (SESP) first searches for Alice's preferences on the
big data platform. If the big data platform has collected and shared
the user's personal preference information, "badminton", then the
search engine returns personalized results (sportswear + badminton) to
Alice. When Alice sees
Fig. 1: Application of Sensitive Data
her favorite badminton sportswear, she enjoys a pleasant purchase
experience. Thus, this leads to a win-win situation. However, while
data sharing increases enterprise assets, Internet insecurity and the
potential for sensitive data leakage also create security issues for
sensitive data sharing.
Secure sensitive data sharing involves four essential safety factors.
First, there are security issues when sensitive data are transmitted
from a data owner's local server to a big data platform. Second, there
can be sensitive data computing and storage security issues on the big
data platform. Third, there are secure sensitive data usage issues on
the cloud platform. Fourth, there are issues involving secure data
destruction. Research organizations and scholars at home and abroad
have made positive contributions to analysis and research aimed at
solving these security issues.
Existing technologies have partially solved data sharing and privacy
protection issues from various perspectives, but they have not
considered the whole process over the full data security life cycle.
However, a big data platform is a complete system with multi-partner
participation, and therefore cannot tolerate any security breach
resulting in sensitive data loss. In this paper, we investigate
security issues spanning the entire sensitive data sharing life cycle
and describe a system model created to guarantee secure sensitive data
sharing on
ISSN 2277-5099 | © 2019 Bonfring
a big data platform, to ensure secure storage on the big data platform
using Proxy Re-Encryption (PRE) technology, and to guarantee secure
use of shared sensitive data using a private space process based on a
Virtual Machine Monitor (VMM). In addition, a security module and a
data self-destruction mechanism help to ease user concern regarding
leakage of sensitive personal information. The rest of this paper is
organized as follows. Section 2 describes related work. Section 3
proposes a systematic framework for secure sensitive data sharing.
Section 4 explains secure submission and storage of sensitive data
based on PRE in detail. Section 5 gives our solution for ensuring
secure use of sensitive data based on a VMM. Conclusions are drawn in
Section 6.
II. RELATED WORK
In this section, we focus on previous work on relevant
topics such as encryption, access control, trusted computing,
and data security destruction technology in a cloud storage
environment.
Data Encryption and Access Control of Cloud Storage
Regarding encryption technology, Attribute-Based Encryption (ABE)
algorithms include Key-Policy ABE (KP-ABE) [1] and Ciphertext-Policy
ABE (CP-ABE) [2]. ABE decryption rules are contained in the encryption
algorithm, avoiding the costs of frequent key distribution in
ciphertext access control. However, when the access control strategy
changes dynamically, the data owner is required to re-encrypt the
data. A method based on PRE is proposed in Ref. [3]. A semi-trusted
agent with a proxy key can re-encrypt ciphertext; however, the agent
cannot obtain the corresponding plaintext or compute the decryption
key of either party in the authorization process [4]. A Fully
Homomorphic Encryption (FHE) mechanism is proposed in Ref. [5]. The
FHE mechanism permits specific algebraic operations on ciphertext that
yield a still-encrypted result. More specifically, retrieval and
comparison of the encrypted data produce correct results, yet the data
are never decrypted throughout the entire process. The FHE scheme
requires very considerable computation, and it is not always easy to
implement with existing technology. Regarding ciphertext retrieval
with a view toward data security protection in the cloud, ciphertext
retrieval solutions are proposed in Refs. [6-8]. With respect to
access control, a new cryptographic access control scheme,
Attribute-Based Access Control for Cloud Storage (AB-ACCS), is
proposed in Ref. [9]. Each user's private key is labeled with a set of
attributes, and data are encrypted with an attribute condition that
allows the user to decrypt the data only if their attributes satisfy
the data's condition. Distributed systems with Decentralized
Information Flow Control (DIFC) [10] use a tag to track data based on
a set of simple data tracking rules. DIFC allows untrusted software to
use private data, but uses trusted code to control whether the private
data can be revealed. In Ref. [11], the authors consider the
complexity of fine-grained access control for a large number of users
in the cloud and propose a secure and efficient revocation scheme
based on a modified CP-ABE algorithm. This algorithm is used to
establish fine-grained access control in which users are revoked by
Shamir's theory of secret sharing. With Single Sign-On (SSO), any
authorized user can sign in to the cloud storage system using a
standard common application interface.
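The attribute-condition idea behind CP-ABE and AB-ACCS can be separated from the cryptography for illustration. The sketch below (plain Python, with a hypothetical toy policy format that is not from the cited schemes) only checks whether a user's attribute set satisfies the condition attached to the data; real ABE enforces the same check inside decryption itself.

```python
# Toy policy check mirroring the attribute-condition idea of CP-ABE:
# a user can open the data only if their attribute set satisfies the
# condition attached to the data. The nested-tuple policy format is an
# assumption for this sketch, not part of any cited scheme.
def satisfies(user_attrs: set, condition) -> bool:
    # condition: ("and"|"or", item, item, ...) where each item is an
    # attribute string or a nested condition.
    op, *items = condition
    results = (item in user_attrs if isinstance(item, str)
               else satisfies(user_attrs, item) for item in items)
    return all(results) if op == "and" else any(results)

# A doctor in cardiology or oncology may decrypt; anyone else may not.
policy = ("and", "doctor", ("or", "cardiology", "oncology"))
assert satisfies({"doctor", "cardiology"}, policy)
assert not satisfies({"nurse", "cardiology"}, policy)
```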
Trusted Computing and Process Protection
The Trusted Computing Group (TCG) introduced the Trusted Platform
Module (TPM) in its current architecture to guarantee that a general
trusted computing platform using TPM security features is authentic.
In academia, the main research approach involves first building a
trusted terminal platform based on a security chip, and then
establishing trust between platforms through remote attestation.
Trust is then extended to the network. Integrity measurement is the
essential technical means for building a trusted terminal platform.
Research on virtual platform measurement technology includes the
HIMA [12] and HyperSentry [13] metric architectures. Using virtual
platform isolation features, HIMA measures the integrity of a virtual
machine by checking the virtual machine's memory. HyperSentry
completes the integrity measurement using a hardware mechanism. TCG
issued the Trusted Network Connection (TNC) architecture specification
version 1.0 [14] in 2005, characterized by using terminal integrity as
a criterion for network access control. Chinese researchers have
conducted research on trusted network connections based on the TNC
architecture [15]. Starting by establishing the trust of the terminal
platform, Feng et al. [16] proposed a reliability-based trust model
and gave a technique for building a trust chain dynamically with
information flow. Zhang et al. [17] proposed a transparent,
backward-compatible approach that protects the privacy and integrity
of customers' virtual machines on commodity virtualized
infrastructures. Dissolver is a prototype system based on a Xen VMM
and a Confidentiality and High-Assurance Equipped Operating System
(CHAOS) [18-21]. It guarantees that the user's plaintext data exist
only in a private working space and that the user's key exists only in
the memory space of the VMM. Data in memory and the user's key are
destroyed at a user-specified time.
Data Destruction
Wang et al. [22] proposed a secure destruction scheme for electronic
data. Another scheme, SelfVanish, is proposed in Ref. [23]. This
scheme prevents hopping attacks by extending the lengths of key shares
and significantly increasing the cost of mounting an attack. To
prevent sensitive information from leaking when an emergency occurs,
Dong et al. [24] proposed a real-time sensitive safe data destruction
system. The open-source cloud computing storage system, Hadoop
Distributed File System (HDFS), cannot destroy data completely, which
may lead to data leaks. To fix this defect, Qin et al. [25] designed a
multi-grade safe data destruction mechanism for HDFS. In Ref. [26],
the authors proposed security management over the whole data lifecycle
and used a mandatory data destruction protocol to control user data.
To the best of our knowledge, few studies focus on the sharing of
sensitive data on a big data platform. In Ref. [27], Razick et al.
provided a common framework for classifying and sharing both open and
private data, but they do not examine data computation on a big data
platform. In this paper, we discuss the issues of data storage,
computing, use, and destruction.
Systematic Framework for Secure Sensitive Data Sharing
Issuing and leasing sensitive data on a semi-trusted big data platform
requires a data security mechanism. Building secure channels for a
full sensitive data life cycle requires consideration of four aspects
of safety: dependable submission, safe storage, riskless use, and
secure destruction. A systematic framework for secure sensitive data
sharing on a big data platform is shown in Fig. 2.
A typical and prominent strategy for guaranteeing data submission
security on a semi-trusted big data platform is to encrypt data before
submitting them to the platform. Several operations (for example,
encryption, decryption, and authorization) are provided by a security
module. A cloud platform service provider (for example, a SESP) using
data on the big data platform guarantees data security by downloading
and using the security module, since decrypted cleartext would
otherwise leak users' private data.
Fig. 2: Systematic Framework for Secure Sensitive Data Sharing on a Big Data Platform
Hence, we need to adopt process protection technology based on a VMM:
through a trusted VMM layer, we bypass the guest operating system and
provide data protection directly to the user process. The key
management module of the VMM stores the public keys of newly
registered program groups. When a program is running, the symmetric
key at the base of the main program is decrypted dynamically by the
key management module. All uses of the public and symmetric keys take
place in the memory of the VMM.
The index, replication, and backup mechanisms of cloud storage create
data redundancy, which requires a suitable data destruction scheme to
delete the user's private personal data. To achieve high security, we
designed a lease-based mechanism that destroys private data and keys
completely in a controlled way. After the lease expires, cleartext and
keys exist nowhere in the cloud.
The basic flow of the framework is as follows. First, enterprises that
hold individual users' sensitive data pre-set the service providers
permitted to share this sensitive data, and then submit and store the
corresponding encrypted data on the big data platform using the local
security module. Second, the required operations are performed on the
submitted data using PRE on the big data platform. Then, cloud
platform service providers that need to share the sensitive data
download and decrypt the corresponding data in the private process
space, with the secure module for sensitive security data running in
that space. Finally, a secure mechanism destroys used data still
stored temporarily in the cloud. In short, the framework effectively
protects the security of the full sensitive data life cycle, while
data owners retain full control over their own data. Next, we discuss
the core PRE algorithm based on heterogeneous ciphertext
transformation and the user process protection method using the VMM.
III. SECURE SUBMISSION AND STORAGE OF SENSITIVE
DATA BASED ON H-PRE
H-PRE consists of three types of algorithms: traditional
identity-based encryption (including SetupIBE, KeyGenIBE,
EncIBE, and DecIBE), re-encryption (including the KeyGenRE,
ReEnc, and ReDec functions), and traditional public-key
cryptosystems (including KeyGenPKE, EncPKE, and DecPKE). The basic
H-PRE process is very simple. The data owner encrypts the sensitive
data using a local security plug-in and then uploads the encrypted
data to the big data platform. The data are transformed into
ciphertext that can be decrypted by a specified user after PRE
services. If an SESP is a specified user, then the SESP can decrypt
the data using its own private key to obtain the corresponding
cleartext. We complete the following steps to implement the H-PRE
algorithm.
Then, PRE ciphertext, which can be decrypted by the (authorized) data
users, is generated. If a data user wants to use the data on the big
data platform, the user sends a data request to the platform and
queries whether the corresponding data exist in the shared space. If
such data exist, the data user accesses and downloads them. The
operation on the big data platform is independent of and transparent
to users.
Moreover, the computing resources of the big data platform
are more powerful than those of the client. Hence, we can put
PRE computational overhead on the big data platform to
improve user experience. The PRE system includes data
submission, storage (sharing), and data extraction.
1. SetupIBE(k): Input a security parameter k, randomly generate a
   master key mk, and calculate the system parameter set params using
   a bilinear map and hash functions.
2. KeyGenIBE(mk, params, id): When the user requests a private key
   from the key generation center, the center obtains the legal
   identity (id) of the user and generates the identity-based public
   and private keys (pkid, skid) for the user using params and mk.
3. KeyGenPKE(params): When a user submits a request, the key
   management center not only generates the identity-based public and
   private keys, but also generates the public and private keys
   (pk', sk') of the traditional public-key system.
4. EncIBE(pkid, skid, params, m): When encrypting data, the data
   owner encrypts the cleartext m into the ciphertext c = (c1, c2)
   using the owner's own (pkid, skid) and a random number r ∈R Z*.
5. KeyGenRE(skidi, sk'idj, pkidi, params): When the data owner
   authorizes user j, this function generates the PRE key
   (rkidi→idj), which is used to transform the original ciphertext.
6. ReEnc(ci, rkidi→idj, params): This process is executed
   transparently on the big data platform. The function re-encrypts
   the ciphertext that user i encrypted into ciphertext that user j
   can decrypt. It takes as input ci = (ci1, ci2), the PRE key
   rkidi→idj, and the related system parameters; the big data
   platform then computes and outputs the PRE ciphertext
   cj = (cj1, cj2).
7. DecPKE(cj, sk'idj, params): This function decrypts the PRE
   ciphertext. After receiving the PRE ciphertext cj = (cj1, cj2)
   from the proxy server of the big data platform, user j recovers
   the cleartext (m' = m) using his or her own sk'idj.
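The re-encryption step at the heart of this scheme can be illustrated with a much simpler construction. The following sketch is a toy BBS98-style ElGamal proxy re-encryption over a small prime group, not the paper's identity-based H-PRE (which relies on bilinear maps): it shows only how a proxy holding rk_{i→j} transforms user i's ciphertext into one that user j can decrypt, without the proxy ever learning the plaintext or either private key.

```python
# Toy BBS98-style ElGamal proxy re-encryption over Z_P*. Parameters are
# illustrative only (a 127-bit Mersenne prime, generator assumed to be 3);
# this is a sketch of the re-encryption idea, not a secure implementation
# and not the paper's H-PRE scheme.
import secrets
from math import gcd

P = 2**127 - 1          # Mersenne prime; multiplicative group Z_P*
G = 3                   # assumed generator (adequate for illustration)
Q = P - 1               # exponents live modulo the group order

def keygen():
    while True:
        sk = secrets.randbelow(Q - 2) + 2
        if gcd(sk, Q) == 1:             # sk must be invertible mod Q
            return sk, pow(G, sk, P)    # (sk, pk = g^sk)

def enc(pk, m):
    r = secrets.randbelow(Q - 2) + 2
    return (m * pow(G, r, P) % P,       # c1 = m * g^r
            pow(pk, r, P))              # c2 = pk^r = g^(sk*r)

def rekey(sk_i, sk_j):
    return sk_j * pow(sk_i, -1, Q) % Q  # rk_{i->j} = sk_j / sk_i mod Q

def reenc(c, rk):
    c1, c2 = c
    return (c1, pow(c2, rk, P))         # g^(sk_i*r) -> g^(sk_j*r)

def dec(sk, c):
    c1, c2 = c
    shared = pow(c2, pow(sk, -1, Q), P) # recover g^r
    return c1 * pow(shared, -1, P) % P

sk_a, pk_a = keygen()
sk_b, pk_b = keygen()
m = 123456789
c_a = enc(pk_a, m)                      # owner encrypts under own key
c_b = reenc(c_a, rekey(sk_a, sk_b))     # proxy transforms, learns nothing
assert dec(sk_b, c_b) == m              # authorized user decrypts
```

As in the scheme above, the proxy's input is only the ciphertext and the re-encryption key; the transformation never exposes m.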
Submission, Storage, and Extraction Operations of System Sensitive
Data
The data owner encrypts data locally, first using the Advanced
Encryption Standard (AES) symmetric encryption algorithm to encrypt
the submission data and then using the PRE algorithm to encrypt the
symmetric key of the data. These results are all stored in the
distributed data centers. Meanwhile, if the data owner shares the
sensitive data with other users, the data owner must authorize the
sensitive data locally and generate the PRE key, which is stored in
the authorization key server.
Re-encryption operations on the big data platform. The PRE server (1)
randomly generates an AES symmetric encryption key (Symmetric
Encryption Key, SEK) and then uses the AES algorithm to encrypt the
data files; (2) uses the PRE algorithm to encrypt the SEK and stores
the data ciphertext and SEK ciphertext in the data centers; (3)
identifies from the data owner the users designated to share the data;
(4) uses the security module to read the private key of the data owner
and obtain the data user's public key from the big data platform; (5)
uses the security module to generate the corresponding PRE key using
the KeyGenRE function and to upload the PRE key to the authorization
key server of the big data platform; and (6) re-encrypts the data
using the ReEnc function on the big data platform, thereby generating
PRE ciphertext.
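The submission and storage steps follow an envelope pattern: a random SEK encrypts the bulk data, and only the small SEK is handed to the PRE layer. The sketch below illustrates that structure with a SHA-256 counter-mode keystream standing in for AES (an assumption made to keep the sketch self-contained; it is not a production cipher).

```python
# Envelope sketch of the hybrid submission/extraction flow: a random
# symmetric key (SEK) encrypts the bulk data; only the SEK would be
# encrypted with PRE and stored alongside. A SHA-256 counter-mode
# keystream stands in for AES here.
import hashlib, secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # XOR data against SHA-256(key || counter) blocks; the same call
    # both encrypts and decrypts.
    out = bytearray()
    for block in range(0, len(data), 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block:block + 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

def submit(data: bytes):
    sek = secrets.token_bytes(32)            # step (1): random SEK
    ciphertext = keystream_xor(sek, data)    # step (1): encrypt data files
    # step (2) onward: the SEK itself would be encrypted with the PRE
    # algorithm, and both ciphertexts stored in the data centers.
    return sek, ciphertext

def extract(sek: bytes, ciphertext: bytes) -> bytes:
    # extraction steps (5)-(6): after DecPKE recovers the SEK, the data
    # user decrypts the data ciphertext with it.
    return keystream_xor(sek, ciphertext)

sek, ct = submit(b"sensitive user preferences: badminton")
assert ct != b"sensitive user preferences: badminton"
assert extract(sek, ct) == b"sensitive user preferences: badminton"
```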
Data extraction operations. After receiving the data download request,
the Web browser invokes the security module and provides data download
services to the data user, according to the following detailed steps.
The program (1) queries whether there is an authorization for the data
user on the PRE server of the big data platform and, if an
authorization exists, proceeds to Step (2); (2) uses the download
module to send data download requests to the big data platform, which
then finds the PRE ciphertext data in the data center; (3) pushes the
PRE ciphertext to the secure data module on the big data platform; (4)
invokes the data user's download module to read the user's private key
and prepares to decrypt data; (5) invokes the data user's download
plug-in to decrypt the received SEK ciphertext using the DecPKE
function and obtain the AES symmetric key; and (6) permits the data
user to decrypt the data ciphertext using the AES symmetric key to
obtain the required cleartext.
The data extraction operation is put into the private space
of a user process by the secure plug-in, a prerequisite for
secure use of sensitive data.
IV. SECURE USE OF SENSITIVE DATA ON VMM
The Private Space of a User Process based on a VMM
To guarantee secure running of an application in the cloud, we use the
private space of a user process based on a VMM. We assume that some
enterprise (for example, a SESP) leases Infrastructure as a Service
(IaaS) to conduct some business. The business process needs to extract
sensitive personal data from the big data platform. We call the
protected program that extracts sensitive data from the big data
platform a sensitive process. A threat model of a sensitive process on
a cloud platform is shown in Fig. 3. A sensitive process must guard
against threats from a management VM and an unreliable operating
system layer beneath it. The leased base hardware uses the TPM mode,
ensuring that the VMM is trusted. In this case, the key management
component of the tenant (for example, a SESP) must build this
relationship on trust in the VMM, guaranteeing safe operation under
the unreliable operating system.
The introduction of virtualization and trusted computing technology
ensures that service provider applications and the secure module run
in the process private space. This mode can protect the privacy of
sensitive data and avoid interference from outside programs, even the
operating system. A safe operation process is shown in Fig. 4.
Fig. 3: Threat Model of a Sensitive Process in a Cloud Platform
With PRE ciphertext obtained from the big data platform and extracted
onto a cloud platform, the private memory space of processes on the
cloud platform can protect data security in memory and on the Hard
Disk Drive (HDD). First, the VMM provides private memory space for a
specified VM process. The process runs in private memory space that
cannot be accessed by the operating system or other applications. This
memory isolation technique guarantees data privacy and security in
memory. Second, the data used and stored on disk are ciphertext. The
VMM decrypts or encrypts when reading or writing data, respectively.
Consequently, the combination of these two measures can be guaranteed
by the VMM, whether the user program runs in memory or is stored on
disk.
Secure Use of System Sensitive Data
We use process protection technology based on a VMM: through a trusted
VMM layer, we bypass the guest operating system, providing data
protection directly to the user process. To ensure data security
during interaction on the cloud platform, the following steps must be
completed.
1. Establishing a trusted environment and channels. During the booting
process, the cloud platform needs to measure startup software through
trusted computing technology. Consequently, cloud users (SESPs) must
guarantee the integrity of the VMM; in other words, cloud users must
ensure that the VMM is trusted. After the booting process, the cloud
server stores the Basic Input/Output System (BIOS), Grand Unified
Bootloader (GRUB), and VMM measurements in the Platform Configuration
Register (PCR) of the TPM chip, and then sends a remote attestation to
the user to establish the trust relationship between them.
Fig. 4: Safe Operation Process
The SESP must establish a secure channel with the VMM in the cloud,
and then obtain sensitive data securely from
the big data platform. The remote attestation and handshaking protocol
between the SESP and the VMM in the cloud is shown in Fig. 5.
In fact, the VMM responds to the request at the cloud server end.
First, the SESP sends an integrity request to the cloud server,
including the SESP public key (PKid) and a timestamp (TS). Second, the
VMM generates a session key (Ksess) and computes the hashed value of
TS, PKid, and Ksess using the Secure Hash Algorithm (SHA1). Then, the
VMM calls the TPM quote instruction and passes the hashed value and
the PCR as arguments to obtain the testimony (Quote) using the TPM
private key signature. The VMM uses PKid to encrypt Ksess and then
sends the encrypted Ksess, Quote, and a Certification Authority (CA)
certificate to the other side. The SESP verifies the values of TS,
PKid, and Ksess after receiving this information. If the values are
consistent, the communications are secure. As a result, both sides of
the communication agree on a session key. Thereafter, all
communication between the two sides is encrypted using the session
key.
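A minimal sketch of this handshake follows, with an HMAC under a stand-in attestation key playing the role of the TPM's quote signature (a real quote is an RSA/ECC signature that the SESP verifies via the CA certificate; the key-exchange encryption of Ksess under PKid is omitted).

```python
# Sketch of the Fig. 5 handshake: the VMM hashes TS || PKid || Ksess
# with SHA1, "quotes" that hash together with the PCR value, and the
# SESP checks both. HMAC under a shared stand-in key simulates the
# TPM signature for self-containment.
import hashlib, hmac, secrets

AIK = secrets.token_bytes(32)      # stand-in for the TPM attestation key
PCR = hashlib.sha1(b"BIOS|GRUB|VMM measurements").digest()

def vmm_respond(ts: bytes, pk_id: bytes):
    ksess = secrets.token_bytes(16)                     # session key
    digest = hashlib.sha1(ts + pk_id + ksess).digest()  # SHA1(TS,PKid,Ksess)
    quote = hmac.new(AIK, digest + PCR, hashlib.sha256).digest()
    # Ksess would be encrypted under PKid before sending; omitted here.
    return ksess, digest, quote

def sesp_verify(ts, pk_id, ksess, digest, quote) -> bool:
    ok_digest = hashlib.sha1(ts + pk_id + ksess).digest() == digest
    ok_quote = hmac.compare_digest(
        hmac.new(AIK, digest + PCR, hashlib.sha256).digest(), quote)
    return ok_digest and ok_quote

ts, pk_id = b"2019-04-01T00:00:00Z", b"SESP-public-key"
ksess, digest, quote = vmm_respond(ts, pk_id)
assert sesp_verify(ts, pk_id, ksess, digest, quote)     # handshake succeeds
assert not sesp_verify(b"stale", pk_id, ksess, digest, quote)
```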
2. Data upload and extraction
The cloud users (SESP) extract the sensitive data from the
big data platform through retrieval. We assume that the cloud is
untrusted. The uploaded executable application and data must
be encrypted before the SESP uses the cloud. The upload and
extract protocol of the data is shown in Fig. 6.
In Fig. 6, the SESP generates the AES symmetric key and a pair of
asymmetric keys (PKapp, SKapp) using the tools, encrypts the
executable files and data files using the AES symmetric key, and
encrypts the AES key with the asymmetric key; the encrypted key is
attached at the end of the application files. The data obtained from
the big data platform are PRE ciphertext, which can be decrypted
during runtime. The command format of the new program must be
identified when registering the program. The user encrypts the PKid,
registration command, application name, public key (PKapp), and
predetermined lease using Ksess, and then sends them to the VMM.
Finally, the encrypted executable files and data files are uploaded to
the cloud server.
Fig. 6: Upload and Extract Protocol of the Data
3. Program execution
In the process of application execution on the cloud
platform, dynamic data protection and encryption are
similar to the protection of process memory space[18–
21], as shown in Fig. 4. During process execution, the
occupied memory process cannot be accessed by other
processes and operating systems. The VMM serves as
the bridge of data exchange between the operating
system and the user process. When the OS copies the
data from the user memory space, the VMM, not the
operating system, performs the copying operation,
because the operating system lacks read and write
privileges. When the data are copied into the private
memory space of the process, the VMM decrypts the
data using the corresponding AES symmetric key. Thus,
the data can be computed normally. Conversely, when
the data in the private memory space of the process are
copied to the outside, the VMM encrypts the data using
the corresponding AES symmetric key. Hence, the user
data stored on disk is in ciphertext form.
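The copy-in/copy-out rule above can be sketched as follows, with XOR against a per-process key standing in for the AES symmetric key; the point is only that data crossing the private-space boundary is always transformed by the VMM, never by the OS.

```python
# Sketch of the VMM as the bridge between OS and user process: it
# decrypts on copy into the private memory space and encrypts on copy
# out, so the OS and disk only ever see ciphertext. XOR with a
# per-process key is a stand-in for the AES symmetric cipher.
import secrets

class VMM:
    def __init__(self):
        self._keys = {}                       # per-process key stand-ins

    def register(self, pid: int):
        self._keys[pid] = secrets.token_bytes(32)

    def _xor(self, pid: int, data: bytes) -> bytes:
        key = self._keys[pid]
        return bytes(b ^ key[i % 32] for i, b in enumerate(data))

    def copy_in(self, pid: int, ciphertext: bytes) -> bytes:
        # OS hands over ciphertext; the private space receives plaintext.
        return self._xor(pid, ciphertext)

    def copy_out(self, pid: int, plaintext: bytes) -> bytes:
        # Private space emits plaintext; OS and disk only see ciphertext.
        return self._xor(pid, plaintext)

vmm = VMM()
vmm.register(7)
on_disk = vmm.copy_out(7, b"decrypted PRE data")   # leaves private space
assert on_disk != b"decrypted PRE data"            # disk holds ciphertext
assert vmm.copy_in(7, on_disk) == b"decrypted PRE data"
```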
In a word, in the private space of a user process, the
security plug-in decrypts PRE data from the big data
platform, and the VMM decrypts data from the cloud
user (SESP). The generated data are encrypted when the
user process is completed, and then the data is destroyed
according to the terms of the lease. Therefore, the private
space of the user process acts as a balance point of the
security mechanism between the data owner and user,
benefiting both while preventing sensitive information
leakage.
V. CONCLUSIONS
Fig. 5: Remote Attestation and Hand Shaking Protocol
between the SESP and the VMM in the Cloud
In summary, we proposed a systematic framework for secure sharing of
sensitive data on a big data platform, which guarantees secure
submission and storage of sensitive data based on the heterogeneous
proxy re-encryption algorithm, and ensures secure use of cleartext on
the cloud platform through the private space of a user process based
on the VMM. The proposed framework effectively protects the security
of users' sensitive data. Meanwhile, data owners retain full control
over their own data, which is a feasible solution for balancing the
benefits of the involved parties under semi-trusted conditions. In
the future, we will refine the heterogeneous proxy re-encryption
algorithm to further improve the efficiency of encryption. Moreover,
reducing the communication overhead among the involved parties is
also an important direction for future work.
REFERENCES
[1] S. Yu, C. Wang, K. Ren, and W. Lou, "Attribute based data sharing with attribute revocation", in Proc. 5th ACM Symposium on Information, Computer and Communications Security, Beijing, China, pp. 261-270, 2010.
[2] J. Bethencourt, A. Sahai, and B. Waters, "Ciphertext-policy attribute-based encryption", in Proc. IEEE Symposium on Security and Privacy, Oakland, USA, pp. 321-334, 2007.
[3] J. Li, G. Zhao, X. Chen, D. Xie, C. Rong, W. Li, L. Tang, and Y. Tang, "Fine-grained data access control systems with user accountability in cloud computing", in Proc. 2nd Int. Conf. on Cloud Computing, Indianapolis, USA, pp. 89-96, 2010.
[4] L. Wang, L. Wang, M. Mambo, and E. Okamoto, "New identity-based proxy re-encryption schemes to prevent collusion attacks", in Proc. 4th Int. Conf. on Pairing-Based Cryptography (Pairing), Ishikawa, Japan, pp. 327-346, 2010.
[5] C. Gentry, "A fully homomorphic encryption scheme", Ph.D. dissertation, Stanford University, California, USA, 2009.
[6] S. Ananthi, M.S. Sendil, and S. Karthik, "Privacy preserving keyword search over encrypted cloud data", in Proc. 1st Advances in Computing and Communications, Kochi, India, pp. 480-487, 2011.
[7] H. Hu, J. Xu, C. Ren, and B. Choi, "Processing private queries over untrusted data cloud through privacy homomorphism", in Proc. 27th IEEE Int. Conf. on Data Engineering, Hannover, Germany, pp. 601-612, 2011.
[8] N. Cao, C. Wang, M. Li, K. Ren, and W. Lou, "Privacy-preserving multi-keyword ranked search over encrypted cloud data", in Proc. 30th IEEE INFOCOM, Shanghai, China, pp. 829-837, 2011.
[9] C. Hong, M. Zhang, and D. Feng, "AB-ACCS: A cryptographic access control scheme for cloud storage (in Chinese)", Journal of Computer Research and Development, vol. 47, no. 1, pp. 259-265, 2010.
[10] N. Zeldovich, S. Boyd-Wickizer, and D. Mazieres, "Securing distributed systems with information flow control", in Proc. 5th USENIX Symposium on Networked Systems Design and Implementation, pp. 293-308, 2008.
[11] Z. Lv, C. Hong, M. Zhang, and D. Feng, "A secure and efficient revocation scheme for fine-grained access control in cloud storage", in Proc. 4th IEEE Int. Conf. on Cloud Computing Technology and Science, pp. 545-550, 2012.
[12] A.M. Azab, P. Ning, E.C. Sezer, and X. Zhang, "HIMA: A hypervisor-based integrity measurement agent", in Proc. 25th Annual Computer Security Applications Conf., Hawaii, USA, pp. 461-470, 2009.
[13] A.M. Azab, P. Ning, Z. Wang, X. Jiang, X. Zhang, and N.C. Skalsky, "HyperSentry: Enabling stealthy in-context measurement of hypervisor integrity", in Proc. 17th ACM Conf. on Computer and Communications Security, pp. 38-49, 2010.
[14] Trusted Computing Group, "TNC architecture for interoperability", http://www.trustedcomputinggroup.org/resources/tnc architecture for interoperability specification, 2014.
[15] H. Zhang, L. Chen, and L. Zhang, "Research on trusted network connection (in Chinese)", Chinese Journal of Computers, vol. 33, no. 4, pp. 706-717, 2010.
[16] D. Feng, Y. Qin, D. Wang, and X. Chu, "Research on trusted computing technology (in Chinese)", Journal of Computer Research and Development, vol. 48, no. 8, pp. 1332-1349, 2011.
[17] F. Zhang, J. Chen, H. Chen, and B. Zang, "CloudVisor: Retrofitting protection of virtual machines in multi-tenant cloud with nested virtualization", in Proc. 23rd ACM Symposium on Operating Systems Principles, pp. 203-216, 2011.
[18] X. Chen, T. Garfinkel, E.C. Lewis, and B. Spasojevic, "Overshadow: A virtualization-based approach to retrofitting protection in commodity operating systems", in Proc. 13th Int. Conf. on Architectural Support for Programming Languages and Operating Systems, pp. 2-13, 2008.