Securing Data With Blockchain and AI
Master
of
Computer Application
Submitted by
STUDENT_NAME
ROLL_NO
Under the esteemed guidance of
GUIDE_NAME
Assistant Professor
CERTIFICATE
This is to certify that the project report entitled "PROJECT_NAME" is the bonafide record of
project work carried out by STUDENT_NAME, a student of this college, during the academic
year 2014 - 2016, in partial fulfillment of the requirements for the award of the degree of Master
of Computer Application from St. Mary's Group of Institutions Guntur of Jawaharlal Nehru
Technological University, Kakinada.
GUIDE_NAME,
Asst. Professor                    Associate Professor
(Project Guide)                    (Head of Department, CSE)
DECLARATION
We hereby declare that the project report entitled "PROJECT_NAME" is an original work
done at St. Mary's Group of Institutions Guntur, Chebrolu, Guntur, and submitted in partial
fulfillment of the requirements for the award of the degree of Master of Computer Application
to St. Mary's Group of Institutions Guntur, Chebrolu, Guntur.
STUDENT_NAME
ROLL_NO
ACKNOWLEDGEMENT
We consider it a privilege to thank all those who helped us in the successful completion of
the project "PROJECT_NAME". We extend special gratitude to our guide GUIDE_NAME,
Asst. Professor, whose stimulating suggestions and encouragement helped us coordinate our
project, especially in writing this report, and whose valuable guidance and comprehensive
assistance helped us greatly in presenting the project.
We would also like to acknowledge with much appreciation the crucial role of our
Co-ordinator GUIDE_NAME, Asst. Professor, in helping us complete our project. We thank
you for being such a wonderful educator as well as a person.
We express our heartfelt thanks to HOD_NAME, Head of the Department, CSE, for his
generous sharing of knowledge, which helped us bring this project through the academic year.
STUDENT_NAME
ROLL_NO
ABSTRACT:
Data is the input from which artificial intelligence (AI) algorithms mine valuable features, yet data on the Internet is
scattered everywhere and controlled by different stakeholders who do not trust each other, and usage of the data in
complex cyberspace is difficult to authorize or validate. As a result, it is very difficult to enable data sharing in
cyberspace for real big data, and thus for truly powerful AI. In this paper, we propose SecNet, an architecture that
enables secure data storing, computing, and sharing in the large-scale Internet environment, aiming at a more secure
cyberspace with real big data and thus enhanced AI with plentiful data sources, by integrating three key components:
1) blockchain-based data sharing with an ownership guarantee, which enables trusted data sharing in the large-scale
environment to form real big data; 2) an AI-based secure computing platform that produces more intelligent security
rules, which helps construct a more trusted cyberspace; and 3) a trusted value-exchange mechanism for purchasing
security services, which provides a way for participants to gain economic rewards when giving out their data or
services, promoting data sharing and thus better AI performance. Moreover, we discuss a typical usage scenario of
SecNet as well as alternative ways to deploy it, and analyze its effectiveness from the perspectives of network
security and economic revenue.
TABLE OF CONTENTS
TITLE
1. ABSTRACT
2. INTRODUCTION
3. LITERATURE SURVEY
6. ARCHITECTURE DIAGRAM
7. MODULES
9. APPENDICES
CHAPTER 1
SYNOPSIS
INTRODUCTION
With the development of information technologies, the trend of integrating cyber, physical and social (CPS)
systems into a highly unified information society, rather than just a digital Internet, is becoming increasingly
obvious [1]. In such an information society, data is an asset of its owner, and its usage should be under the owner's
full control, although this is not the common case [2], [3]. Since data is undoubtedly the oil of the information
society, almost every big company wants to collect as much data as possible for its future competitiveness [4], [5].
An increasing amount of personal data, including location information, web-searching behavior, user calls and user
preferences, is being silently collected by the built-in sensors inside the products of those big companies, which
creates a huge risk of privacy leakage for data owners [6], [7]. Moreover, the usage of those data is out of the
owners' control, since there is currently no reliable way to record how the data is used and by whom, and thus there
are few methods to trace or punish violators who abuse the data [8]. In other words, the lack of an ability to
effectively manage data makes it very difficult for an individual to control the potential risks associated with the
collected data [9]. For example, once data has been collected by a third party (e.g., a big company), the lack of
access to this data hinders the individual from understanding or managing the risks related to it. Meanwhile, the
lack of immutable records of data usage increases the risk of abuse [10].

If there were an efficient and trusted way to collect and merge the data scattered across the whole CPS to form real
big data, the performance of artificial intelligence (AI) would be significantly improved, since AI can handle
massive amounts of data at the same time, which would bring great benefits (e.g., enhanced security for data) and
may even let AI exceed human capabilities in more areas [11]. According to the research in [12], given data at an
orders-of-magnitude larger scale, even the simplest current AI algorithms (e.g., perceptrons from the 1950s) can
outperform many state-of-the-art technologies of today. The key lies in how to make data sharing trusted and secure
[13]. Fortunately, blockchain technologies may be a promising way to achieve this goal, using network-wide
consensus mechanisms to guarantee tamper-proof data sharing embedded with economic incentives [14], [15].
Thus, AI can be further empowered by blockchain-protected data sharing [16]-[18], and in turn, enhanced AI can
provide better performance and security for data.

In this paper, we aim at securing data by combining blockchain and AI, and design a Secure Networking
architecture (termed SecNet) to significantly improve the security of data sharing, and in turn the security of the
whole network and even the whole CPS. In SecNet, one of the biggest challenges of protecting data is where and
how to store it, because users have to give their data to service providers if they want to use certain services or
applications [1], [3]. This is caused by the inherent coupling of user data and applications in current service
mechanisms, which significantly hinders the development of data protection and application innovation. Inspired
by the concept of the Personal Data Store (PDS) from openPDS [5] and the Private Data Center (PDC) from
HyperNet [1], SecNet adopts the PDC rather than the PDS, as the PDC is better suited to deployment and to this
problem: it provides a more secure and intelligent data storage system via physical entities instead of the
software-based algorithms used in openPDS. Each PDC serves as a secured, centralized physical space in which
each SecNet user's data lives. Embedding the PDC into SecNet allows users to monitor and reason about what
their data is used for, why, and by whom, meaning users can truly control every operation on their own data and
achieve fine-grained management of access to it. Besides the PDC, other choices can also be applied for data
storage in SecNet according to particular requirements (see Section V).

The trustless relationships between different data stakeholders significantly thwart data sharing across the Internet,
so the data used for AI training or analysis is limited in amount and partial in variety. Fortunately, the rise of
blockchain technologies brings a hopeful, efficient and effective way to enable trusted data sharing in a trustless
environment, which can help AI make more accurate decisions based on real big data collected from more places
in the Internet. SecNet leverages the emerging blockchain technologies to prevent the abuse of data and to enable
trusted data sharing in trustless or even untrusted environments. For instance, it can enable cooperation between
different edge computing paradigms to improve the overall performance of edge networks [19]. The reason
blockchain can enable trusted mechanisms is that it provides a transparent, tamper-proof metadata infrastructure
that reliably records all usage of data [17]. Thus, SecNet introduces blockchain-based data sharing mechanisms
with an ownership guarantee: any data ready for sharing must be registered in a blockchain, named the Data
Recording Blockchain (DRB), to announce its availability for sharing. Each access to the data by parties other
than the owner must also be validated and recorded in this chain, and the authenticity and integrity of data can
likewise only be validated through the DRB. In addition, SecNet enables economic incentives between entities
that share data or exchange security services, by embedding smart contracts on data to trigger automatic and
tamper-proof value exchange. In this way, SecNet guarantees data security and encourages data sharing
throughout the CPS.

Furthermore, data is the fuel of AI [11], and efficiently networking and properly fusing data can greatly improve
the performance of AI algorithms. Enabling data sharing across multiple service providers is a way to maximize
the utilization of data scattered among separate entities with potential conflicts of interest, which enables a more
powerful AI. Given enough data and blockchain-based smart contracts [20] for secure data sharing, it is not
surprising that AI can become one of the most powerful technologies and tools for improving cybersecurity: it can
check huge amounts of data quickly to save time, identify and mitigate threats more rapidly, and give more
accurate predictions and decision support on the security rules that a PDC should deploy. Moreover, with Machine
Learning [21] embedded inside, AI can constantly learn patterns from existing data or from artificial data
generated by GANs [22] to improve its strategies over time, strengthening its ability to identify any deviation in
data or behavior on a 24/7/365 basis. SecNet can apply these advanced AI technologies in its Operation Support
System (OSS) to adaptively identify suspicious data-related behaviors, even ones never seen before. In addition,
swarm intelligence can be used in SecNet to further improve data security, by collecting security knowledge from
the huge number of intelligent agents scattered throughout the CPS, with the help of trusted exchange mechanisms
for incentive tokens.
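The core property the DRB relies on, namely that registration and access records become tamper-evident once chained together, can be illustrated with a minimal sketch. The class and field names below are our own illustration, not the actual SecNet implementation; a real DRB would be a distributed ledger maintained by consensus across many nodes, whereas this sketch only shows the hash-linking of records:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

/** Illustrative, in-memory sketch of a Data Recording Blockchain (DRB). */
public class DrbSketch {
    /** One record: either a data registration or an access event. */
    static final class Record {
        final String prevHash;   // hash of the previous record (links the chain)
        final String payload;    // e.g. "REGISTER:owner=alice,data=profile1"
        final String hash;       // SHA-256 over prevHash + payload

        Record(String prevHash, String payload) {
            this.prevHash = prevHash;
            this.payload = payload;
            this.hash = sha256(prevHash + payload);
        }
    }

    private final List<Record> chain = new ArrayList<>();

    /** Append a record, linking it to the current chain head. */
    public void append(String payload) {
        String prev = chain.isEmpty() ? "GENESIS" : chain.get(chain.size() - 1).hash;
        chain.add(new Record(prev, payload));
    }

    /** Recompute every link; any tampering breaks a hash and is detected. */
    public boolean verify() {
        String prev = "GENESIS";
        for (Record r : chain) {
            if (!r.prevHash.equals(prev) || !r.hash.equals(sha256(r.prevHash + r.payload))) {
                return false;
            }
            prev = r.hash;
        }
        return true;
    }

    static String sha256(String s) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : d) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        DrbSketch drb = new DrbSketch();
        drb.append("REGISTER:owner=alice,data=healthProfile");
        drb.append("ACCESS:requester=hospital1,data=healthProfile,granted=true");
        System.out.println("chain valid: " + drb.verify());
    }
}
```

Editing any stored payload after the fact would make `verify()` return false, which is the property that makes data-usage records auditable; consensus across nodes is what additionally prevents an attacker from simply rebuilding the whole chain.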
CHAPTER 2
SYSTEM ANALYSIS
2.1 EXISTING SYSTEM

Data is undoubtedly the oil of the information society, and almost every big company wants
to collect as much data as possible for its future competitiveness [4], [5]. An increasing
amount of personal data, including location information, web-searching behavior, user calls
and user preferences, is being silently collected by the built-in sensors inside the products of
those big companies, which creates a huge risk of privacy leakage for data owners [6], [7].
Moreover, the usage of those data is out of the owners' control, since there is currently no
reliable way to record how the data is used and by whom, and thus there are few methods to
trace or punish violators who abuse the data [8]. In other words, the lack of an ability to
effectively manage data makes it very difficult for an individual to control the potential risks
associated with the collected data [9]. For example, once data has been collected by a third
party (e.g., a big company), the lack of access to this data hinders the individual from
understanding or managing the risks related to it. Meanwhile, the lack of immutable records
of data usage increases the risk of abuse [10].
2.2 PROPOSED SYSTEM

We aim at securing data by combining blockchain and AI, and design a Secure Networking
architecture (termed SecNet) to significantly improve the security of data sharing, and in turn
the security of the whole network and even the whole CPS. In SecNet, one of the biggest
challenges of protecting data is where and how to store it, because users have to give their
data to service providers if they want to use certain services or applications [1], [3]. This is
caused by the inherent coupling of user data and applications in current service mechanisms,
which significantly hinders the development of data protection and application innovation.
Inspired by the concept of the Personal Data Store (PDS) from openPDS [5] and the Private
Data Center (PDC) from HyperNet [1], SecNet adopts the PDC rather than the PDS, as the
PDC is better suited to deployment and to this problem: it provides a more secure and
intelligent data storage system via physical entities instead of the software-based algorithms
used in openPDS. Each PDC serves as a secured, centralized physical space in which each
SecNet user's data lives. Embedding the PDC into SecNet allows users to monitor and reason
about what their data is used for, why, and by whom, meaning users can truly control every
operation on their own data and achieve fine-grained management of access to it. Besides the
PDC, other choices can also be applied for data storage in SecNet according to particular
requirements.
III. IMPLEMENTATION
Modules Information: This project consists of two modules.
1) Patients: A patient first creates a profile with all disease details and then selects the desired
hospitals with which he or she wishes to share data. While creating the profile, the application
creates a blockchain object with the allowed permissions, so that only those hospitals can access
the data. Patient Login: A patient can log in to the application with his or her profile ID and check
the total rewards earned from sharing data.
2) Hospitals: Hospital1 and Hospital2 are used in this application as the two organizations with
which a patient can share data. At any time, a hospital can log in to the application and enter a
disease name as the search string. The AI algorithm takes the input disease string, performs a
search over all patients to find those with a similar disease, and then checks whether this hospital
has permission to access each patient's data; only the records of patients who granted access are
displayed to that hospital.
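The module logic above can be sketched as follows. The class and field names are illustrative, not the project's actual code, and a simple string match stands in for the AI-based search: a patient profile carries a permission set fixed at registration time, and a hospital's disease search only returns patients that have granted it access.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

/** Illustrative sketch of the Patient/Hospital sharing logic. */
public class SharingSketch {
    /** A patient profile with disease details and allowed hospitals. */
    static final class Patient {
        final String id;
        final String disease;
        final Set<String> allowedHospitals; // set when the profile is created

        Patient(String id, String disease, Set<String> allowedHospitals) {
            this.id = id;
            this.disease = disease;
            this.allowedHospitals = allowedHospitals;
        }
    }

    /** Return IDs of patients matching the disease whom this hospital may access. */
    public static List<String> search(List<Patient> patients, String hospital, String disease) {
        List<String> visible = new ArrayList<>();
        for (Patient p : patients) {
            // Simple equality match standing in for the AI similarity search.
            boolean matches = p.disease.equalsIgnoreCase(disease);
            if (matches && p.allowedHospitals.contains(hospital)) {
                visible.add(p.id);
            }
        }
        return visible;
    }

    public static void main(String[] args) {
        List<Patient> patients = List.of(
                new Patient("P1", "diabetes", Set.of("Hospital1")),
                new Patient("P2", "diabetes", Set.of("Hospital2")),
                new Patient("P3", "asthma", Set.of("Hospital1", "Hospital2")));
        // Hospital1 sees only the diabetes patient that granted it access.
        System.out.println(search(patients, "Hospital1", "diabetes")); // prints [P1]
    }
}
```

In the full system, the permission check would be backed by the blockchain object created with the profile, so that grants and each access are recorded immutably rather than held only in memory.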
CHAPTER 3
REQUIREMENT SPECIFICATIONS
Functional Requirements:
2. Smart Contracts: The system's capability to support the creation and execution of smart contracts is
crucial. Smart contracts automate security protocols, allowing the enforcement of predefined rules. This
feature significantly enhances the system's responsiveness to security events in real-time, contributing to a
dynamic and adaptive security architecture.
3. AI-Powered Threat Detection: The implementation of machine learning algorithms for AI-powered
threat detection is an essential functional requirement. These algorithms analyze historical data patterns,
identify anomalies, and adapt security measures to evolving cyber threats. The incorporation of AI-driven
threat detection enhances the system's ability to proactively identify and respond to emerging security
risks.
4. Predictive Analytics: The utilization of predictive analytics is a key requirement for the system. By
leveraging historical data to assess and mitigate potential security risks, the system adopts a proactive
approach to security. Predictive analytics enable the identification of patterns indicative of emerging
threats and vulnerabilities, empowering the system to pre-emptively address potential risks.
access, contributing to an enhanced level of overall system security.
6. Encryption and Privacy Preservation: The robust implementation of encryption techniques for secure
data storage and AI-driven algorithms for privacy preservation is a critical requirement. Encryption
ensures the confidentiality of stored data, while AI-driven privacy measures add an extra layer of
protection against unauthorized access. This combination safeguards sensitive information from potential
security breaches.
7. Real-time Monitoring and Incident Response: The provision of real-time monitoring capabilities
powered by AI is an imperative functional requirement. This feature enables the immediate detection of
security incidents, allowing the system to trigger automated incident response protocols. Real-time
monitoring and automated responses significantly reduce the dwell time of potential threats, minimizing
the overall impact on data security.
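The threat-detection and real-time monitoring requirements above are often prototyped with simple statistical baselining before heavier machine-learning models are introduced. The sketch below is our illustration (the three-standard-deviation threshold and the access-count feature are assumptions, not the system's specification): it flags a value as anomalous when it deviates from the historical mean by more than a chosen number of standard deviations.

```java
/** Illustrative z-score anomaly detector for access-rate monitoring. */
public class AnomalySketch {
    /** True if value lies more than `threshold` standard deviations from the mean. */
    public static boolean isAnomalous(double[] history, double value, double threshold) {
        double mean = 0;
        for (double x : history) mean += x;
        mean /= history.length;

        double var = 0;
        for (double x : history) var += (x - mean) * (x - mean);
        double std = Math.sqrt(var / history.length);
        if (std == 0) return value != mean; // constant history: any change is a deviation

        return Math.abs(value - mean) / std > threshold;
    }

    public static void main(String[] args) {
        double[] dailyAccesses = {10, 12, 11, 9, 10, 11, 10, 12}; // normal baseline
        System.out.println(isAnomalous(dailyAccesses, 11, 3.0));  // false: within baseline
        System.out.println(isAnomalous(dailyAccesses, 80, 3.0));  // true: possible abuse
    }
}
```

A detector like this would feed the automated incident-response path: a flagged access rate triggers an alert or a temporary permission revocation while the event is investigated, which is what keeps the dwell time of a threat short.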
Non-Functional Requirements:
1. Scalability: Ensuring the system's design allows for horizontal scalability is a paramount
non-functional requirement. This design characteristic is crucial to accommodate a growing volume
of data transactions and users over time. Scalability ensures the system's effectiveness and
responsiveness even as the user base and data load expand.
Software Requirements:
1. Java Development Kit (JDK): The system requires JDK for compiling and executing Java code.
2. Blockchain Libraries: Libraries such as Web3j or Hyperledger Fabric Java SDK for integrating Blockchain
functionality.
3. AI Libraries: TensorFlow or PyTorch for implementing machine learning algorithms.
4. Database Management System: MySQL or MongoDB for storing and managing data.
5. Integrated Development Environment (IDE): Eclipse, IntelliJ IDEA, or NetBeans for Java development.
6. Web Server: Apache Tomcat or similar server for hosting web-based components.
Hardware Requirements:
1. Processor: Multi-core processor with adequate processing power for executing AI algorithms and Blockchain
consensus mechanisms.
2. Memory: Sufficient RAM to support concurrent processing and memory-intensive operations.
3. Storage: Sizable storage space for storing Blockchain data, AI models, and system logs.
4. Network Interface: Reliable network connectivity for communication with Blockchain networks and data
sources.
5. Backup Systems: Backup storage and redundancy mechanisms to ensure data integrity and availability.
6. Scalability: Ability to scale hardware resources to accommodate increasing data and user loads.
These software and hardware requirements are essential for deploying and running the system effectively to secure
data using Blockchain and AI technologies in a Java environment.
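The encryption requirement listed above can be met with the JDK's built-in `javax.crypto` API, so no extra library is strictly needed for data-at-rest confidentiality. The following is a minimal AES-256-GCM sketch; key handling is deliberately simplified for illustration, and a real deployment would keep keys in a keystore or HSM rather than generating them in memory:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

/** Illustrative AES-256-GCM encryption of a stored record. */
public class CryptoSketch {
    private static final int IV_BYTES = 12;   // recommended GCM nonce size
    private static final int TAG_BITS = 128;  // authentication tag length

    public static byte[] encrypt(SecretKey key, byte[] iv, byte[] plaintext) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        return c.doFinal(plaintext);
    }

    public static byte[] decrypt(SecretKey key, byte[] iv, byte[] ciphertext) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        return c.doFinal(ciphertext); // throws AEADBadTagException if tampered with
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();

        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv); // fresh nonce per message

        byte[] record = "patient=P1;disease=diabetes".getBytes(StandardCharsets.UTF_8);
        byte[] ct = encrypt(key, iv, record);
        byte[] pt = decrypt(key, iv, ct);
        System.out.println(new String(pt, StandardCharsets.UTF_8));
    }
}
```

GCM authenticates as well as encrypts, so tampering with stored ciphertext is detected at decryption time; this complements the tamper-evidence the blockchain provides for usage records.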
Hardware Specifications:
Server Infrastructure:
The robustness of the server infrastructure is pivotal for the seamless integration of blockchain and AI components.
Optimal processing power, memory, storage, and network connectivity are crucial for ensuring the system's
responsiveness.
Servers should be equipped with multicore processors, such as Intel Xeon or AMD EPYC, to efficiently handle
concurrent transactions. A minimum of 32GB RAM is recommended to support the execution of blockchain nodes,
smart contracts, and AI algorithms simultaneously. High-speed SSDs with ample storage capacity, preferably 1TB
or more, are essential for storing blockchain data and AI models. Network connectivity should be high-speed and
redundant to facilitate smooth communication between nodes.
Edge Devices and Clients:
Edge devices and client systems, interacting with the blockchain network, should possess adequate processing
power, memory, and storage. These devices play a crucial role in ensuring a seamless user experience.
Devices should be equipped with modern CPUs, such as Intel Core i5 or equivalent, to handle blockchain
interactions efficiently. A minimum of 8GB RAM is recommended for smooth interaction with the blockchain
network. Storage space should be sufficient for client applications, with SSDs preferred for faster data access.
Software Specifications:
Operating System:
The choice of a robust and secure operating system is foundational for the proper functioning of the system.
Different components of the system may require specific operating systems for optimal performance.
For server infrastructure, Linux distributions such as Ubuntu Server or CentOS are recommended for their stability,
security features, and compatibility with Java-based applications. Edge devices and clients can run popular
operating systems like Windows, macOS, or Linux based on user preferences and application compatibility.
Blockchain Framework:
The blockchain framework serves as the core of the system, providing the decentralized and tamper-resistant
ledger. The selection of an appropriate blockchain framework is critical for the successful implementation of the
project.
For a Java-based project, options include Hyperledger Fabric and Ethereum with Java SDKs. Hyperledger Fabric's
Java SDK facilitates interactions with the blockchain network, smart contract creation, and transaction
management. Alternatively, Ethereum can be used with Java libraries like Web3j, enabling seamless integration
with Ethereum smart contracts.
Feasibility study
A feasibility study serves as a crucial initial step in determining the viability and potential success of a proposed
project. In the context of developing a system to secure data with Blockchain and AI in Java, conducting a
thorough feasibility study is imperative to assess various aspects such as technical, economic, operational, and
scheduling feasibility. Below is an extensive analysis covering each of these dimensions:
Technical Feasibility:
The technical feasibility of the project revolves around evaluating whether the proposed system can be developed
using existing technology and resources. Given the availability of mature Blockchain and AI frameworks, such as
Hyperledger Fabric, TensorFlow, and PyTorch, it is technically feasible to implement the proposed system.
Additionally, the Java programming language provides robust support for integrating these technologies into a
cohesive solution. Furthermore, the presence of skilled Java developers and ample documentation and community
support enhances the technical feasibility of the project.
Economic Feasibility:
Economic feasibility entails assessing whether the proposed system is financially viable and offers a favorable
return on investment. Developing a system to secure data with Blockchain and AI involves upfront costs associated
with acquiring hardware, software licenses, and development resources. However, the potential benefits, such as
improved data security, enhanced decision-making capabilities, and operational efficiency, outweigh the initial
investment. Moreover, the cost-effectiveness of using open-source Blockchain and AI frameworks mitigates
financial risks and ensures long-term sustainability.
Operational Feasibility:
Operational feasibility examines whether the proposed system aligns with the organization's operational processes
and can be effectively integrated into existing workflows. Implementing a data security system based on
Blockchain and AI requires collaboration between various stakeholders, including IT personnel, data scientists, and
business users. Conducting thorough training and change management initiatives can facilitate the smooth adoption
of the new system. Additionally, establishing clear governance policies and compliance measures ensures that the
system complies with regulatory requirements and industry standards.
Scheduling Feasibility:
Scheduling feasibility evaluates whether the project can be completed within the specified time frame and aligns
with organizational objectives and priorities. Developing a secure data system with Blockchain and AI involves
multiple phases, including requirements gathering, system design, development, testing, and deployment. Creating
a detailed project plan with clear milestones, timelines, and resource allocations is essential for managing project
schedules effectively. Moreover, adopting agile development methodologies enables iterative development and
facilitates timely feedback and adjustments.
Conclusion:
The feasibility study indicates that developing a system to secure data with Blockchain and AI in Java is feasible
from the technical, economic, operational, and scheduling perspectives. By leveraging existing technology, skilled
resources, and proven development practices, organizations can successfully implement the proposed system to
enhance data security, integrity, and reliability. However, it is essential to continuously monitor and evaluate
project progress to address potential challenges and ensure project success.
CHAPTER 4
The project involves handling sensitive data, and ensuring compliance with data privacy regulations is a critical
constraint. Designing systems that adhere to privacy laws, such as GDPR, may impose limitations on data storage,
processing, and sharing. Implementing robust encryption and access control mechanisms becomes essential to
address these constraints.
3.3 User Adoption and Education:
User adoption of a system that combines blockchain and AI technologies may face resistance due to unfamiliarity
or perceived complexity. Designing intuitive user interfaces and providing educational resources to users becomes
a crucial aspect of system implementation. Ensuring a user-friendly experience and clear documentation is
imperative to overcome adoption constraints.
4. Security Considerations:
4.1 Key Management:
The secure management of cryptographic keys used in blockchain transactions and AI model encryption is a
critical constraint. Designing a robust key management system to prevent unauthorized access and key compromise
is essential for maintaining the integrity and security of the entire system.
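The key-management point above can be made concrete with the JDK's `java.security` API. This is a sketch only (algorithm choice and request format are our assumptions; real deployments would generate and store keys in a keystore or HSM rather than in memory): each user holds a keypair and signs data-access requests, so the blockchain side can verify who issued a request before recording it.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

/** Illustrative signing of a data-access request with a per-user keypair. */
public class KeySketch {
    public static byte[] sign(KeyPair kp, String request) throws Exception {
        Signature s = Signature.getInstance("SHA256withECDSA");
        s.initSign(kp.getPrivate());
        s.update(request.getBytes(StandardCharsets.UTF_8));
        return s.sign();
    }

    public static boolean verify(KeyPair kp, String request, byte[] sig) throws Exception {
        Signature s = Signature.getInstance("SHA256withECDSA");
        s.initVerify(kp.getPublic());
        s.update(request.getBytes(StandardCharsets.UTF_8));
        return s.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256); // P-256 curve
        KeyPair user = kpg.generateKeyPair();

        String request = "ACCESS:requester=hospital1,data=healthProfile";
        byte[] sig = sign(user, request);

        System.out.println(verify(user, request, sig));       // true: genuine request
        System.out.println(verify(user, request + "x", sig)); // false: altered request
    }
}
```

The private key is the single point of compromise here, which is exactly why the surrounding text treats key management as a first-class design constraint rather than an implementation detail.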
4.2 Immutable Record Challenges:
While the immutability of blockchain is a strength, it can also be a constraint when errors or vulnerabilities are
discovered post-implementation. Designing mechanisms to address and rectify issues in an immutable record
without compromising the integrity of the entire system is a considerable challenge.
4.3 Regulatory Compliance:
Adhering to regulatory frameworks and compliance standards poses design and implementation constraints. The
integration of blockchain and AI must navigate legal frameworks governing data protection, financial transactions,
and emerging regulations in the field. Ensuring compliance without compromising the system's security and
functionality is a complex consideration.
5. Resource Allocation and Optimization:
5.1 Resource Efficiency:
Optimizing resource usage, both in terms of computational power and storage, is a constraint that affects the
system's efficiency. Designing algorithms and data structures that minimize resource requirements while
maximizing performance is crucial. Implementing strategies for efficient resource allocation, especially in
resource-constrained environments, becomes a significant consideration.
5.2 Energy Consumption:
The energy consumption associated with blockchain consensus mechanisms, especially in public blockchains, is a
well-known constraint. Balancing the environmental impact of energy-intensive proof-of-work mechanisms with
the security and decentralization they provide requires thoughtful design decisions. Exploring alternative consensus
mechanisms or hybrid approaches becomes essential to address this constraint.
6. Usability and Accessibility:
6.1 Accessibility Constraints:
Ensuring that the system is accessible to users with diverse needs, including those with disabilities, is a usability
constraint. Designing interfaces that adhere to accessibility standards and providing alternative means of
interaction is essential for inclusive usability.
6.2 Cross-Platform Compatibility:
Designing the system to be compatible with various platforms and devices adds complexity. Ensuring cross-
platform compatibility involves addressing differences in operating systems, screen sizes, and input methods.
Implementing responsive design and leveraging cross-platform development frameworks can alleviate this
constraint.
SYSTEM CONSTRAINTS
When considering the implementation of a system utilizing Blockchain and AI technologies, various constraints
must be taken into account to ensure successful deployment and operation. One significant constraint is the
computational overhead associated with both Blockchain and AI algorithms. Blockchain, by its nature, requires
substantial computational resources for consensus mechanisms, transaction verification, and data storage.
Similarly, AI algorithms, especially deep learning models, demand significant computational power for training
and inference tasks. These computational requirements may pose challenges for resource-constrained environments
or systems with limited processing capabilities.
Another constraint is the scalability of the system. While Blockchain offers decentralized and transparent data
storage, its scalability remains a concern, particularly in public Blockchain networks where transaction throughput
and latency can become bottlenecks as the network grows. Similarly, the scalability of AI algorithms, particularly
in real-time applications, may be limited by the size and complexity of the models and the availability of
computational resources.
Additionally, interoperability and integration with existing systems and frameworks pose constraints on system
development. Ensuring seamless interaction between Blockchain and AI components, as well as compatibility with
legacy systems and data formats, requires careful planning and design considerations. Furthermore, regulatory and
compliance constraints, such as data privacy regulations and industry standards, may impact the implementation
and deployment of the system, necessitating adherence to legal requirements and best practices.
Overall, addressing these constraints requires a comprehensive understanding of the technological, operational, and
regulatory aspects of the system, as well as careful planning and optimization to ensure optimal performance and
compliance with relevant standards and regulations.
Constraints in Analysis
The analysis phase of a complex project like "Securing Data with Blockchain and AI" is a critical stage where
various constraints shape the project's direction, influencing decision-making and setting boundaries for design and
implementation. These constraints emerge from diverse factors, including technological limitations, regulatory
considerations, and practical challenges inherent in the integration of blockchain and AI technologies. This section
explores the key constraints in the analysis phase that guide the subsequent development of the system.
Adhering to data privacy regulations, notably the General Data Protection Regulation (GDPR), poses a significant
constraint. The analysis must consider the implications of collecting, processing, and storing sensitive user data.
Ensuring compliance with GDPR's principles of data minimization, purpose limitation, and transparency becomes
a fundamental aspect of the project.
The project's global nature may encounter challenges related to cross-border data transfer regulations. Analysis
must address legal constraints associated with transferring data across jurisdictions and ensure that the system's
design aligns with international data protection laws.
Integrating robust encryption mechanisms into the system is essential for data security. However, the analysis
phase must carefully consider the computational overhead and key management challenges associated with
encryption and decryption processes. Striking a balance between data security and system efficiency becomes a
critical consideration.
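This balance between security and efficiency can be illustrated with the JDK's built-in AES-GCM support. The sketch below is a minimal, illustrative example only; the class and method names are assumptions, not part of the proposed system:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

public class AesGcmDemo {
    // Generates a fresh AES-256 key (requires a JDK with unrestricted crypto policy, the default today)
    public static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        return kg.generateKey();
    }

    // Encrypts plaintext with AES-GCM; the random 96-bit IV is prepended to the ciphertext
    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = cipher.doFinal(plaintext);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    // Decrypts a blob produced by encrypt(); GCM also verifies integrity via the 128-bit tag
    public static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, blob, 0, 12));
        return cipher.doFinal(blob, 12, blob.length - 12);
    }
}
```

Note that authenticated modes such as GCM add a per-message tag and IV, which is exactly the kind of computational and storage overhead the analysis phase must weigh against the security gain.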
2. Technological Constraints:
Analyzing the compatibility of diverse technologies, such as blockchain frameworks and AI libraries, presents a
considerable constraint. Ensuring seamless interoperability between these technologies is crucial for the project's
success. The analysis must identify potential integration challenges and formulate strategies to address them.
The resource-intensive nature of both blockchain and AI technologies imposes constraints on the choice of
hardware infrastructure. The analysis phase must carefully evaluate the computational and storage requirements of
blockchain consensus mechanisms and AI model training. Optimizing resource usage without compromising
performance becomes a significant challenge.
Choosing the right technology stack for the project is a constraint that requires careful analysis. The selection of
blockchain frameworks, AI libraries, and database management systems must align with the project's goals and
constraints. The analysis must consider factors such as developer expertise, community support, and the long-term
viability of chosen technologies.
Conducting a thorough vulnerability analysis is a constraint that demands meticulous attention. Identifying
potential security vulnerabilities in both blockchain and AI components is crucial for preemptive risk mitigation.
The analysis must encompass aspects such as smart contract vulnerabilities, AI model robustness, and protection
against adversarial attacks.
The immutability of blockchain, while a strength, introduces challenges in addressing errors or vulnerabilities
discovered post-implementation. The analysis must explore strategies to rectify issues in an immutable record
without compromising the integrity of the entire system. Implementing mechanisms for secure updates and patches
becomes a vital consideration.
The secure management of cryptographic keys is a constraint that affects both blockchain transactions and AI
model encryption. The analysis phase must evaluate key management practices to prevent unauthorized access and
key compromise. Implementing secure key storage and distribution mechanisms becomes essential for maintaining
the integrity of the entire system.
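Key handling of this kind can be sketched with the JDK's standard `java.security` API. The example below shows only the sign/verify half of key management (generation and use of a transaction-signing key pair); secure storage and distribution are separate concerns, and the class name is an assumption for illustration:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

public class KeySigningDemo {
    // Generates a P-256 elliptic-curve key pair, e.g. for signing blockchain transactions
    public static KeyPair newKeyPair() throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256);
        return kpg.generateKeyPair();
    }

    // Signs data with the private key using SHA256withECDSA
    public static byte[] sign(PrivateKey priv, byte[] data) throws Exception {
        Signature s = Signature.getInstance("SHA256withECDSA");
        s.initSign(priv);
        s.update(data);
        return s.sign();
    }

    // Verifies a signature with the corresponding public key
    public static boolean verify(PublicKey pub, byte[] data, byte[] sig) throws Exception {
        Signature s = Signature.getInstance("SHA256withECDSA");
        s.initVerify(pub);
        s.update(data);
        return s.verify(sig);
    }
}
```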
The project's success hinges on user adoption, and the analysis phase must address constraints related to user
interface design. Ensuring a user-friendly and intuitive interface while accommodating the complexity of
blockchain and AI functionalities presents a unique challenge. The analysis must consider strategies to streamline
user interactions and provide effective user education.
Overcoming user resistance to the adoption of advanced technologies like blockchain and AI is a constraint that
requires a comprehensive analysis. Designing effective user education and training programs becomes crucial for
ensuring that users understand the system's benefits and functionalities. The analysis must explore strategies to
facilitate a smooth onboarding process and ongoing user support.
Scalability constraints in blockchain networks pose challenges to the system's performance. The analysis must
evaluate scalability solutions, such as sharding or sidechains, to ensure efficient transaction processing. Balancing
decentralization with scalability becomes a crucial aspect of the project's design considerations.
5.2 AI Model Training:
The analysis of AI integration must address constraints related to model training. The computational intensity of
training large AI models demands significant resources. Strategies for optimizing model architectures, exploring
transfer learning, or leveraging pre-trained models become essential considerations to overcome scalability
challenges in AI.
Ensuring the legality of smart contracts, especially in contractual agreements, is a constraint that requires careful
legal analysis. The analysis phase must identify potential legal challenges associated with smart contract execution
and propose mechanisms to align smart contract functionality with existing legal frameworks.
The ethical use of AI, including considerations of bias and fairness, imposes constraints on the analysis phase.
Evaluating potential biases in AI models and designing mechanisms for fair and ethical AI usage are crucial. The
analysis must explore ethical AI frameworks and guidelines to inform the project's design choices.
Continuous monitoring of the blockchain network and AI components is a constraint that demands analysis.
Establishing mechanisms for real-time monitoring and alerting is crucial for identifying security incidents
promptly. The analysis must consider tools and protocols for continuous monitoring to ensure the system's
resilience against emerging threats.
The immutability of blockchain introduces constraints regarding system upgrades and patches. The analysis must
explore strategies for implementing updates without compromising the integrity of existing records. Designing
mechanisms for secure upgrades and patches becomes crucial for maintaining the system's relevance and security.
Conclusion:
The analysis phase of "Securing Data with Blockchain and AI" in Java involves navigating through a myriad of
constraints that shape the project's trajectory. From legal and regulatory considerations to technological challenges,
each constraint poses unique demands on the project's design and implementation. Addressing these constraints
requires a multidisciplinary approach, involving legal experts, technologists, and domain specialists. The
subsequent phases of the project will build upon the insights gained during the analysis, incorporating strategic
solutions to overcome these constraints and deliver a robust, secure, and effective system.
Constraints in Design
4.1.2 Constraints in Design for "Securing Data with Blockchain and AI" in Java
In the design phase of the "Securing Data with Blockchain and AI" project, specific constraints
shape the architectural decisions and guide the creation of a system that seamlessly integrates
blockchain and AI technologies. These constraints encompass various aspects, from technological
limitations to regulatory requirements, influencing the design choices and the overall system
architecture.
1. Technological Constraints:
The selection of a specific blockchain framework imposes constraints on the overall system
design. Ensuring compatibility with the chosen blockchain framework, whether it's Hyperledger
Fabric or Ethereum, becomes pivotal. The design must align with the features and constraints
inherent in the selected blockchain technology, influencing how smart contracts are created,
executed, and managed.
The design phase must contend with constraints related to data storage mechanisms. Choosing
between traditional relational databases and NoSQL databases like MongoDB introduces trade-
offs in terms of scalability, data retrieval speed, and schema flexibility. Designing a storage
solution that aligns with the system's needs and constraints is a crucial aspect of the overall
architecture.
Smart contracts are integral to the system's security, but they come with their own set of
constraints. Ensuring the security of smart contracts against vulnerabilities such as reentrancy or
integer overflow requires careful design considerations. The architecture must include
mechanisms for thorough code audits, testing, and secure coding practices to mitigate these
constraints.
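The integer-overflow class of vulnerability mentioned above has a direct Java analogue: plain `long` arithmetic wraps silently. A minimal defensive sketch (the class name is illustrative, not part of the system) uses `Math.addExact`, which throws instead of wrapping:

```java
public class SafeMath {
    // Adds two token amounts; throws ArithmeticException instead of wrapping on overflow
    public static long safeAdd(long a, long b) {
        return Math.addExact(a, b);
    }

    // Convenience check: would adding a and b overflow a long?
    public static boolean overflows(long a, long b) {
        try {
            Math.addExact(a, b);
            return false;
        } catch (ArithmeticException e) {
            return true;
        }
    }
}
```

Smart-contract languages need the equivalent discipline (e.g. checked arithmetic), which is one of the properties a code audit should confirm.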
The design must address constraints related to key management for both blockchain transactions
and AI model encryption. Ensuring secure key generation, storage, and distribution mechanisms
becomes crucial for maintaining the confidentiality and integrity of data. The architecture must
incorporate robust key management practices to mitigate the constraints associated with
cryptographic keys.
Designing APIs for interoperability between different system components is a constraint that
arises from the need for seamless integration. The architecture must include well-defined APIs
that facilitate communication between blockchain and AI modules. Ensuring standardization and
compatibility between components is essential to overcome interoperability constraints.
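One way to express such an internal API boundary in Java is a small interface that the AI module implements and the blockchain module calls. The names below are hypothetical, chosen only to illustrate the design idea:

```java
// Hypothetical API boundary between the AI module and the Blockchain module
public class ModuleApi {
    // The blockchain side depends only on this interface, not on any AI implementation
    public interface AnomalyDetector {
        // Returns an anomaly score in [0, 1]; higher means more suspicious
        double score(double transactionAmount);
    }

    // A trivial stand-in implementation: flags amounts at or above a fixed limit
    public static class ThresholdDetector implements AnomalyDetector {
        private final double limit;

        public ThresholdDetector(double limit) {
            this.limit = limit;
        }

        @Override
        public double score(double transactionAmount) {
            return transactionAmount >= limit ? 1.0 : transactionAmount / limit;
        }
    }
}
```

Because the blockchain component programs against `AnomalyDetector` rather than a concrete class, the AI model behind it can be swapped or upgraded without changing the calling code, which is precisely the interoperability goal described above.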
The design must contend with constraints related to cross-platform compatibility. Ensuring that
the system works seamlessly across different operating systems, devices, and browsers requires
thoughtful architecture. Designing responsive interfaces and utilizing cross-platform development
frameworks becomes essential to address these constraints.
Ensuring real-time AI model inference introduces constraints related to model size, complexity,
and computational requirements. The design must optimize AI model architectures for efficient
inference within the constraints of available hardware resources. Implementing techniques like
model quantization or edge computing becomes essential for addressing performance constraints.
The design must incorporate mechanisms for ethical AI usage, addressing constraints related to
bias, fairness, and transparency. Implementing fairness-aware algorithms and designing AI
models that adhere to ethical guidelines becomes essential. The architecture must include checks
and balances to mitigate ethical constraints associated with AI deployment.
Constraints in Implementation
The application at the client side controls and communicates with the following main
components:
⮚ Client Tier: an embedded browser in charge of navigation and of accessing the web service;
⮚ Server Tier: the server side contains the main parts of the functionality of the
proposed architecture. The components at this tier are the following:
Web Server, Security Module, Server-Side Capturing Engine, Preprocessing
Engine, Database System, Verification Engine, Output Module.
1. The software may be safety-critical. If so, there are issues associated with its
integrity level
2. The software may not be safety-critical although it forms part of a safety-critical
system. For example, software may simply log transactions.
3. If a system must be of a high integrity level and if the software is shown to be of
that integrity level, then the hardware must be at least of the same integrity level.
4. There is little point in producing 'perfect' code in some language if hardware and
system software (in widest sense) are not reliable.
5. If a computer system is to run software of a high integrity level then that system
should not at the same time accommodate software of a lower integrity level.
6. Systems with different requirements for safety levels must be separated.
7. Otherwise, the highest level of integrity required must be applied to all systems in
the same environment.
CHAPTER 5
5.2 Sequence Diagram:
A sequence diagram is a kind of interaction diagram that shows how processes operate
with one another and in what order. It is a construct of a message sequence chart. Sequence
diagrams are sometimes called event diagrams, event scenarios, or timing diagrams.
5.3 Use Case Diagram:
A Use case Diagram is used to present a graphical overview of the functionality provided
by a system in terms of actors, their goals and any dependencies between those use cases.
Use case diagram consists of two parts:
Use case: A use case describes a sequence of actions that provides something of measurable
value to an actor and is drawn as a horizontal ellipse.
Actor: An actor is a person, organization, or external system that plays a role in one or more
interactions with the system.
5.4 Activity Diagram:
CHAPTER 6
6.1 MODULES
⮚ Dataset collection
⮚ Prediction
6.2 MODULE EXPLANATION:
The dataset is collected from kaggle.com. It contains fields such as gender,
marital status, self-employment status, and monthly income. The dataset also records
whether a previous loan was approved or not, based on the customer's information. This data
is preprocessed and passed on to the next step.
In this stage, the collected data is given to the machine learning algorithms for the training
process. We use multiple algorithms to obtain a high prediction accuracy. The preprocessed
dataset is run through different machine learning algorithms; each algorithm yields an
accuracy level, and the results are compared.
✔ Logistic Regression
✔ K-Nearest Neighbors
Prediction:
The preprocessed data is used for training, and input given by the user is fed to the
trained model. The Logistic Regression model is then used to predict whether
the loan applied for by a particular person should be approved or not.
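The prediction step of a trained logistic regression model reduces to a weighted sum passed through the sigmoid function. The sketch below is a minimal illustration of that inference step only (the class name and the 0.5 decision threshold are assumptions; training the weights is not shown):

```java
public class LogisticModel {
    private final double[] weights;
    private final double bias;

    public LogisticModel(double[] weights, double bias) {
        this.weights = weights;
        this.bias = bias;
    }

    // Sigmoid maps the linear score to a probability in (0, 1)
    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    // Returns P(loan approved | features) under the fitted weights
    public double predictProbability(double[] features) {
        double z = bias;
        for (int i = 0; i < weights.length; i++) {
            z += weights[i] * features[i];
        }
        return sigmoid(z);
    }

    // Approve when the predicted probability reaches the 0.5 threshold
    public boolean approve(double[] features) {
        return predictProbability(features) >= 0.5;
    }
}
```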
CHAPTER 7
7.1 CODING
Once the design aspect of the system is finalized, the system enters the coding and
testing phase. The coding phase brings the actual system into action by converting the design
of the system into code in a given programming language. Therefore, a good coding style
has to be adopted so that, whenever changes are required, they can easily be incorporated into the system.
7.2 CODING STANDARDS
Coding standards are guidelines to programming that focuses on the physical structure and
appearance of the program. They make the code
easier to read, understand and maintain. This phase of the system actually implements the
blueprint developed during the design phase. The coding specification should be in such a
way that any programmer must be able to understand the code and can bring about changes
whenever felt necessary. Some of the standards needed to achieve the above-mentioned
objectives are as follows:
⮚ Programs should be simple, clear, and easy to understand.
⮚ Naming conventions
⮚ Value conventions
Naming conventions for classes, data members, member functions, procedures, etc., should
be self-descriptive. One should be able to infer the meaning and scope of a variable from its name.
The conventions are adopted for easy understanding of the intended message by the user. So
it is customary to follow the conventions. These conventions are as follows:
Class names
Class names are problem-domain equivalents; they begin with a capital letter and use
mixed case.
Member Function and Data Member names
Member function and data member names begin with a lowercase letter, with the first
letter of each subsequent word in uppercase and the rest of the letters in lowercase.
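The conventions above can be shown in one small class. The class and field names here are illustrative examples, not actual system components:

```java
// Class name: a problem-domain noun in mixed case, starting with a capital letter
public class LoanApplication {
    // Data members: camelCase, starting with a lowercase letter
    private double monthlyIncome;
    private boolean selfEmployed;

    // Member functions follow the same camelCase convention
    public double getMonthlyIncome() {
        return monthlyIncome;
    }

    public void setMonthlyIncome(double monthlyIncome) {
        this.monthlyIncome = monthlyIncome;
    }

    public boolean isSelfEmployed() {
        return selfEmployed;
    }

    public void setSelfEmployed(boolean selfEmployed) {
        this.selfEmployed = selfEmployed;
    }
}
```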
7.2.2 VALUE CONVENTIONS
Script writing is an art in which indentation is of the utmost importance. Conditional and looping
statements are to be properly aligned to facilitate easy understanding. Comments are included
to minimize the number of surprises that could occur when going through the code.
Testing is an integral part of the entire development and maintenance process. The
goal of testing during this phase is to verify that the specification has been accurately and
completely incorporated into the design, as well as to ensure the correctness of the design
itself. For example, if a logic fault in the design is detected before coding commences, the
cost of fixing the fault is considerably lower than it would be later. Detection of design faults
can be achieved by means of inspections as well as walkthroughs.
Testing is one of the important steps in the software development phase. Testing checks
for errors; as a whole, testing the project involves the following test cases:
⮚ Static analysis is used to investigate the structural properties of the Source code.
⮚ Dynamic testing is used to investigate the behavior of the source code by executing
the program on the test data.
Functional test cases involved exercising the code with nominal input values
for which the expected results are known, as well as boundary values and special values, such
as logically related inputs, files of identical elements, and empty files.
Three types of tests in Functional test:
⮚ Performance Test
⮚ Stress Test
⮚ Structure Test
7.4.3 PERFORMANCE TEST
A performance test determines the amount of execution time spent in various parts of the unit,
program throughput, response time, and device utilization by the program unit.
7.4.4 STRESS TEST
Stress tests are designed to intentionally break the unit. A great deal can
be learned about the strengths and limitations of a program by examining the manner in which
a program unit breaks.
7.4.5 STRUCTURED TEST
Structure tests are concerned with exercising the internal logic of a program and
traversing particular execution paths. A white-box test strategy was employed
to ensure that the test cases guarantee that all independent paths within a module have
been exercised at least once.
⮚ Handling end of file condition, I/O errors, buffer problems and textual errors
in output information
7.4.6 INTEGRATION TESTING
7.5.1 TESTING
review of specification, design, and coding. Testing is the process of executing the program
with the intent of finding errors. A good test case design is one that has a high probability of
finding an as-yet-undiscovered error. A successful test is one that uncovers such an
error. Any engineering product can be tested in one of two ways:
7.5.1.1 WHITE BOX TESTING
This testing is also called glass-box testing. In this testing, by knowing
the specific functions that a product has been designed to perform, tests can be conducted that
demonstrate that each function is fully operational while at the same time searching for errors
in each function. It is a test-case design method that uses the control structure of the
procedural design to derive test cases. Basis path testing is a white-box testing technique.
Basis path testing:
⮚ Cyclomatic complexity
⮚ Equivalence partitioning
⮚ Boundary value analysis
⮚ Comparison testing
A software testing strategy provides a road map for the software developer. Testing
is a set of activities that can be planned in advance and conducted systematically. For this
reason, a template for software testing (a set of steps into which we can place specific
test-case design methods) should be defined. A testing strategy should have the following characteristics:
⮚ Testing begins at the module level and works “outward” toward the integration
of the entire computer based system.
⮚ The developer of the software and an independent test group conduct testing.
7.5.2.2 PROGRAM TESTING:
The logical and syntax errors have been pointed out by program testing. A syntax error
is an error in a program statement that violates one or more rules of the language in which it
is written. An improperly defined field dimension or an omitted keyword is a common syntax
error. These errors are shown through error messages generated by the computer. A logic
error, on the other hand, deals with incorrect data fields, out-of-range items, and invalid
combinations. Since the compiler will not detect logical errors, the programmer must examine
the output. Condition testing exercises the logical conditions contained in a module. The
possible types of elements in a condition include a Boolean operator, a Boolean variable, a
pair of Boolean parentheses, a relational operator, or an arithmetic expression. The condition
testing method focuses on testing each condition in the program; its purpose is to detect not
only errors in the conditions of a program but also other errors in the program.
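Condition testing can be made concrete with a small unit containing a compound condition. The rule below is a hypothetical example; the point is that the test cases exercise each sub-condition's true and false outcomes:

```java
public class EligibilityCheck {
    // Compound condition: both sub-conditions must hold (age >= 21 AND income > 1000)
    public static boolean eligible(int age, double monthlyIncome) {
        return age >= 21 && monthlyIncome > 1000.0;
    }
}
```

A condition-testing suite for this unit would include at least one case where both sub-conditions are true, one where only the age check fails, and one where only the income check fails.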
7.5.2.3 SECURITY TESTING:
FUTURE ENHANCEMENT
Future enhancements for the system aimed at securing data with Blockchain and AI in Java can significantly
contribute to its effectiveness, scalability, and adaptability. These enhancements encompass various aspects,
including technological advancements, integration with emerging technologies, and expansion of functionalities.
Below are extensive considerations for potential future enhancements:
against evolving cyber threats and vulnerabilities. By continuously monitoring network activities, detecting
anomalies, and correlating security events, the system can proactively identify and mitigate potential security
breaches, unauthorized access attempts, and data manipulation incidents, thereby safeguarding critical assets and
ensuring business continuity.
In conclusion, implementing these future enhancements can empower the system aimed at securing data with
Blockchain and AI in Java to adapt to evolving technological trends, address emerging challenges, and unlock new
opportunities for innovation and growth. By embracing continuous improvement and staying abreast of
technological advancements, organizations can leverage the full potential of the system to achieve their strategic
objectives and drive digital transformation initiatives effectively.
In conclusion, the development of a robust system for securing data with Blockchain and AI in Java represents a
significant step towards addressing the evolving challenges of data security, privacy, and integrity in modern
digital environments. Through the integration of cutting-edge technologies, innovative approaches, and robust
security measures, the system aims to provide organizations with the tools and capabilities needed to safeguard
their critical assets, mitigate cyber threats, and maintain regulatory compliance.
CONCLUSION
The system's utilization of Blockchain technology offers several distinct advantages, including immutability,
transparency, and decentralization, which enhance data integrity, auditability, and trustworthiness. By leveraging
Blockchain's distributed ledger technology, the system ensures that data transactions are securely recorded, tamper-
resistant, and verifiable, thereby reducing the risk of data manipulation, unauthorized access, and fraudulent
activities.
Additionally, the incorporation of Artificial Intelligence (AI) enables the system to enhance its threat detection
capabilities, automate security operations, and adapt to dynamic cyber threats in real-time. By leveraging AI-driven
analytics, machine learning algorithms, and predictive modeling techniques, the system can identify anomalous
behavior, detect emerging threats, and proactively respond to security incidents, thereby strengthening the
organization's cyber resilience and incident response capabilities.
Furthermore, the system's focus on interoperability, scalability, and performance optimization ensures seamless
integration with existing IT infrastructure, accommodating evolving business requirements, and scaling to meet the
growing demands of data processing and transaction validation. By adopting open standards, modular
architectures, and cloud-native design principles, the system provides organizations with the flexibility, agility, and
scalability needed to adapt to changing business environments and technological landscapes.
Moreover, the continuous monitoring, threat intelligence, and compliance management capabilities embedded
within the system enable organizations to maintain a proactive security posture, mitigate risks, and demonstrate
regulatory compliance effectively. By integrating security information and event management (SIEM) capabilities,
threat intelligence feeds, and compliance frameworks, the system empowers organizations to identify, assess, and
respond to security threats and vulnerabilities promptly, ensuring the confidentiality, integrity, and availability of
sensitive data assets.
In conclusion, the development and implementation of a comprehensive system for securing data with Blockchain
and AI in Java represent a critical imperative for organizations seeking to safeguard their digital assets, mitigate
cyber risks, and maintain trust and confidence among stakeholders. By embracing innovative technologies,
adopting best practices, and fostering a culture of security awareness and vigilance, organizations can strengthen
their resilience against evolving cyber threats and position themselves for long-term success in today's digital-first
world.
APPENDICES
A. DATA DICTIONARY
A data dictionary is a comprehensive documentation that provides detailed descriptions of the data elements,
attributes, and relationships within a database or information system. It serves as a valuable reference guide for
understanding the structure, organization, and meaning of the data stored within the system. In the context of the
proposed system for securing data with Blockchain and AI in Java, the data dictionary plays a crucial role in
facilitating data management, analysis, and interpretation. Below is a detailed overview of the data dictionary for
the system:
1. User Profile:
- Description: This data element includes information about the users of the system, including their usernames,
passwords, roles, and access privileges.
- Attributes:
- Username: Unique identifier for each user.
- Password: Securely encrypted password for user authentication.
- Role: Specifies the role or permission level of the user (e.g., administrator, standard user).
- Access Privileges: Defines the specific actions or functions that the user is authorized to perform within the
system.
2. Blockchain Transactions:
- Description: This data element captures all transactions recorded on the Blockchain, including details such as
transaction IDs, timestamps, sender/receiver addresses, and transaction amounts.
- Attributes:
- Transaction ID: Unique identifier for each transaction.
- Timestamp: Date and time when the transaction was initiated.
- Sender Address: Blockchain address of the sender.
- Receiver Address: Blockchain address of the recipient.
- Transaction Amount: Quantity or value of assets transferred in the transaction.
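The "Blockchain Transactions" entry above maps naturally onto an immutable Java data class. The sketch below mirrors the attributes listed in the dictionary; the class and method names are illustrative assumptions:

```java
import java.time.Instant;

// Illustrative, immutable record of one blockchain transaction
public class BlockchainTransaction {
    private final String transactionId;   // Unique identifier for each transaction
    private final Instant timestamp;      // Date and time the transaction was initiated
    private final String senderAddress;   // Blockchain address of the sender
    private final String receiverAddress; // Blockchain address of the recipient
    private final double amount;          // Quantity or value of assets transferred

    public BlockchainTransaction(String transactionId, Instant timestamp,
                                 String senderAddress, String receiverAddress,
                                 double amount) {
        this.transactionId = transactionId;
        this.timestamp = timestamp;
        this.senderAddress = senderAddress;
        this.receiverAddress = receiverAddress;
        this.amount = amount;
    }

    public String getTransactionId() { return transactionId; }
    public Instant getTimestamp() { return timestamp; }
    public String getSenderAddress() { return senderAddress; }
    public String getReceiverAddress() { return receiverAddress; }
    public double getAmount() { return amount; }
}
```

Making every field `final` reflects the ledger's append-only semantics: once recorded, a transaction object cannot be modified.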
3. Smart Contracts:
- Description: This data element represents the smart contracts deployed on the Blockchain, which contain
programmable logic for executing predefined actions or conditions autonomously.
- Attributes:
- Contract ID: Unique identifier for each smart contract.
- Contract Name: Descriptive name or label for the smart contract.
- Contract Address: Blockchain address where the smart contract is deployed.
- Contract Source Code: Code snippet or bytecode defining the logic and rules of the smart contract.
4. Security Events:
- Description: This data element logs security-related events and incidents detected by the system, such as
unauthorized access attempts, malware infections, or suspicious activities.
- Attributes:
- Event ID: Unique identifier for each security event.
- Event Type: Classification of the security event (e.g., intrusion attempt, data breach).
- Event Timestamp: Date and time when the event occurred.
- Event Description: Detailed description of the event, including relevant context and implications.
6. Audit Logs:
- Description: This data element contains a chronological record of all system activities, including user
interactions, configuration changes, and administrative actions.
- Attributes:
- Log ID: Unique identifier for each log entry.
- Log Timestamp: Date and time when the log entry was generated.
- Log Type: Classification of the log entry (e.g., user login, system error).
- Log Details: Detailed information about the event or activity recorded in the log.
7. Compliance Regulations:
- Description: This data element encompasses regulatory requirements, industry standards, and internal policies
governing data security, privacy, and compliance.
- Attributes:
- Regulation ID: Unique identifier for each compliance regulation.
- Regulation Name: Name or title of the regulation (e.g., GDPR, HIPAA, PCI DSS).
- Regulatory Requirements: Specific rules, guidelines, or obligations imposed by the regulation.
- Compliance Status: Indicates whether the organization is compliant with the regulation (e.g., compliant, non-
compliant, in progress).
8. System Configuration:
- Description: This data element captures the configuration settings and parameters of the system, including
network settings, encryption algorithms, and access controls.
- Attributes:
- Configuration ID: Unique identifier for each configuration profile.
- Configuration Name: Descriptive label for the configuration profile.
- Configuration Parameters: Settings and parameters defining the behavior and functionality of the system.
- Configuration Value: Specific values assigned to each configuration parameter.
This comprehensive data dictionary provides a structured overview of the data elements, attributes, and
relationships within the proposed system for securing data with Blockchain and AI in Java. By documenting these
key components, stakeholders can gain a deeper understanding of the system's data assets, facilitate data
management and governance, and ensure consistency and accuracy in data processing and analysis.
B. OPERATIONAL MANUAL
The operational manual for the system titled "Securing Data with Blockchain and AI in Java" serves as a
comprehensive guide for users and administrators on how to effectively operate and manage the system. This
manual outlines the necessary steps and procedures for installing, configuring, and utilizing the system to ensure
optimal performance and security. Below is a detailed overview of the operational manual:
1. System Installation:
- To install the system, users must first download the installation package from the designated repository or
source.
- Once downloaded, users should extract the contents of the package to a designated directory on their local
machine or server.
- Users can then execute the installation script included in the package to initiate the installation process.
- During installation, users will be prompted to specify configuration settings such as database credentials,
network parameters, and security options.
- Upon successful installation, users can proceed to configure the system according to their specific requirements.
2. System Configuration:
- After installation, users must configure the system settings to customize its behavior and functionality.
- Configuration options may include setting up user accounts and access privileges, defining data retention
policies, and configuring integration with external systems.
- Users can access the system's configuration interface through a web-based administration console or command-
line interface.
- Configuration changes should be carefully reviewed and tested to ensure compatibility and compliance with
organizational policies and requirements.
3. User Management:
- The system includes functionality for managing user accounts, roles, and permissions.
- Administrators have the authority to create, modify, and delete user accounts, as well as assign roles and access
privileges.
- Users should adhere to best practices for password management and account security, including the use of
strong, unique passwords and regular password updates.
- Access to sensitive system features and data should be restricted to authorized personnel only, with appropriate
authentication mechanisms in place to verify user identities.
4. Data Protection:
- The system employs advanced encryption techniques and cryptographic algorithms to safeguard sensitive data
from unauthorized access and tampering.
- Users should adhere to data protection policies and guidelines when handling and processing confidential
information within the system.
- Data backups should be performed regularly to prevent data loss in the event of system failures or security
incidents.
- Administrators are responsible for monitoring data access and usage patterns to detect and mitigate potential
security threats or breaches.
This operational manual provides users and administrators with a comprehensive overview of the procedures and
best practices for operating and managing the system effectively. By following these guidelines, users can ensure
the secure and reliable operation of the system while minimizing the risk of security incidents and data breaches.
SAMPLE CODE
import java.security.MessageDigest;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Represents a single block in the chain
class Block {
    private final String data;
    private final String previousHash;
    private final String hash;

    // Constructor
    public Block(String data, String previousHash) {
        this.data = data;
        this.previousHash = previousHash;
        this.hash = calculateHash();
    }

    // Hashes the block's contents with SHA-256
    public String calculateHash() {
        return StringUtil.applySha256(previousHash + data);
    }

    public String getHash() { return hash; }
    public String getPreviousHash() { return previousHash; }
    public String getData() { return data; }
}

// Maintains the ordered list of blocks
class Blockchain {
    private final List<Block> blockchain;

    // Constructor
    public Blockchain() {
        this.blockchain = new ArrayList<>();
        // Genesis block
        blockchain.add(new Block("Genesis Block", "0"));
    }

    // Appends a new block linked to the current last block
    public void addBlock(String data) {
        Block previous = blockchain.get(blockchain.size() - 1);
        blockchain.add(new Block(data, previous.getHash()));
    }

    // Verifies that every block's stored hash is correct and that
    // each block is linked to its predecessor's hash
    public boolean isChainValid() {
        for (int i = 1; i < blockchain.size(); i++) {
            Block currentBlock = blockchain.get(i);
            Block previousBlock = blockchain.get(i - 1);
            if (!currentBlock.calculateHash().equals(currentBlock.getHash())) {
                return false;
            }
            if (!currentBlock.getPreviousHash().equals(previousBlock.getHash())) {
                return false;
            }
        }
        return true;
    }
}

// Utility class for cryptographic operations
class StringUtil {
    // Applies SHA-256 hashing algorithm to a string
    public static String applySha256(String input) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] hash = digest.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder hexString = new StringBuilder(); // Hash as a hexadecimal string
            for (byte b : hash) {
                String hex = Integer.toHexString(0xff & b);
                if (hex.length() == 1) {
                    hexString.append('0');
                }
                hexString.append(hex);
            }
            return hexString.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
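As a quick check of the validation logic, the self-contained demo below uses a compact variant of the classes above to show tamper detection: altering a block's data makes its stored hash stale, so chain validation fails.

```java
import java.security.MessageDigest;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Self-contained tamper-detection demo with a compact Block variant.
public class TamperDemo {
    static String sha256(String input) throws Exception {
        MessageDigest d = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : d.digest(input.getBytes(StandardCharsets.UTF_8))) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    static class Block {
        String data, previousHash, hash;
        Block(String data, String previousHash) throws Exception {
            this.data = data;
            this.previousHash = previousHash;
            this.hash = sha256(previousHash + data);  // hash fixed at creation time
        }
    }

    // Same two checks as isChainValid(): recomputed hash matches the
    // stored hash, and each block links to its predecessor's hash
    static boolean isChainValid(List<Block> chain) throws Exception {
        for (int i = 1; i < chain.size(); i++) {
            Block cur = chain.get(i), prev = chain.get(i - 1);
            if (!sha256(cur.previousHash + cur.data).equals(cur.hash)) return false;
            if (!cur.previousHash.equals(prev.hash)) return false;
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        List<Block> chain = new ArrayList<>();
        chain.add(new Block("Genesis Block", "0"));
        chain.add(new Block("record A", chain.get(0).hash));
        System.out.println("before tampering: " + isChainValid(chain));
        chain.get(1).data = "record A (modified)";   // tamper with stored data
        System.out.println("after tampering: " + isChainValid(chain));
    }
}
```

The first validation succeeds and the second fails, which is precisely the integrity guarantee the chained hashes provide.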