
Chapter – 1

Introduction
Building algorithms and models to predict prices and future events has received a
significant amount of attention in the past decade. With user data being collected through
many different channels, raw data has never been as abundant as it is now. Whether the goal
is to identify the next big trend or to predict the next behavior of a customer, predictive
models hold great potential to change opportunity into revenue. The price prediction category
is no different. For years, analysts and researchers have been studying and trying to improve
algorithms to help predict future prices. Unfortunately, the predictions cannot be based purely
on previous prices. Although price history should still be considered the largest contributor to
the model, other factors such as economic growth and the social standing and popularity of
the commodity also play a significant role in price prediction.

1.1 Objectives
The objective of this project is to predict Bitcoin prices based on historical data and to
provide the user with information about the direction in which the price will move.

1.2 Problem Specification


For years, analysts and researchers have been studying and trying to improve algorithms to
help predict future prices. Unfortunately, the predictions cannot be based purely on previous
prices. Although price history should still be considered the largest contributor to the model,
other factors such as economic growth and the social standing and popularity of the
commodity also play a significant role in price prediction.
One methodology was developed for predicting the Bitcoin price from predicted trading
volume derived from Google Trends views. However, a limitation of such studies is their
often small sample size and the propensity of misinformation to spread through various
(social) media channels such as Twitter, or on message boards such as Reddit, which can
artificially inflate or deflate prices [5]. Liquidity on the Bitcoin exchanges is considerably
limited; as a result, the market suffers from a greater risk of manipulation. For this reason,
sentiment from social media is not considered further.

 Predictions based solely on previous price data result in low accuracy.
 Sentiment analysis is not considered as a factor because social media suffers from a greater risk of manipulation.

1.3 Methodologies
Support vector machine algorithms have been used successfully in the past, as reported in
earlier research work. In particular, support vector machines (SVMs) are suggested to work
well with small or noisy data, and they have been widely used in asset-return prediction
problems. SVM classification has the advantage of yielding a global optimum. In this project,
a predictive model is analyzed based on its input and the accuracy of its results. The model
was built using an SVM.
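
As a rough illustration of the classification set-up described above, the following sketch trains an SVM to classify the direction of the next price move from a window of previous prices. It uses scikit-learn and a synthetic random-walk series as a stand-in for the real data, so the window length and kernel choice are assumptions rather than the project's exact settings.

import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic price series standing in for historical Bitcoin prices.
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 1000)) + 100

window = 10
X, y = [], []
for i in range(len(prices) - window):
    X.append(prices[i:i + window])                              # previous `window` prices
    y.append(int(prices[i + window] > prices[i + window - 1]))  # 1 = next price moves up
X, y = np.array(X), np.array(y)

split = int(len(X) * 0.67)                                      # chronological train/test split
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
clf.fit(X[:split], y[:split])
print('Direction accuracy:', clf.score(X[split:], y[split:]))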

Having data is critical to building a machine learning model, and the quality of that data is
equally important. There therefore needs to be an algorithm and a procedure to check whether
the given data is valid. In this project, an anomaly detection model was implemented using
unsupervised learning. K-means clustering was used to group the data into m data points,
since there are no labels for the data. Once the groups were ready, the data was fed into an
unsupervised support vector machine to recognize the anomalies in the given sequence of
m data points.
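
A minimal sketch of such a pipeline is shown below. It assumes scikit-learn's KMeans and OneClassSVM as the concrete implementations; the report does not name the exact library calls or parameters, so the numbers here are purely illustrative.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

# Illustrative unlabeled data points (one feature column of prices).
rng = np.random.default_rng(0)
points = rng.normal(100, 5, size=(1000, 1))

# Step 1: group the unlabeled points with K-means.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(points)
# Distance to the nearest cluster centre acts as a compact feature per point.
distances = np.min(kmeans.transform(points), axis=1).reshape(-1, 1)

# Step 2: fit a one-class SVM on those features and flag outliers.
detector = OneClassSVM(nu=0.05, kernel='rbf', gamma='scale').fit(distances)
labels = detector.predict(distances)   # +1 = normal, -1 = anomaly
print('Anomalies found:', int(np.sum(labels == -1)))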

The neural networks built in this project were implemented using the Keras library. Keras
offers a neural network API that can run on TensorFlow or Theano. Keras was selected for its
user-friendly APIs and its ability to make use of multiple CPUs as well as GPUs. Keras
facilitates seamless prototyping. Like many Python libraries, Keras takes advantage of
modularity, providing users with independent, configurable modules. These modules are also
customizable, allowing developers to create new and more effective models to suit their
requirements. Since all the code is written purely in Python, Python developers do not find it
hard to debug or extend complex, modified code.
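
For orientation, a minimal Keras model in the style used later in this report might look like the following; the layer sizes and input shape here are arbitrary placeholders rather than the project's actual configuration.

from keras.models import Sequential
from keras.layers import Dense

# A tiny fully connected network assembled from Keras' independent modules.
model = Sequential()
model.add(Dense(8, activation='relu', input_shape=(10,)))  # 10 input features
model.add(Dense(1))                                        # one predicted value
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()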

1.4 Contributions
This project differs from previously conducted studies in that it is among the first to examine
the predictability of the Bitcoin price using a predictive model built with a deep learning
methodology.

1.5 Organization Of The Project
The rest of the project is organized as follows.
Chapter 2 discusses the related work done in the area of Bitcoin price prediction. This survey
greatly helped us to shape our project by pointing out precise areas for improvement.
Chapter 3 gives the software, hardware, functional and non-functional requirements of the
proposed project work.
Chapter 4 discusses the project design, with the system architecture, DFD, UML diagrams
and ER diagram.
Chapter 5 gives the overall implementation details of our project.
Chapter 6 covers testing, including the test cases and results.
The last chapter gives the conclusion and future prospects.

Chapter – 2
Literature Survey

In [1], the authors Pagnotta and Buraschi address the valuation of bitcoins and other
blockchain tokens in a new type of production economy: a decentralized financial network
(DN). An identifying property of these assets is that contributors to the DN's trust (miners)
receive units of the same asset used by consumers of DN services. Therefore, the overall
production (hashrate) and the Bitcoin price are jointly determined. Pagnotta and Buraschi
characterize the demand for bitcoins and the supply of hashrate, show that the equilibrium
price is obtained by solving a fixed-point problem, and study its determinants. Price-hashrate
“spirals” amplify demand and supply shocks.

Unlike traditional payment systems, Bitcoin has no owner and is governed by a computer
protocol. The work in [2] models Bitcoin as a platform that intermediates between users and
the computer servers (“miners”) which operate the Bitcoin payment system (BPS), and
studies the novel market design of this owner-less platform. Gur Huberman finds that the BPS
can eliminate inefficiencies due to market power but incurs other costs. Having fixed
transaction-processing capacity, the BPS experiences service delays, which motivate users to
pay for service priority. Free entry implies that miners cannot profitably affect the level of
fees paid by users. The paper derives closed-form formulas for the fees and waiting times and
studies their properties; compares pricing under the BPS to that under a traditional payment
system operated by a profit-maximizing firm; and suggests protocol design modifications to
enhance the platform's efficiency. The appendix describes and explains the main attributes of
Bitcoin and the underlying blockchain technology.

Owned by nobody and controlled by an almost immutable protocol, the Bitcoin payment
system is a platform with two main constituencies: users and profit-seeking miners who
maintain the system's infrastructure. That work seeks to understand the economics of the
system: How does the system raise revenue to pay for its infrastructure? How are usage fees
determined? How much infrastructure is deployed? What are the implications of changing
parameters in the protocol? A simplified economic model that captures the system's properties
answers these questions. Transaction fees and infrastructure level are determined in an
equilibrium of a congestion queueing game derived from the system's limited throughput. The
system eliminates the dead-weight loss from monopoly but introduces other inefficiencies and
requires congestion to raise revenue and fund infrastructure. Huberman and Leshno explore
the future potential of such systems and provide design suggestions [3].

The work in [4] tests the technical trading rule of the moving average (MA) in a long-only
portfolio using exchange-traded funds (ETFs). Huang and Huang also propose a
quasi-intraday version of the MA strategy (QUIMA) that allows investors to trade
immediately upon observing MA crossover signals. They find that (1) the QUIMA strategy
outperforms the traditional version of the MA strategy, which only trades at the close of a
trading day, when the long-term MA lag length is not too long; (2) the documented
profitability of the MA strategy on indices is greatly reduced on ETFs, mainly due to more
frequent and larger opening gaps in ETF prices than in indices; and (3) relative to the
buy-and-hold strategy, MA strategies have lower returns but better risk-adjusted performance
measures such as the CAPM alpha. In addition, Huang and Huang find that among various
long-term MA lengths, the 10-day MA turns out to be overly exploited by investors, as its
performance is significantly lower than those of surrounding MA lengths. Overall, their
findings indicate that the profitability of the MA trading rule is lower on tradable ETFs than
on non-tradable indices.
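
For readers unfamiliar with the rule being tested, a moving-average crossover signal can be sketched in a few lines of pandas; this is a generic illustration, not the strategy or data from [4].

import pandas as pd

# Illustrative daily close prices; in practice these would come from ETF or index data.
close = pd.Series([100, 101, 103, 102, 105, 107, 106, 108, 110, 109], dtype=float)

short_ma = close.rolling(window=3).mean()   # short-term moving average
long_ma = close.rolling(window=5).mean()    # long-term moving average

# Long-only rule: hold the asset while the short MA is above the long MA.
signal = (short_ma > long_ma).astype(int)
print(pd.DataFrame({'close': close, 'short': short_ma, 'long': long_ma, 'signal': signal}))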

The authors in [5] developed a model of user adoption and use of a virtual currency (such as
Bitcoin), focusing on the dynamics of adoption in the presence of frictions arising from
exchange-rate uncertainty. The theoretical model can be used to analyze how market
fundamentals determine the exchange rate of fiat currency to Bitcoin. Empirical evidence
from Bitcoin prices and utilization provides mixed evidence about the ability of the model to
explain prices. Further analysis of the history of all individual transactions on Bitcoin's public
ledger establishes patterns of adoption and utilization across user types, transaction types, and
geography. They show that as of mid-2015, active usage was not growing quickly, and that
investors and infrequent users held the majority of bitcoins. They also document the extent to
which the attributes of anonymous Bitcoin users can be inferred from their behavior, and find
that users who engage in illegal activity are more likely to try to protect their financial
privacy.

Chapter – 3
Requirement Specification

3.1 Hardware Requirements


The hardware requirements may serve as the basis for a contract for the
implementation of the system and should therefore be a complete and consistent specification
of the whole system. They are used by software engineers as the starting point for the system
design. They state what the system should do, not how it should be implemented.

 PROCESSOR : Intel i5
 RAM : 4 GB
 HARD DISK : 500 GB

3.2 Software Requirements


The software requirements document is the specification of the system. It should include both
a definition and a specification of requirements. It is a statement of what the system should do
rather than how it should do it. The software requirements provide a basis for creating the
software requirements specification. They are useful in estimating cost, planning team
activities, performing tasks, and tracking the team's progress throughout the development
activity.

 Operating System : Windows 10
 IDE/Editor : Anaconda / Spyder3
 Programming Language : Python

Anaconda Navigator is an open-source distribution for Python. It focuses on providing IDEs
and programming environments for data science and machine learning. Anaconda is widely
used because of the custom packages that have been built for it. It is compatible with
Windows, Linux and macOS. Anaconda also supports development in R and has a wide
community base surrounding both R and Python development.

The neural networks built in this project were implemented using the Keras library.
Keras offers a neural network API that can run on TensorFlow or Theano. Keras was selected
for its user-friendly APIs and its ability to make use of multiple CPUs as well as GPUs. Keras
facilitates seamless prototyping. Like many Python libraries, Keras takes advantage of
modularity, providing users with independent, configurable modules. These modules are also
customizable, allowing developers to create new and more effective models to suit their
requirements. Since all the code is written purely in Python, Python developers do not find it
hard to debug or extend complex, modified code.

3.3 Functional Requirements


A functional requirement defines a function of a software system or of one of its components.
A function is described as a set of inputs, the behavior, and the outputs. For this project, the
system must gather historical Bitcoin price data, preprocess and segment it, train the
prediction model, and present the forecasted price movement to the user.

3.4 Non-Functional Requirements


Non-functional requirements describe how the system should perform rather than what it
should do. For this project they include the accuracy of the predictions (evaluated through the
RMSE of the model), the time and computational resources needed to train and tune the
neural network, and the portability of the environment, since Anaconda and Python run on
Windows, Linux and macOS.

Chapter – 4
System Design
4.1 General
Design engineering deals with the various UML (Unified Modelling Language) diagrams used
for the implementation of the project. Design is a meaningful engineering representation of a
thing that is to be built. Software design is a process through which the requirements are
translated into a representation of the software. Design is the place where quality is rendered
in software engineering.

4.2 System Architecture

[Figure: block diagram of the pipeline in which the user gathers data from an online website, and the data then flows through preprocessing, segmentation and a DNN to produce the forecast.]

Fig 4.1: System Architecture

4.3 Data Flow Diagram
Level 0

[Figure: level-0 DFD showing the user's data flowing from the database through data preprocessing and segmentation to prediction.]

Fig 4.2: Level-0 Data Flow Diagram


Level 1

[Figure: level-1 DFD in which data assembly feeds preprocessing (remove noise, remove nulls), then regularization (handle overfitting, improve accuracy), then prediction (segmentation, implementation), ending with the predicted price.]

Fig 4.3: Level-1 Data Flow Diagram

4.4 UML Diagrams
4.4.1 Use Case diagram

[Figure: use case diagram with the actors User and Device and the use cases data assemblage, data cleaning/preprocessing (removing noise, getting noiseless data), segmentation, DNN, regularization, implementation and forecasting.]

Fig 4.4: Use Case Diagram


Explanation:
The main purpose of a use case diagram is to show which system functions are performed for
which actor and to depict the roles of the actors in the system. The above diagram has the user
as an actor; each use case plays a certain role in achieving the overall goal.

4.4.2 Class Diagram

[Figure: class diagram with the classes Data assemblage (storing and retrieving the data), Data cleaning (cleaning the data, removing nulls, removing noise), Regularization (making the model more flexible, handling overfitting and accuracy) and Forecasting (predicting the future data).]

Fig 4.5: Class Diagram


Explanation
The class diagram represents how the classes, with their attributes and methods, are linked
together. The diagram above shows the various classes involved in our project.

4.4.3 Object Diagram

[Figure: object diagram linking the Data assemblage, Data cleaning, Regularization and Forecasting objects.]

Fig 4.6: Object Diagram


Explanation:
The above diagram shows the flow of objects between the classes. An object diagram shows a
complete or partial view of the structure of a modeled system. Here it represents how the
objects for data assemblage, data cleaning, regularization and forecasting are linked together.

4.4.4 Component Diagram

[Figure: component diagram in which the User component connects to data assemblage and data cleaning, which feed the DNN, regularization, implementation and forecasting components.]

Fig 4.7: Component Diagram

4.4.5 Deployment Diagram

[Figure: deployment diagram in which the User node hosts data assemblage and data cleaning, connected to the DNN, regularization, implementation and forecasting nodes.]

Fig 4.8: Deployment Diagram

4.4.6 Sequence Diagram

[Figure: sequence diagram between User, Device and DNN with the messages 1: input data, 2: preprocessing, 3: cleaning data, 4: segmentation, 5: regularization, 6: data passed to the DNN, 7: create nodes, 8: create weights, 9: classify, 10: change weights, 11: forecasting.]

Fig 4.9: Sequence Diagram


Explanation:
A sequence diagram shows how objects interact with one another, arranged in time sequence.
Here the user supplies the input data to the device, which preprocesses, cleans, segments and
regularizes it; the DNN then creates its nodes and weights, classifies the data, adjusts the
weights and returns the forecast.

4.4.7 Collaboration Diagram

[Figure: collaboration diagram in which the User sends the input data to the Device, which performs preprocessing, cleaning, segmentation and regularization, passes the data to the DNN (create nodes, create weights, classify, change weights), and returns the forecast.]

Fig 4.10: Collaboration Diagram

4.4.8 State Diagram

[Figure: state diagram with the states Data set → Data cleaning → Data segmentation → Regularization → Implementation → Forecasting.]

Fig 4.11: State Diagram


Explanation:
State diagrams are loosely defined diagrams that show workflows of stepwise activities and
actions, with support for choice, iteration and concurrency. State diagrams require that the
system described be composed of a finite number of states; sometimes this is indeed the case,
while at other times it is a reasonable abstraction. Many forms of state diagrams exist, which
differ slightly and have different semantics.

4.4.9 Activity Diagram

[Figure: activity diagram with the flow Dataset → Data cleaning → validity check (if not valid, return to cleaning) → Data segmentation → Regularization → Implementation → Forecasting.]

Fig 4.12: Activity Diagram


Explanation:
Activity diagrams are graphical representations of workflows of stepwise activities
and actions with support for choice, iteration and concurrency. In the Unified Modeling
Language, activity diagrams can be used to describe the business and operational step-by-step
workflows of components in a system. An activity diagram shows the overall flow of control.

4.5 E-R Diagram

[Figure: E-R diagram in which the system gathers the dataset from poloniex.com, the raw data passes through preprocessing (cleaning), segmentation, classification and forecasting, and the prediction is displayed to the user.]

Fig 4.13: E-R Diagram


Explanation:
Entity-Relationship Model (ERM) is an abstract and conceptual representation of data. Entity-
relationship modeling is a database modeling method, used to produce a type of conceptual
schema or semantic data model of a system, often a relational database.

Chapter – 5
Implementation
General
This project is built primarily on Python. Python is a high-level programming language which
is very efficient for building machine-learning algorithms. Since it is an open-source language,
it has many open-source libraries built by third-party institutions such as Google, which
facilitate the construction of complex programs and algorithms. Complex programs can be
written in fewer lines of code in Python than in Java or other object-oriented languages,
thanks to Python's modular features. It can also be used across a wide range of platforms.

5.1 Python
Python is a high-level, interpreted, interactive and object-oriented scripting language. Python
is designed to be highly readable. It uses English keywords frequently, whereas other
languages use punctuation, and it has fewer syntactical constructions than other languages.

5.2 History of Python


Python was developed by Guido van Rossum in the late eighties and early nineties at the
National Research Institute for Mathematics and Computer Science in the Netherlands.
Python is derived from many other languages, including ABC, Modula-3, C, C++, Algol-68,
SmallTalk, and Unix shell and other scripting languages.
Python is copyrighted. Like Perl, Python source code is available under an open-source
license (the Python Software Foundation License, which is GPL-compatible).
Python is now maintained by a core development team, with Guido van Rossum long holding
a vital role in directing its progress.

5.3 Importance of Python


 Python is Interpreted − Python is processed at runtime by the interpreter. You do
not need to compile your program before executing it. This is similar to Perl and
PHP.
 Python is Interactive − You can actually sit at a Python prompt and interact with the
interpreter directly to write your programs.

 Python is Object-Oriented − Python supports Object-Oriented style or technique of
programming that encapsulates code within objects.
 Python is a Beginner's Language − Python is a great language for the beginner-level
programmers and supports the development of a wide range of applications from
simple text processing to WWW browsers to games.

5.4 Features of Python


 Easy-to-learn − Python has few keywords, simple structure, and a clearly defined
syntax. This allows the student to pick up the language quickly.
 Easy-to-read − Python code is more clearly defined and visible to the eyes.
 Easy-to-maintain − Python's source code is fairly easy-to-maintain.
 A broad standard library − Python's bulk of the library is very portable and cross-
platform compatible on UNIX, Windows, and Macintosh.
 Interactive Mode − Python has support for an interactive mode which allows
interactive testing and debugging of snippets of code.
 Portable − Python can run on a wide variety of hardware platforms and has the same
interface on all platforms.
 Extendable − You can add low-level modules to the Python interpreter. These
modules enable programmers to add to or customize their tools to be more efficient.
 Databases − Python provides interfaces to all major commercial databases.
 GUI Programming − Python supports GUI applications that can be created and
ported to many system calls, libraries and windows systems, such as Windows MFC,
Macintosh, and the X Window system of Unix.
 Scalable − Python provides a better structure and support for large programs than
shell scripting.
Apart from the above-mentioned features, Python has a big list of good features, few are
listed below −
 It supports functional and structured programming methods as well as OOP.
 It can be used as a scripting language or can be compiled to byte-code for building
large applications.
 It provides very high-level dynamic data types and supports dynamic type checking.
 It supports automatic garbage collection.
 It can be easily integrated with C, C++, COM, ActiveX, CORBA, and Java.

5.5 Libraries used in python
 numpy - mainly useful for its N-dimensional array objects.
 pandas - Python data analysis library, including structures such as dataframes.
 matplotlib - 2D plotting library producing publication quality figures.
 scikit-learn - the machine learning algorithms used for data analysis and data mining
tasks.

Figure 5.1 : NumPy, Pandas, Matplotlib, Scikit-learn

5.6 Modules
5.6.1 Data Collection: We collect the historical data from poloniex.com using a REST API
call. The API returns data from 2015 to the present day at time intervals of 5 minutes and
2 hours. The collected data is then placed into a DataFrame.
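
A sketch of this collection step is given below. It assumes Poloniex's legacy public returnChartData endpoint with a USDT_BTC pair and a 300-second (5-minute) candle period; the exact endpoint, pair and parameters used in the project are not listed in this report, so treat them as illustrative.

import time
import pandas as pd
import requests

# Assumed legacy Poloniex REST endpoint; adjust to the exchange's current API if needed.
URL = 'https://poloniex.com/public'
params = {
    'command': 'returnChartData',
    'currencyPair': 'USDT_BTC',
    'start': 1420070400,        # 2015-01-01 as a Unix timestamp
    'end': int(time.time()),    # now
    'period': 300,              # 5-minute candles
}

response = requests.get(URL, params=params, timeout=30)
response.raise_for_status()

# Each record is expected to hold date, open, high, low, close and volume fields.
df = pd.DataFrame(response.json())
df['datetime'] = pd.to_datetime(df['date'], unit='s')
print(df.head())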

5.6.2 Data Preprocessing: The DataFrame contains all of the required columns as well as a
few additional ones. In order to feed only relevant data into our model, the extra columns are
removed and the filtered data is stored in a CSV file. The exported CSV file is later read into
different parts of the overall program and filtered again to obtain the relevant data.
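
A minimal version of this filtering step might look as follows; the column names mirror those appearing in the sample code of Section 5.7, and exactly which columns are dropped is an assumption.

import pandas as pd

# Illustrative raw frame standing in for the one returned by the collection step.
raw = pd.DataFrame({
    'datetime': pd.date_range('2017-06-28', periods=3, freq='5min'),
    'last': [2550.0, 2552.5, 2549.0],
    'volume': [12.3, 9.8, 11.1],
    'unused_column': ['a', 'b', 'c'],   # example of an extra column to drop
})

# Keep only the columns the model needs and export them for later stages.
filtered = raw[['datetime', 'last', 'volume']]
filtered.to_csv('bitcoin.csv', index=False)

# Later stages reload the CSV and narrow it further to the column they need.
prices = pd.read_csv('bitcoin.csv')[['last']]
print(prices.head())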

5.6.3 Convolutional Neural Network: Convolutional neural networks (CNNs) are a deep
learning methodology normally used for classification. Here, however, we adapt one for
prediction. By setting up a one-dimensional network instead of a 2D or 3D one, we can
predict the output by feeding in a list of the close prices from our dataset.
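
A sketch of a one-dimensional convolutional model for this kind of sequence prediction is given below; the number of filters, kernel size and window length are placeholders, since the report does not specify the exact architecture.

import numpy as np
from keras.models import Sequential
from keras.layers import Conv1D, Flatten, Dense

look_back = 10                              # number of past close prices per sample

# Illustrative data: windows of `look_back` prices and the next price as the target.
rng = np.random.default_rng(0)
X = rng.random((500, look_back, 1))
y = rng.random((500, 1))

model = Sequential()
model.add(Conv1D(filters=32, kernel_size=3, activation='relu', input_shape=(look_back, 1)))
model.add(Flatten())
model.add(Dense(1))                         # single predicted close price
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X, y, epochs=5, batch_size=32, verbose=0)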

5.6.4 Recurrent Neural Network (RNN): RNNs are also a deep learning methodology,
developed in the late 1980s. This type of neural network is best suited to sequential data. It is
more efficient because it is capable of remembering the weights at each layer and passing
them on to the next layer. The RNN makes use of internal memory to store the sequence of
data per row, with the next predictable value in the adjacent upper-right cell. In the LSTM
variant used here, the inputs are run through three gates: the forget gate, the input gate and the
output gate. In each of the gates a sigmoid function is applied so that the output is a value
between 0 and 1. Therefore, when feeding values into this layer, we scale and transform our
input data, which is reshaped to fit the neural network.

5.7 Sample Code Segments


import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math

from sklearn.preprocessing import MinMaxScaler


from sklearn.metrics import mean_squared_error

from keras.models import Sequential


from keras.layers import Dense
from keras.layers import LSTM

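# Load the exported Bitcoin CSV and keep only the BTC/USD records.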
data = pd.read_csv("bitcoin.csv")
print(data.head())
print(data['rpt_key'].value_counts())
df = data.loc[(data['rpt_key'] == 'btc_usd')]

print(df.head())
df = df.reset_index(drop=True)
df['datetime'] = pd.to_datetime(df['datetime_id'])
df = df.loc[df['datetime'] > pd.to_datetime('2017-06-28 00:00:00')]

df = df[['datetime', 'last', 'diff_24h', 'diff_per_24h', 'bid', 'ask', 'low', 'high', 'volume']]
print(df.head())
df = df[['last']]
dataset = df.values
dataset = dataset.astype('float32')

print(dataset)
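# Scale the prices into the [0, 1] range expected by the network's sigmoid activations.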
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)

print(dataset)
train_size = int(len(dataset) * 0.67)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size, :], dataset[train_size:len(dataset), :]
print(len(train), len(test))

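# Build supervised windows: each row of X holds `look_back` consecutive prices, y is the next price.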
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset)-look_back-1):
        a = dataset[i:(i+look_back), 0]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 0])
    return np.array(dataX), np.array(dataY)

look_back = 10
trainX, trainY = create_dataset(train, look_back=look_back)
testX, testY = create_dataset(test, look_back=look_back)

print(trainX)
print(trainY)
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))

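# A small LSTM network: 4 LSTM units followed by a single-output dense layer.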
model = Sequential()
model.add(LSTM(4, input_shape=(1, look_back)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, epochs=100, batch_size=256, verbose=2)

trainPredict = model.predict(trainX)
testPredict = model.predict(testX)

print("trainPredict:",trainPredict)
print("testPredict:",testPredict)

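# Undo the scaling so the predictions and targets are back in price units.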
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])

trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:, 0]))


print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:, 0]))
print('Test Score: %.2f RMSE' % (testScore))

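# Shift the train and test predictions so they line up with the original series when plotted.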
trainPredictPlot = np.empty_like(dataset)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[look_back:len(trainPredict) + look_back, :] = trainPredict

testPredictPlot = np.empty_like(dataset)
testPredictPlot[:, :] = np.nan
testPredictPlot[len(trainPredict) + (look_back * 2) + 1:len(dataset) - 1, :] = testPredict

plt.plot(df['last'], label='Actual')
plt.plot(pd.DataFrame(trainPredictPlot, columns=["close"], index=df.index).close,
label='Training')

plt.plot(pd.DataFrame(testPredictPlot, columns=["close"], index=df.index).close,
label='Testing')
plt.legend(loc='best')
plt.show()

5.8 Snapshots

Fig 5.2 Spyder IDE Interface

Fig 5.3 Prediction of Train Data and Test Data with RMSE Score

Fig 5.4 Final Output

Chapter - 6
Software Testing
6.1 General
The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the
functionality of components, sub-assemblies, assemblies and/or a finished product. It is the
process of exercising software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of tests; each test type addresses a specific testing requirement.

6.2 Developing Methodologies


The test process is initiated by developing a comprehensive plan to test the general
functionality and special features on a variety of platform combinations. Strict quality control
procedures are used. The process verifies that the application meets the requirements specified
in the system requirements document and is bug free. The following are the considerations
used to develop the framework for the testing methodologies.

6.3 Types of Tests
6.3.1 Unit testing
Unit testing involves the design of test cases that validate that the internal program
logic is functioning properly and that program inputs produce valid outputs. All decision
branches and internal code flow should be validated. It is the testing of individual software
units of the application and is done after the completion of an individual unit, before
integration. This is structural testing that relies on knowledge of the unit's construction and is
invasive. Unit tests perform basic tests at the component level and test a specific business
process, application, and/or system configuration. Unit tests ensure that each unique path of a
business process performs accurately to the documented specifications and contains clearly
defined inputs and expected results.

6.3.2 Functional test
Functional tests provide systematic demonstrations that functions tested are available
as specified by the business and technical requirements, system documentation, and user
manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.

6.3.3 System Test


System testing ensures that the entire integrated software system meets requirements.
It tests a configuration to ensure known and predictable results. An example of system testing
is the configuration oriented system integration test. System testing is based on process
descriptions and flows, emphasizing pre-driven process links and integration points.

6.3.4 Performance Test


Performance testing ensures that the output is produced within the time limits, and
measures the time taken by the system to compile, to respond to users, and to retrieve results
for requests sent to it.

6.3.5 Integration Testing


Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by interface
defects.
The task of the integration test is to check that components or software applications,
e.g. components in a software system or – one step up – software applications at the company
level – interact without error.

6.3.6 Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.


6.3.7 Build the test plan


Any project can be divided into units that can then be tested in detail. A testing
strategy is then carried out for each of these units. Unit testing helps to identify the possible
bugs in the individual components, so that the components containing bugs can be identified
and rectified.

6.4 Test Cases :


We tested by running multiple variations of the same data set to identify which gave the
best result.

6.4.1 Neural Network Tuning:


We tested the efficiency of the neural network's outputs by tuning the number of neurons
and by increasing and decreasing the number of layers.

6.4.2 Gap Analysis:


We tested the code in different environments to detect potential gaps and failure scenarios,
and customized the code to suit the needs of the hosting environment.

6.4.3 Testing the CNN with Different Numbers of Layers

 Testing with two layers - Two layers did not give satisfactory results, as the model
predicted values with more than a 30% difference from the original values. This was caused
by an insufficient number of neurons and layers.
 Testing with three layers - The three-layered approach displayed promising results,
with less than a 5% difference between the predicted and actual values.
 Testing with four layers and LeakyReLU - The four-layered approach had
significantly better results than the two-layered approach but could not outperform the
three-layered approach.

Chapter – 7
Conclusions & Future Enhancements
7.1 Observations
Deep neural network (DNN)-based models performed the best for predicting price ups and
downs (classification). In addition, a simple profitability analysis showed that classification
models were more effective than regression models for algorithmic trading. Overall, the
performances of the proposed deep learning-based prediction models were comparable.
Experimental results also showed that although long short-term memory (LSTM)-based
prediction models slightly outperformed the other prediction models for regression problems,
DNN-based prediction models performed the best for classification problems.

7.2 Limitations
 The model requires a high number of neurons to perform automatic training.
 Training the model and tuning the neural network are required to achieve good accuracy.
 Prediction models are going to become more complex and effective in the future due to the
increase in data collection and the development of stronger data analytics strategies. The
only factor that might hold us back is the need for more computational power.

7.3 Future Work


There is always room for improvement and, with the rate at which deep learning is growing,
these improvements will surely be possible:
 Train the model on a larger data set to increase prediction accuracy.
 Design a model with a higher number of neurons and run it on a supercomputer or a cluster
of systems.

7.4 Conclusion
Predicting the future will always be at the top of the list of uses for machine learning
algorithms. In this project we have attempted to predict the price of Bitcoin using two deep
learning methodologies. This work focuses on the development of project-based learning in
the field of computer science engineering, taking into account the problem definition,
progression, student assessment and the use of hands-on activities based on deep learning
algorithms to develop an application which can predict the Bitcoin price.

REFERENCES
[1] Pagnotta, E. and A. Buraschi, “An Equilibrium Valuation of Bitcoin and Decentralized
Network Assets”, 2018.
[2] Huberman, G., “An Economic Analysis of the Bitcoin Payment System”, 2019.
[3] Huberman, G. and J. D. Leshno, “Monopoly Without a Monopolist: An Economic Analysis
of the Bitcoin Payment System”, 2017.
[4] Huang, J.-Z. and Z. J. Huang, “Testing Moving Average Trading Strategies on ETFs”,
2018.
[5] Athey, S., I. Parashkevov et al., “Bitcoin Pricing, Adoption, and Usage: Theory and
Evidence”, 2016.
