LP III
LABORATORY MANUAL
YEAR: 2023-24
Program Outcomes
1. Engineering knowledge:
Graduates can apply the knowledge of mathematics, science, engineering fundamentals and an engineering
specialization to Civil Engineering related problems.
2. Problem analysis:
An ability to identify, formulate, review research literature, and analyse Civil engineering problems reaching
substantiated conclusions using principles of mathematics and engineering sciences.
3. Design/development of solutions:
An ability to plan, analyse, design, and implement engineering problems and design system components or processes
to meet the specified needs.
4. Conduct investigations of complex problems:
An ability to use research-based knowledge and research methods including design of experiments, analysis and
interpretation of data, and synthesis of the information to provide valid conclusions.
5. Modern tool usage:
An ability to apply appropriate techniques, resources, and modern engineering and IT tools including prediction and
modelling to complex engineering activities with an understanding of the limitations.
6. The engineer and society:
An ability to apply contextual knowledge to assess societal, legal issues and the consequent responsibilities relevant
to the professional engineering practice.
7. Environment and sustainability:
An ability to understand the impact of the professional engineering solutions in societal and environmental contexts,
and demonstrate the knowledge of, and need for sustainable development.
8. Ethics:
An ability to apply ethical principles and commit to professional ethics and responsibilities and norms of the
engineering practice.
9. Individual and teamwork:
An ability to function effectively as an individual, and as a member or leader in diverse teams, and in multidisciplinary
settings to accomplish a common goal.
10. Communication:
An ability to communicate effectively on engineering activities with the engineering community and with society at
large, such as, being able to comprehend and write effective reports and design documentation and make effective
presentations.
11. Project management and finance:
Ability to demonstrate knowledge and understanding of the engineering and management principles and apply these
to one’s own work, as a member and leader in a team, to manage projects and in multidisciplinary environments.
12. Life-long learning:
An ability to engage in independent and life-long learning in the broadest context of technological change.
Dataset link:
https://www.kaggle.com/datasets/yasserh/uber-fares-dataset
9. Classify the email using the binary classification method. Email spam detection has two states: a) Normal State – Not Spam, b) Abnormal State – Spam. Use K-Nearest Neighbors and Support Vector Machine for classification. Analyze their performance.
Dataset link: The emails.csv dataset on Kaggle
https://www.kaggle.com/datasets/balaka18/email-spam-classification-dataset-csv
Mapped outcomes: CO1, CO2, CO3; PO1–PO6, PO8, PO9, PO11, PO12
10. Given a bank customer, build a neural network-based classifier that can determine whether they will leave or not in the next 6 months.
Dataset Description: The case study is from an open-source dataset from Kaggle. The dataset contains 10,000 sample points with 14 distinct features such as CustomerId, CreditScore, Geography, Gender, Age, Tenure, Balance, etc.
Link to the Kaggle project:
https://www.kaggle.com/barelydedicated/bank-customer-churn-modeling
Perform the following steps:
1. Read the dataset.
2. Distinguish the feature and target set and divide the data set into training and test sets.
3. Normalize the train and test data.
4. Initialize and build the model. Identify the points of improvement and implement the same.
5. Print the accuracy score and confusion matrix (5 points).
Mapped outcomes: CO1, CO2, CO3; PO1–PO6, PO8, PO9, PO11, PO12
11. Implement Gradient Descent Algorithm to find the local minima of a function. For example, find the local minima of the function y = (x + 3)² starting from the point x = 2.
Mapped outcomes: CO1, CO2, CO3; PO1–PO6, PO8, PO9, PO11, PO12
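Assignment 11 above can be sketched in a few lines of Python; the learning rate and iteration count below are illustrative choices, not values specified in the manual:

```python
# Gradient descent sketch: minimize y = (x + 3)**2 starting from x = 2.
# The gradient is dy/dx = 2 * (x + 3).

def gradient_descent(start, lr=0.1, iterations=100):
    x = start
    for _ in range(iterations):
        grad = 2 * (x + 3)      # derivative of (x + 3)**2
        x = x - lr * grad       # step against the gradient
    return x

minimum = gradient_descent(start=2)
print(round(minimum, 4))  # -3.0
```

Each step multiplies the distance from the minimum by (1 - 2·lr), so the iterate converges geometrically to x = -3.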
12. Implement K-Nearest Neighbors algorithm on the diabetes.csv dataset. Compute confusion matrix, accuracy, error rate, precision and recall on the given dataset.
Dataset link:
https://www.kaggle.com/datasets/abdallamahgoub/diabetes
Mapped outcomes: CO1, CO2, CO3; PO1–PO6, PO8, PO9, PO11, PO12
13. Implement K-Means clustering / hierarchical clustering on the sales_data_sample.csv dataset. Determine the number of clusters using the elbow method.
Dataset link:
https://www.kaggle.com/datasets/kyanyoga/sample-sales-data
Mapped outcomes: CO1, CO2, CO3; PO1–PO6, PO8, PO9, PO11, PO12
EXPERIMENT NO. 01
Title: Write non-recursive and recursive programs to calculate Fibonacci numbers and analyze their time and space complexity.
Theory:
● The Fibonacci series is named after the Italian mathematician Leonardo Pisano Bogollo, later known as Fibonacci.
● The Fibonacci series is obtained by taking the sum of the previous two numbers in the series, given that the first and second terms are 0 and 1, respectively.
● In mathematical terms, the sequence Fn of Fibonacci numbers is defined by the recurrence
relation
Fn = Fn-1 + Fn-2
with seed values F0 = 0 and F1 = 1.
The following are different methods to get the nth Fibonacci number.
Here, the function rec_Fibonacci() makes a call to itself. This can be easily understood by the below illustration:
Algorithm rec_Fibonacci(n)
    if (n <= 1)
        return n;
    else
        return rec_Fibonacci(n - 1) + rec_Fibonacci(n - 2);
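A runnable Python version of both approaches, as a minimal sketch of the pseudocode above:

```python
# Recursive and iterative (non-recursive) Fibonacci.

def rec_fibonacci(n):
    # O(2^n) time: each call spawns two further calls
    if n <= 1:
        return n
    return rec_fibonacci(n - 1) + rec_fibonacci(n - 2)

def iter_fibonacci(n):
    # O(n) time, O(1) space: keep only the last two terms
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([iter_fibonacci(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```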
Analysis
● The recursive implementation has exponential time complexity, O(2ⁿ), since each call spawns two further calls. The non-recursive (iterative) implementation is linear, O(n): the sum of two terms is computed once per step, repeated n times.
● The space complexity of the iterative Fibonacci implementation is O(1), while the recursive version needs O(n) stack space.
The Fibonacci series finds application in different fields in our day-to-day lives. The different
patterns found in a varied number of fields from nature, to music, and to the human body
follow the Fibonacci series.
Some of the applications of the series are given as,
● It is used in the grouping of numbers and used to study different other special
mathematical sequences.
● It finds application in Coding (computer algorithms, distributed systems, etc). For
example, Fibonacci series are important in the computational run-time analysis of
Euclid's algorithm, used for determining the GCF of two integers.
● It is applied in numerous fields of science like quantum mechanics, cryptography, etc.
● In finance market trading, Fibonacci retracement levels are widely used in technical analysis.
Conclusion:
In this way, the concept of the Fibonacci series is explored using recursive and non-recursive methods, and its time and space complexity is analyzed.
EXPERIMENT NO. 02
Objective: Students should be able to understand and solve Huffman Encoding and analyze its time and space complexity using a greedy strategy.
Theory:
What is a Greedy Method?
● A greedy algorithm is an approach for solving a problem by selecting the best option available at the moment. It doesn't worry whether the current best result will bring the overall optimal result.
● The algorithm never reverses an earlier decision even if the choice is wrong. It works in a top-down approach.
● This algorithm may not produce the best result for all problems, because it always goes for the local best choice to produce the global best result.
Greedy Algorithm
Huffman Encoding
Huffman coding is a lossless data compression algorithm. The idea is to assign variable-length codes to input characters; the lengths of the assigned codes are based on the frequencies of the corresponding characters.
The most frequent character gets the smallest code and the least frequent character gets the largest code. The variable-length codes assigned to input characters are prefix codes, meaning the codes (bit sequences) are assigned in such a way that the code assigned to one character is not a prefix of the code assigned to any other character. This is how Huffman coding makes sure that there is no ambiguity when decoding the generated bit stream.
Example: Consider six characters a, b, c, d, e, f with frequencies 5, 9, 12, 13, 16 and 45.
Step 1: Build a min heap that contains 6 nodes, where each node represents the root of a tree with a single node.
Step 2: Extract the two minimum-frequency nodes from the min heap. Add a new internal node with frequency 5 + 9 = 14.
Now the min heap contains 5 nodes, where 4 nodes are roots of trees with a single element each, and one heap node is the root of a tree with 3 elements:
Character       Frequency
c               12
d               13
Internal Node   14
e               16
f               45
Step 3: Extract the two minimum-frequency nodes from the heap. Add a new internal node with frequency 12 + 13 = 25.
Now the min heap contains 4 nodes, where 2 nodes are roots of trees with a single element each, and two heap nodes are roots of trees with more than one node:
Character       Frequency
Internal Node   14
e               16
Internal Node   25
f               45
Step 4: Extract the two minimum-frequency nodes. Add a new internal node with frequency 14 + 16 = 30.
Step 5: Extract the two minimum-frequency nodes. Add a new internal node with frequency 25 + 30 = 55.
Step 6: Extract the two minimum-frequency nodes. Add a new internal node with frequency 45 + 55 = 100.
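The steps above can be sketched in Python with the standard-library heapq module. The frequencies a:5 and b:9 are inferred from the 5 + 9 = 14 merge in Step 2:

```python
import heapq

# Huffman coding sketch for the frequencies in the example above.

def huffman_codes(freq):
    # Each heap entry: [frequency, tie-breaker, [char, code-so-far], ...]
    heap = [[f, i, [ch, ""]] for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)   # two minimum-frequency nodes
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]   # prefix 0 on the low branch
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]   # prefix 1 on the high branch
        heapq.heappush(heap, [lo[0] + hi[0], count] + lo[2:] + hi[2:])
        count += 1
    return {ch: code for ch, code in heap[0][2:]}

codes = huffman_codes({"a": 5, "b": 9, "c": 12, "d": 13, "e": 16, "f": 45})
print(codes)  # most frequent character ("f") gets the shortest code
```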
Time Complexity-
O(n log n), where n is the number of unique characters. If there are n nodes, extractMin() is called 2(n − 1) times. extractMin() takes O(log n) time as it calls minHeapify(). So the overall time complexity of Huffman coding is O(n log n). If the input array is sorted, there exists a linear-time algorithm.
Conclusion: In this way concept of Huffman Encoding is explored using greedy method.
EXPERIMENT NO. 03
Title: Write a program to solve a fractional Knapsack problem using a greedy method.
Objective: To analyze the time and space complexity of the fractional knapsack problem using a greedy method.
Theory:
Knapsack Problem
In the knapsack problem, items are put into a knapsack of limited weight capacity such that:
● The value or profit obtained by putting the items into the knapsack is maximum.
● The weight limit of the knapsack is not exceeded.
Fractional Knapsack Problem
In the fractional knapsack problem, items can be broken into smaller pieces, so a fraction of an item may be put into the knapsack. It is solved greedily as follows:
Step-01: For each item, compute its value / weight ratio.
Step-02: Arrange all the items in decreasing order of their value / weight ratio.
Step-03: Start putting the items into the knapsack beginning from the item with the highest ratio. Put as many items as you can into the knapsack.
Example:
Time Complexity-
• The main time-taking step is sorting all items in decreasing order of their value / weight ratio.
• If the items are already arranged in the required order, the while loop takes O(n) time.
• The average time complexity of quick sort is O(n log n).
• Therefore, the total time taken including the sort is O(n log n).
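The greedy procedure can be sketched as follows; the item values and weights are the classic textbook instance, not taken from the manual:

```python
# Greedy fractional knapsack: sort by value/weight ratio, then fill the
# sack, taking a fraction of the last item if needed.
# Items are (value, weight) pairs.

def fractional_knapsack(items, capacity):
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity >= weight:          # take the whole item
            total += value
            capacity -= weight
        else:                           # take only a fraction of it
            total += value * capacity / weight
            break
    return total

# Classic example: capacity 50 with items (60,10), (100,20), (120,30)
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```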
Conclusion: In this way the concept of fractional knapsack is explained using a greedy method.
Assignment No: 4
Title: Write a program to solve a 0-1 Knapsack problem using dynamic programming strategy.
Objective: To analyze the time and space complexity of the 0-1 knapsack problem using dynamic programming.
Theory:
What is Dynamic Programming?
● A Dynamic Programming algorithm solves each sub-problem just once and then saves its answer in a table, thereby avoiding the work of re-computing the answer every time.
● Two main properties of a problem suggest that it can be solved using Dynamic Programming: overlapping sub-problems and optimal substructure.
● For example, Binary Search does not have overlapping sub-problems, whereas the recursive program for Fibonacci numbers has many overlapping sub-problems.
0/1 knapsack problem is solved using dynamic programming in the following steps-
Step-01:
● Draw a table say ‘T’ with (n+1) number of rows and (w+1) number of columns.
● Fill all the boxes of 0th row and 0th column with zeroes as shown-
Step-02:
Start filling the table row-wise, from top to bottom.
Here, T(i, j) = the maximum value of the selected items if we can take items 1 to i and have a weight restriction of j.
Step-03:
● To identify the items that must be put into the knapsack to obtain the maximum profit, consider the last column of the table.
● Start scanning the entries from bottom to top.
● On encountering an entry whose value is not the same as the value stored in the entry immediately above it, mark the row label of that entry.
● After all the entries are scanned, the marked labels represent the items that must be put into the knapsack.
Problem:
For the given set of items and knapsack capacity = 5 kg, find the optimal solution for the 0/1 knapsack problem making use of a dynamic programming approach.
Solution-
Given n = 4 items and knapsack capacity w = 5 kg.
Step-01:
Draw a table say 'T' with (n+1) = 4 + 1 = 5 rows and (w+1) = 5 + 1 = 6 columns. Fill all the boxes of the 0th row and 0th column with 0.
Time Complexity-
● Each entry of the table requires constant time θ(1) for its computation.
● It takes θ(nw) time to fill (n+1)(w+1) table entries.
● It takes θ(n) time for tracing the solution since tracing process traces the n rows.
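The table-filling and traceback described above can be sketched in Python; since the item table of the worked problem did not survive extraction, the values and weights below are hypothetical:

```python
# 0/1 knapsack DP: T has (n+1) rows and (w+1) columns, with row 0 and
# column 0 filled with zeroes, as in Step-01.

def knapsack_01(values, weights, w):
    n = len(values)
    t = [[0] * (w + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, w + 1):
            t[i][j] = t[i - 1][j]                       # skip item i
            if weights[i - 1] <= j:                     # or take it
                t[i][j] = max(t[i][j],
                              t[i - 1][j - weights[i - 1]] + values[i - 1])
    # Trace back: scan upwards, as in Step-03; a change in value means
    # the item of that row was taken.
    chosen, j = [], w
    for i in range(n, 0, -1):
        if t[i][j] != t[i - 1][j]:
            chosen.append(i)            # item i is in the knapsack
            j -= weights[i - 1]
    return t[n][w], sorted(chosen)

# Hypothetical instance: capacity 5 kg, four items
print(knapsack_01([3, 4, 5, 6], [2, 3, 4, 5], 5))  # (7, [1, 2])
```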
Conclusion: In this way we have explored the concept of 0/1 knapsack using the dynamic programming approach.
Assignment No: 5
Title: Write a program for analysis of quick sort by using deterministic and randomized variant.
Objective: To analyze the time and space complexity of quick sort using deterministic and randomized variants.
Theory:
What is a Randomized Algorithm?
• An algorithm that uses random numbers to decide what to do next anywhere in its logic is called a randomized algorithm.
• For example, in randomized quick sort, we use a random number to pick the next pivot (or we randomly shuffle the array).
• Typically, this randomness is used to reduce the time complexity or space complexity of other standard algorithms.
• A randomized algorithm for a problem is usually simpler and more efficient than its deterministic counterpart.
• The output or the running time are functions of the input and the random bits chosen.
Types of Randomized Algorithms
Randomized algorithms are broadly classified as Las Vegas algorithms (always correct, with random running time, like randomized quick sort) and Monte Carlo algorithms (fixed running time, with output that may be incorrect with some small probability).
The running time of quicksort depends mostly on the number of comparisons performed in all calls to the Randomized-Partition routine. Let X denote the random variable counting the number of comparisons in all calls to Randomized-Partition.
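A compact sketch contrasting the deterministic and randomized pivot choices:

```python
import random

# Deterministic vs randomized quicksort: the only difference is how the
# pivot is chosen. Randomizing the pivot makes the O(n^2) worst case
# unlikely on any fixed input, such as an already-sorted array.

def quicksort(a, randomized=True):
    if len(a) <= 1:
        return a
    pivot = random.choice(a) if randomized else a[0]
    left = [x for x in a if x < pivot]
    mid = [x for x in a if x == pivot]
    right = [x for x in a if x > pivot]
    return quicksort(left, randomized) + mid + quicksort(right, randomized)

data = [7, 2, 9, 1, 5, 5, 3]
print(quicksort(data))          # [1, 2, 3, 5, 5, 7, 9]
print(quicksort(data, False))   # same result, deterministic first-element pivot
```

Timing both variants on a sorted input shows the deterministic first-element pivot degrading to quadratic behaviour while the randomized variant stays near O(n log n) on average.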
Assignment No: 6
Mini Project
Mini Project - Write a program to implement matrix multiplication. Also implement multithreaded matrix multiplication with either one thread per row or one thread per cell. Analyze and compare their performance.
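A minimal sketch of the one-thread-per-row variant using the standard threading module (note that CPython's GIL limits true parallel speed-up for pure-Python arithmetic, so performance comparisons are best done with NumPy or multiprocessing):

```python
import threading

# One thread per row: each worker computes one row of C = A x B.

def row_worker(a, b, c, i):
    cols, inner = len(b[0]), len(b)
    for j in range(cols):
        c[i][j] = sum(a[i][k] * b[k][j] for k in range(inner))

def threaded_matmul(a, b):
    c = [[0] * len(b[0]) for _ in range(len(a))]
    threads = [threading.Thread(target=row_worker, args=(a, b, c, i))
               for i in range(len(a))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return c

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(threaded_matmul(a, b))  # [[19, 22], [43, 50]]
```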
Mini Project - Implement merge sort and multithreaded merge sort. Compare the time required by both algorithms. Also analyze the performance of each algorithm for the best case and the worst case.
Mini Project - Different exact and approximation algorithms for Travelling-Sales-Person Problem
Theory:
1. Introduction to problems
2. Approach used to solve problem (Introduction with example)
3. Algorithm/ Pseudo code of problems
4. Complexity Analyze for all cases
5. Implementation of project with output
Group B
Assignment no:1
Title of the Assignment: Predict the price of the Uber ride from a given pickup point to the agreed drop-off location. Perform the following tasks:
1. Pre-process the dataset.
2. Implement linear regression and random forest regression models.
3. Evaluate the models and compare their respective scores like R2, RMSE, etc.
Dataset Description: The project is about the world's largest taxi company, Uber Inc. In this project, we're looking to predict the fare for their future transactional cases. Uber delivers service to lakhs of customers daily, so it becomes really important to manage their data properly to come up with new business ideas and get the best results. Eventually, it becomes really important to estimate fare prices accurately.
Prerequisite:
1. Basic knowledge of Python
2. Concept of preprocessing data
3. Basic knowledge of Data Science and Big Data Analytics.
1. Data Preprocessing
2. Linear regression
3. Random forest regression models
4. Box Plot
5. Outliers
6. Haversine
7. Matplotlib
8. Mean Squared Error
Data Preprocessing:
Data preprocessing is a process of preparing the raw data and making it suitable for a machine learning model. It is the first and crucial step while creating a machine learning model.
When creating a machine learning project, we do not always come across clean and formatted data. And while doing any operation with data, it is mandatory to clean it and put it in a formatted way. For this, we use the data preprocessing task.
Real-world data generally contains noise and missing values, and may be in an unusable format which cannot be directly used for machine learning models. Data preprocessing is a required task for cleaning the data and making it suitable for a machine learning model, which also increases accuracy.
Linear Regression:
Linear regression is one of the easiest and most popular machine learning algorithms. It is a statistical method that is used for predictive analysis. Linear regression makes predictions for continuous/real or numeric variables such as sales, salary, age, product price, etc.
The linear regression algorithm shows a linear relationship between a dependent variable (y) and one or more independent variables (x), hence the name linear regression. Since linear regression shows a linear relationship, it finds how the value of the dependent variable changes according to the value of the independent variable.
The linear regression model provides a sloped straight line representing the relationship between the variables.
Random Forest Regression:
Random forest is an ensemble technique that builds many decision trees, takes the prediction from each tree and, based on the majority votes of predictions, predicts the final output. A greater number of trees in the forest leads to higher accuracy and prevents the problem of overfitting.
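The "sloped straight line" of linear regression can be illustrated with a pure-Python least-squares fit. This is a minimal sketch only; the assignment itself uses scikit-learn's LinearRegression and RandomForestRegressor on the real dataset:

```python
# Least-squares fit of a line y = a + b*x, illustrating the sloped
# straight-line relationship described above.

def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Perfectly linear toy data: y = 1 + 2x
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 1.0 2.0
```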
Outlier:
The major thing about outliers is what you do with them. Whenever you analyze a data set, you will always have some assumptions based on how the data is generated. If you find some data points that are likely to contain some form of error, then these are definitely outliers, and depending on the context, you want to overcome those errors. The data mining process involves the analysis and prediction of the data. In 1969, Grubbs introduced the first definition of outliers.
Global Outliers
Global outliers are also called point outliers. They are the simplest form of outliers. When a data point deviates from all the rest of the data points in a given data set, it is known as a global outlier. In most cases, outlier detection procedures are targeted at determining global outliers.
Collective Outliers
In a given set of data, when a group of data points deviates from the rest of the data set, they are called collective outliers. Here, the particular data objects may not be outliers individually, but when you consider the data objects as a whole, they may behave as outliers. To identify the different types of outliers, you need background information about the relationship between the behavior of outliers shown by different data objects. For example, in an Intrusion Detection System, a DOS packet from one system to another is taken as normal behavior. However, if this happens with various computers simultaneously, it is considered abnormal behavior, and as a whole the packets are called collective outliers.
Contextual Outliers
As the name suggests, a "contextual" outlier is an outlier within a context; for example, in speech recognition, a single background noise. Contextual outliers are also known as conditional outliers. These outliers occur when a data object deviates from the other data points because of a specific condition in a given data set. As we know, there are two types of attributes of data objects: contextual attributes and behavioral attributes. Contextual outlier analysis enables users to examine outliers in different contexts and conditions, which can be useful in various applications. For example, a temperature reading of 45 degrees Celsius may behave as an outlier in a rainy season but like a normal data point in the context of a summer season; a low-temperature value in June is a contextual outlier, since the same value in December is not.
Haversine:
The Haversine formula calculates the shortest distance between two points on a sphere using
their latitudes and longitudes measured along the surface. It is important for use in
navigation.
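The formula can be sketched in a few lines of Python, taking coordinates in degrees and returning kilometres:

```python
import math

# Haversine distance: great-circle distance between two
# (latitude, longitude) points given in degrees.

def haversine(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# One degree of longitude along the equator is roughly 111 km
print(round(haversine(0, 0, 0, 1), 1))  # 111.2
```

In the Uber assignment this function gives the trip distance between the pickup and drop-off coordinates, which is a strong predictor of the fare.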
Matplotlib:
One of the greatest benefits of visualization is that it allows us visual access to huge amounts
of data in easily digestible visuals. Matplotlib consists of several plots like line, bar, scatter,
histogram etc.
Mean Squared Error:
Mean squared error (MSE) measures the average of the error squares, i.e., the average squared difference between the estimated values and the true values. It is a risk function, corresponding to the expected value of the squared error loss. It is always non-negative, and values close to zero are better.
Conclusion:
In this way we have explored the concept of correlation and implemented linear regression and random forest regression models.
Assignment no:2
Dataset Description: The csv file contains 5172 rows, one row for each email. There are 3002 columns. The first column indicates the email name; the name has been set with numbers and not recipients' names to protect privacy. The last column has the labels for prediction: 1 for spam, 0 for not spam. The remaining 3000 columns are the 3000 most common words in all the emails, after excluding the non-alphabetical characters/words. For each row, the count of each word (column) in that email (row) is stored in the respective cell. Thus, information regarding all 5172 emails is stored in a compact dataframe rather than as separate text files.
Link: https://www.kaggle.com/datasets/balaka18/email-spam-classification-dataset-csv
Prerequisite:
1. Basic knowledge of Python
2. Concept of K-Nearest Neighbors and Support Vector Machine for classification.
1. Data Preprocessing
2. Binary Classification
3. K-Nearest Neighbours
4. Support Vector Machine
5. Train, Test and Split Procedure
Data Preprocessing:
● Importing libraries
● Importing datasets
● Feature scaling
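A from-scratch sketch of the k-NN idea on tiny, made-up word-count vectors. The real assignment applies scikit-learn's KNeighborsClassifier and SVC to the 3000-column emails.csv features; the toy data here is purely illustrative:

```python
import math
from collections import Counter

# Minimal k-NN: classify a query as the majority label among its
# k nearest training points (hypothetical word-count features; 1 = spam).

def knn_predict(train_x, train_y, query, k=3):
    dists = sorted(
        (math.dist(row, query), label)
        for row, label in zip(train_x, train_y))
    top = [label for _, label in dists[:k]]
    return Counter(top).most_common(1)[0][0]   # majority vote

# Toy features: counts of two "spammy" words per email
train_x = [[8, 9], [7, 8], [9, 7], [0, 1], [1, 0], [1, 1]]
train_y = [1, 1, 1, 0, 0, 0]
print(knn_predict(train_x, train_y, [8, 8]))  # 1 (spam-like counts)
print(knn_predict(train_x, train_y, [0, 0]))  # 0 (normal)
```

The SVM half of the assignment follows the same train/predict pipeline, swapping the classifier for scikit-learn's SVC.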
Assignment no:3
Prerequisite:
1. Basic knowledge of Python
2. Concept of Confusion Matrix
There are around 100 billion neurons in the human brain. Each neuron has an association point somewhere in the range of 1,000 to 100,000. In the human brain, data is stored in a distributed manner, and we can extract more than one piece of this data when necessary from our memory in parallel. We can say that the human brain is made up of incredibly amazing parallel processors.
Confusion Matrix:
● It not only tells the errors made by the classifiers but also the type of errors, i.e., whether an error is type-I or type-II.
Assignment no:4
Link for Dataset: Diabetes predication system with KNN algorithm | Kaggle
Prerequisite:
1. Basic knowledge of Python
2. Concept of Confusion Matrix
3. Concept of roc_auc curve.
4. Concept of Random Forest and KNN algorithms
For our k-NN model, the first step is to read in the data we will use as input. For this example,
we are using the diabetes dataset. To start, we will use Pandas to read in the data. I will not go
into detail on Pandas, but it is a library you should become familiar with if you’re looking to
dive further into data science and machine learning.
Split the dataset into train and test data
Now we will split the dataset into training data and testing data. The training data is the data that the model will learn from. The testing data is the data we will use to see how well the model performs on unseen data. The scikit-learn library provides train_test_split, which makes it easy for us to split our dataset into training and testing data.
from sklearn.model_selection import train_test_split

# split dataset into train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1, stratify=y)
The first two arguments are the input and target data we split up earlier. Next, we set 'test_size' to 0.2. This means that 20% of all the data will be used for testing, which leaves 80% of the data as training data for the model to learn from. Setting 'random_state' to 1 ensures that we get the same split each time, so the results are reproducible. Setting 'stratify' to y makes the split preserve the proportion of each value in the y variable. For example, in our dataset, if 25% of patients have diabetes and 75% don't, stratifying on y will ensure that the random split also has 25% of patients with diabetes and 75% without. Finally, k-NN labels a new data point by a majority vote among its 'n_neighbors' nearest training points, so the choice of 'n_neighbors' strongly affects whether a borderline patient is labeled as 'diabetes' or 'no diabetes'.
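Once predictions are available, the metrics the assignment asks for can be computed directly from the counts in the confusion matrix; the labels below are made up for illustration:

```python
# Confusion matrix, accuracy, error rate, precision and recall from
# true/predicted binary labels (illustrative labels, not diabetes.csv output).

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    return {
        "confusion_matrix": [[tn, fp], [fn, tp]],
        "accuracy": accuracy,
        "error_rate": 1 - accuracy,
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
    }

m = binary_metrics([1, 0, 1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 0, 1, 1, 0])
print(m["accuracy"], m["precision"], m["recall"])  # 0.75 0.75 0.75
```

Here a false positive is a type-I error and a false negative is a type-II error, matching the confusion-matrix discussion in the previous assignment.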
Conclusion: In this way the K-Nearest Neighbors algorithm is implemented on the diabetes dataset and evaluated using the confusion matrix, accuracy, error rate, precision and recall.
Assignment no:5
Prerequisite:
1. Knowledge of Python
2. Unsupervised learning
3. Clustering
4. Elbow method
Clustering algorithms try to find natural clusters in data, and the various aspects of how the algorithms cluster data can be tuned and modified. Clustering is based on the principle that items within the same cluster must be similar to each other. The data is grouped in such a way that related elements are close to each other, and diverse, different types of data are subdivided into smaller groups.
Uses of Clustering
Marketing:
In the field of marketing, clustering can be used to identify various customer groups
with existing customer data. Based on that, customers can be provided with discounts,
offers, promo codes etc.
Document Analysis:
Often, we need to group together various research texts and documents according to similarity. In such cases, we don't have any labels, and manually labelling large amounts of data is not possible. Using clustering, the algorithm can process the text and group it into different themes.
K-Means Clustering
K-Means clustering is an unsupervised machine learning algorithm that divides the given data into the given number of clusters. Here, "K" is the number of predefined clusters that need to be created. The algorithm takes raw unlabelled data as input and divides the dataset into clusters, and the process is repeated until the best clusters are found.
# Importing the necessary libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
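A pure-Python sketch of k-means with the within-cluster sum of squares (WCSS) used by the elbow method. The assignment itself applies sklearn.cluster.KMeans to the sales data; the points here are toy values:

```python
import random

# Tiny 2-D k-means; WCSS is the quantity plotted against k in the
# elbow method: it drops sharply up to the "right" k, then flattens.

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center
            i = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2
                                            + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # move each center to the mean of its cluster (keep stale if empty)
        centers = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)]
    wcss = sum((p[0] - centers[i][0]) ** 2 + (p[1] - centers[i][1]) ** 2
               for i, cl in enumerate(clusters) for p in cl)
    return centers, wcss

# Two well-separated blobs: WCSS drops sharply from k=1 to k=2 (the elbow)
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
for k in (1, 2, 3):
    print(k, round(kmeans(pts, k)[1], 2))
```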
Bharati Vidyapeeth’s College Of Engineering Lavale Pune.
Conclusion: We can clearly see that 5 different clusters have been formed from the data. The red cluster is the customers with the least income and least spending score; similarly, the blue cluster is the customers with the most income and most spending score.
Group C
Assignment no:1
Title of the Assignment: Installation of MetaMask and study spending Ether per
transaction
Prerequisite:
Introduction to Blockchain
● Blockchain can be described as a data structure that holds transactional records while ensuring security, transparency, and decentralization. You can also think of it as a chain of records stored in the form of blocks which are controlled by no single authority.
● A blockchain is a distributed ledger that is completely open to anyone and everyone on the network. Once information is stored on a blockchain, it is extremely difficult to change or alter it.
● Each transaction on a blockchain is secured with a digital
signature that proves its authenticity. Due to the use of
encryption and digital signatures, the data stored on the
blockchain is tamper-proof and cannot be changed.
● Blockchain technology allows all the network participants to reach an agreement, commonly known as consensus. All the data stored on a blockchain is recorded digitally and has a common history which is available to all the network participants. This way, the chance of any fraudulent activity or duplication of transactions is eliminated without the need for a third party.
Blockchain Features
The following features make the revolutionary technology of blockchain
stand out:
● Decentralized
Blockchains are decentralized in nature meaning that no single
person or group holds the authority of the overall network.
While everybody in the network has the copy of the distributed
ledger with them, no one can modify it on his or her own. This
unique feature of blockchain allows transparency and security
while giving power to the users.
● Immutable
The immutability property of a blockchain refers to the fact that any data
once written on the blockchain cannot be changed. To understand
immutability, consider sending email as an example. Once you send an email
to a bunch of people, you cannot take it back. In order to find a way around,
you’ll have to ask all the recipients to delete your email which is pretty
tedious. This is how immutability works.
● Tamper-Proof
With the property of immutability embedded in blockchains, it becomes easier to detect tampering of any data. Blockchains are considered tamper-proof as any change in even one single block can be detected and addressed smoothly. There are two key ways of detecting tampering: hashes and blocks.
● Click on the extension icon in the upper right corner to open MetaMask.
● To install the latest version and be up to date, click Try it now.
● Click Continue.
● You will be prompted to create a new password. Click Create.
Step 3. Depositing funds.
user’s computer.
● Integrated - Dapps are designed to work with MetaMask, so it
becomes much easier to sendEther in and out.
Assignment no:2
Title of the Assignment: Create your own wallet using Metamask for crypto
transactions
Prerequisite:
1. Basic knowledge of cryptocurrency
2. Basic knowledge of distributed computing concept
3. Working of blockchain
---------------------------------------------------------------------------------------------------------------
Introduction to Cryptocurrency
● Cryptocurrency is a digital payment system that doesn't rely on banks to verify transactions. It's a peer-to-peer system that can enable anyone anywhere to send and receive payments. Instead of being physical money carried around and exchanged in the real world, cryptocurrency payments exist purely as digital entries to an online database describing specific transactions. When you transfer cryptocurrency funds, the transactions are recorded in a public ledger. Cryptocurrency is stored in digital wallets.
● Cryptocurrency received its name because it uses encryption to verify transactions. This means advanced coding is involved in storing and transmitting cryptocurrency data between wallets and to public ledgers. The aim of encryption is to provide security and safety.
● The first cryptocurrency was Bitcoin, which was founded in 2009 and remains the best known today. Much of the interest in cryptocurrencies is to trade for profit.
Cryptocurrency examples
● Bitcoin:
Founded in 2009, Bitcoin was the first cryptocurrency and is still the
most commonly traded. The currency was developed by Satoshi
Nakamoto – widely believed to be a pseudonym for an individual or
group of people whose precise identity remains unknown.
● Ethereum:
Developed in 2015, Ethereum is a blockchain platform with its own cryptocurrency, called Ether (ETH). It is the most popular cryptocurrency after Bitcoin.
● Litecoin:
This currency is most similar to bitcoin but has moved more quickly
to develop new innovations, including faster payments and processes
to allow more transactions.
● Ripple:
Ripple is a distributed ledger system that was founded in 2012. Ripple
can be used to track different kinds of transactions, not just
cryptocurrency. The company behind it has worked with various
banks and financial institutions.
● Non-Bitcoin cryptocurrencies are collectively known as “altcoins” to
distinguish them from the original.
Assignment no:3
Title of the Assignment: Write a smart contract on a test network, for the bank account of a customer, for the following operations:
Deposit money
Withdraw Money
Show balance
Prerequisite:
1. Basic knowledge of cryptocurrency
2. Basic knowledge of distributed computing concept
3. Working of blockchain.
---------------------------------------------------------------------------------------------------------------
pragma solidity ^0.4.19;

contract TipJar {
    address owner;  // state variable holding the contract owner

    function TipJar() public {
        owner = msg.sender;
    }

    function withdraw() public {
        require(owner == msg.sender);
        msg.sender.transfer(address(this).balance);
    }
}
If a contract transferred ether before updating its records, a malicious recipient's
fallback function could trigger another withdrawal from the banking contract. When the
banking contract handles this second withdrawal request, it would have already
transferred ether for the original withdrawal, but it would not have an updated balance,
so it would allow this second withdrawal!
To avoid this sort of reentrancy bug, follow the “Checks-Effects-Interactions” pattern as
described in the Solidity documentation. The withdraw() function above is an example of
implementing this pattern.
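Building on the pattern above, a minimal sketch of the assignment's bank-account contract might look like the following. The contract and function names are illustrative, not prescribed by the assignment; the withdraw() function annotates each step of the Checks-Effects-Interactions pattern.

```solidity
pragma solidity ^0.4.19;

// Illustrative sketch of a bank account contract for one customer.
contract BankAccount {
    address owner;
    uint balance;

    function BankAccount() public {
        owner = msg.sender;
    }

    // Deposit money: the ether sent with the call is added to the balance.
    function deposit() public payable {
        require(msg.sender == owner);
        balance += msg.value;
    }

    // Withdraw money, following Checks-Effects-Interactions:
    function withdraw(uint amount) public {
        require(msg.sender == owner);   // check: only the owner may withdraw
        require(amount <= balance);     // check: sufficient funds
        balance -= amount;              // effect: update state first
        msg.sender.transfer(amount);    // interaction: transfer ether last
    }

    // Show balance.
    function showBalance() public view returns (uint) {
        return balance;
    }
}
```

Because the balance is reduced before the transfer, a reentrant call into withdraw() would fail the balance check rather than draining the contract.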
Assignment no:4
Title of the Assignment: Write a survey report on the types of blockchains and their
real-time use cases.
Prerequisite:
1. Basic knowledge of cryptocurrency
2. Basic knowledge of distributed computing concept
3. Working of blockchain
---------------------------------------------------------------------------------------------------------------
Contents for Theory:
There are four types of blockchain: Public Blockchain, Private Blockchain,
Hybrid Blockchain, and Consortium Blockchain.
---------------------------------------------------------------------------------------------------------------
1. Public Blockchain
These blockchains are completely open to following the idea of decentralization. They
don’t have any restrictions, anyone having a computer and internet can participate in
the network.
As the name is public this blockchain is open to the public, which means it is
not owned by anyone. Anyone having internet and a computer with good
hardware can participate in this public blockchain. All the computer in the
network hold the copy of other nodes or block present in the network
In this public blockchain, we can also perform verification of
transactions or records Advantages:
Trustable: There are algorithms to detect fraud, so participants need not worry about
the other nodes in the network.
Secure: This blockchain is large, as it is open to the public, and a larger network
gives a greater distribution of records.
Anonymous Nature: It is a secure platform for making transactions, and at the same
time you are not required to reveal your name or identity in order to participate.
Decentralized: There is no single platform that maintains the network; instead, every
user has a copy of the ledger.
Disadvantages:
Processing: The rate of transaction processing is very slow due to the network's large
size, and verification of each node is very time-consuming.
Energy Consumption: Proof of work consumes a great deal of energy, and good
computer hardware is required to participate in the network.
Acceptance: Since there is no central authority, governments face issues in adopting
the technology quickly.
Use Cases: Public blockchains, secured with proof of work or proof of stake, can be
used to displace traditional financial systems. The more advanced side of this
blockchain is the smart contract, which enables it to support decentralized
applications. Examples of public blockchains are Bitcoin and Ethereum.
2. Private Blockchain
These blockchains are not as decentralized as public blockchains; only selected
nodes can participate in the process, which makes them more secure than the others.
Advantages:
Speed: The rate of transactions is high due to the network's small size, and
verification of each node is less time-consuming.
Scalability: The scalability can be modified, and the size of the network can be
decided manually.
Privacy: It provides an increased level of privacy, for the confidentiality reasons
that businesses require.
Balanced: It is more balanced, as only some users have access to the transactions,
which improves the performance of the network.
Disadvantages:
Security: With fewer nodes, it is easier for a bad actor to gain control of the
network.
Centralized: A central authority manages the network, so users must still trust that
authority.
3. Hybrid Blockchain
It is a mixture of private and public blockchains, where some parts are controlled
by an organization and other parts are made visible as a public blockchain.
Advantages:
Ecosystem: The most advantageous thing about this blockchain is its hybrid nature;
it cannot be hacked, as 51% of users don't have access to the network.
Cost: Transactions are cheap, as only a few nodes verify each transaction; since not
all nodes carry out verification, the computational cost is lower.
Architecture: It is highly customizable while still maintaining integrity, security,
and transparency.
Operations: It can choose the participants in the blockchain and decide which
transactions can be made public.
Disadvantages:
Efficiency: Not everyone is in a position to implement a hybrid blockchain, and the
organization also faces difficulties in maintenance.
4. Consortium Blockchain
Also known as a Federated Blockchain, this type is managed by a group of
organizations rather than a single one.
Advantages:
Speed: A limited number of users makes verification fast, and the high speed makes
it more usable for organizations.
Authority: Multiple organizations can take part, making it decentralized at every
level; this decentralized authority makes it more secure.
Privacy: The information in the checked blocks is hidden from public view, but any
member belonging to the blockchain can access it.
Flexible: There is much divergence in the flexibility of the blockchain; since it is
not very large, decisions can be taken faster.
Disadvantages:
Approval: All the members must approve the protocol, making it less flexible; since
one or more organizations are involved, there can be differences in their visions of
interest.
Transparency: It can be hacked if an organization becomes corrupt, and organizations
may hide information from the users.
Vulnerability: If a few nodes are compromised, there is a greater chance of
vulnerability in this blockchain.
Use Cases: It has high potential for businesses, banks, and other payment processors.
Food-tracking organizations frequently collaborate with others in their sector,
making a federated solution ideal for their use. Examples of consortium blockchains
are Tendermint and Multichain.
Conclusion: In this way we have explored the types of blockchain and their
applications in real time.
Assignment no:5
Prerequisite:
1. Basic knowledge of cryptocurrency
2. Basic knowledge of distributed computing concept
3. Working of blockchain
---------------------------------------------------------------------------------------------------------------
Hyperledger Composer is an extensive, open development toolset and framework to make developing
blockchain applications easier. Its primary goals are to accelerate time to value and to make it easier to
integrate your blockchain applications with existing systems. You can use Composer to rapidly develop use
cases and deploy a blockchain solution in days. Composer allows you to model your business network and
integrate existing systems and data with your blockchain applications.
Hyperledger Composer supports the existing Hyperledger Fabric blockchain infrastructure and runtime.
Hyperledger Composer generates a business network archive (.bna) file, which you can deploy on an existing
Hyperledger Fabric network. You can use Hyperledger Composer to model your business network, containing
your existing assets and the transactions related to them.
Key Concepts of Hyperledger Composer
1. Blockchain State Storage: All transactions that happen in your Hyperledger
Composer network are stored on the blockchain.
2. Participants: Participants are the members of a business network. They can
own assets and submit transactions. A participant must have an identifier and
can have any other properties as required.
3. Identities and ID cards: Participants can be associated with an identity. ID cards are
a combination of an identity profile and connection metadata, and they simplify
connecting to a business network.
4. Assets and Transactions: Assets are tangible or intangible goods, services, or
property stored in registries, and transactions are the mechanism by which
participants interact with assets. Transaction processing logic is defined in
JavaScript, and you can also emit events from transactions.
5. Queries: Queries are used to return data about the blockchain world-state. Queries
are defined within a business network, and can include variable parameters for
simple customisation. By using queries, data can be easily extracted from your
blockchain network. Queries are sent by using the Hyperledger Composer API.
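As an illustrative sketch of the query language (the namespace org.example.banking and the Account asset are assumptions for this example, not part of this manual), a definition in a business network's queries.qry file might look like:

```
query selectAllAccounts {
  description: "Return all Account assets from the registry"
  statement:
      SELECT org.example.banking.Account
}
```

An application would then run this named query through the Composer API to retrieve the matching assets from the world-state.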
6. Events: Events are defined in the model file. Once events have been defined, they
can be emitted by transaction processor functions, and applications can subscribe
to them.
7. Access Control: Access control is a key feature of any business blockchain.
Using access control rules you can define who can do what in business networks.
The access control language is rich enough to capture sophisticated conditions.
8. Historian registry: The historian is a specialised registry which records successful
transactions, including the participants and identities that submitted them. The
historian stores transactions as HistorianRecord assets, which are defined in the
Hyperledger Composer system namespace.
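To illustrate the modelling concepts above, here is a minimal sketch of a Composer model (.cto) file for a simple banking network. All names (namespace, participant, asset, and transaction) are assumptions chosen for illustration, not taken from this manual.

```
namespace org.example.banking

participant Customer identified by customerId {
  o String customerId
  o String name
}

asset Account identified by accountId {
  o String accountId
  o Double balance
  --> Customer owner
}

transaction Deposit {
  --> Account account
  o Double amount
}

event DepositMade {
  --> Account account
  o Double amount
}
```

The processing logic for the Deposit transaction would be written as a JavaScript transaction processor function, which updates the Account asset's balance and emits the DepositMade event.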
Conclusion: In this way we have learnt about Hyperledger Composer and its use cases
in the business world.