Data Science Report
ON
“DATA SCIENCE”
Completed at
Teachnook
Duration
1 April to 31 June 2023
3rd Year (5th Sem)
Submitted By:
Krishna Soni
21EEBAD019
Table of Contents
Certificate
Student Declaration
1. Introduction
1.1) Data Science
1.2) Data Science Process
2. My Learnings
2.1) Introduction to Data Science
2.2) Python for Data Science
2.3) Understanding Statistics for Data Science
2.4) Predictive Modelling: Basics of Machine Learning
6. Predictive Modelling
6.1) Types
6.2) Stages of Predictive Modelling
6.3) Problem Definition
6.4) Hypothesis Generation
6.5) Data Extraction and Collection
6.6) Data Exploration and Transformation
6.6.1) Variable Treatment
6.6.2) Univariate Analysis
6.6.3) Bivariate Analysis
6.6.4) Missing Value Treatment
6.7) Types of Outliers
6.7.1) Univariate
6.7.2) Bivariate
7. Model Building
7.1) Algorithms
7.2) Algorithms of Machine Learning
8. Methodology
9. Results
10. References
INTRODUCTION
OBJECTIVES
To explore, sort, and analyse large volumes of data from various sources, take
advantage of them, and reach conclusions that optimize business processes and
support decision-making. Examples include predictive machine maintenance, and
sales forecasting based on weather data in marketing and sales.
3. The third step is data preparation: you cleanse the data, combine data from
different data sources, and transform it. If you have successfully completed this
step, you can progress to data visualization and modelling.
4. The fourth step is data exploration. The goal of this step is to gain a deep
understanding of the data. You’ll look for patterns, correlations, and deviations
based on visual and descriptive techniques. The insights you gain from this
phase will enable you to start modelling.
5. Finally, we get to the most interesting part: model building (often referred to as
“data modelling”). It is now that you attempt to gain the
insights or make the predictions stated in your project charter. Now is the time
to bring out the heavy guns, but remember research has taught us that often
(but not always) a combination of simple models tends to outperform one
complicated model. If you’ve done this phase right, you’re almost done.
6. The last step of the data science model is presenting your results and
automating the analysis, if needed. One goal of a project is to change a process
and/or make better decisions. You may still need to convince the business that
your findings will indeed change the business process as expected. This is
where you can shine in your influencer role. The importance of this step is
more apparent in projects on a strategic and tactical level. Certain projects
require you to perform the business process over and over again, so automating
the project will save time.
MY LEARNINGS
2.1) INTRODUCTION TO DATA SCIENCE
• Overview & Terminologies in Data Science
Applications of Data Science
➢ Anomaly detection (fraud, disease, etc.)
➢ Automation and decision-making (credit worthiness, etc.)
➢ Classifications (classifying emails as “important” or “junk”)
➢ Forecasting (sales, revenue, etc.)
➢ Pattern detection (weather patterns, financial market patterns, etc.)
➢ Recognition (facial, voice, text, etc.)
➢ Recommendations (based on learned preferences, recommendation
engines can refer you to movies, restaurants and books you may like)
Introduction to Data Science
3.1) Data Science
The field of deriving insights from data using scientific techniques is called data science.
3.2) Applications:
Analytics questions, in order of increasing complexity:
• What happened? – Reporting
• Why did it happen? – Detective Analysis
• What’s happening now? – Dashboards
• What is likely to happen? – Predictive Analysis
Reporting / Management Information System
Detective Analysis
Asking questions based on the data we are seeing, e.g., why did something happen?
Predictive Modelling
Big Data
The stage where the complexity of handling data gets beyond traditional systems.
It can be caused by the volume, variety, or velocity of data; specific tools are
required to analyse data at such a scale.
• Social Media
1. Recommendation Engine
2. Ad placement
3. Sentiment Analysis
• Deciding the right credit limit for credit card customers.
• Suggesting the right products on e-commerce platforms
1. Recommendation System
2. Past Data Searched
3. Discount Price Optimization
• How do Google and other search engines know which results are most relevant to our
search query?
1. Apply ML and Data Science
2. Fraud Detection
3. Ad placement
3.5) Reasons for choosing data science
Data Science has become a revolutionary technology that everyone seems to talk about.
Hailed as the ‘sexiest job of the 21st century’, Data Science is a buzzword, with very few
people knowing the technology in its true sense. While many people wish to become
Data Scientists, it is essential to weigh the pros and cons of data science and get a real
picture. In this section, we discuss these points in detail.
Advantages: -
1. It’s in Demand
2. Abundance of Positions
3. A Highly Paid Career
4. Data Science is Versatile
Disadvantages: -
1. Mastering Data Science is nearly impossible
2. A large Amount of Domain Knowledge Required
3. Arbitrary Data May Yield Unexpected Results
4. The problem of Data Privacy
Python Introduction
PYTHON
Python is an interpreted, high-level, general-purpose programming language. It has efficient high-
level data structures and a simple but effective approach to object-oriented programming. Python’s
elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language
for scripting and rapid application development in many areas on most platforms.
Easy-to-maintain:
Python's source code is fairly easy-to-maintain.
Interactive Mode:
Python has support for an interactive mode which allows interactive testing
and debugging of snippets of code.
Portable:
Python can run on a wide variety of hardware platforms and has the same
interface on all platforms.
Extendable:
You can add low-level modules to the Python interpreter. These modules
enable programmers to add to or customize their tools to be more efficient.
Databases:
Python provides interfaces to all major commercial databases.
GUI Programming:
Python supports GUI applications that can be created and ported to many system calls,
libraries and windows systems, such as Windows MFC, Macintosh, and the X
Window system of Unix.
Scalable:
Python provides better structure and support for large programs than shell scripting.
Python has a big list of good features:
o It provides very high-level dynamic data types and supports dynamic type
checking.
o It can be easily integrated with C, C++, COM, ActiveX, CORBA, and Java.
Data Science is applied to gather multiple data sets, extract information, project
insights, and interpret them to make effective business decisions. However, being a
data scientist requires you to learn some of the best and most highly used programming
languages, such as Java, C++, R, and Python. Among these, Python is considered the
preferred choice of data scientists across the globe.
LISTS: A list is an ordered data structure with elements separated by comma and enclosed
within square brackets.
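A short illustration of list syntax and a few common operations (the marks below are made up):

```python
# A list is an ordered collection; elements are comma-separated
# and enclosed in square brackets.
marks = [97, 64, 82, 97, 75]

print(marks[0])      # first element
print(marks[-1])     # last element
marks.append(88)     # add an element at the end
print(len(marks))    # number of elements
```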
Statistics:
5.1) Descriptive Statistics
Mode
It is the number that occurs most frequently in the data series. It is robust and is
generally not affected much by the addition of a couple of new values.
Code:
import pandas as pd
data = pd.read_csv("Mode.csv")          # reads data from the csv file
data.head()                              # shows the first five rows
mode_data = data['Subject'].mode()       # mode of the Subject column
print(mode_data)
Mean
import pandas as pd
data = pd.read_csv("mean.csv")          # reads data from the csv file
data.head()                              # shows the first five rows
mean_data = data['Overallmarks'].mean()  # mean of the Overallmarks column
print(mean_data)
Median
The absolute central value of the data set.
import pandas as pd
data = pd.read_csv("data.csv")          # reads data from the csv file
data.head()                              # shows the first five rows
median_data = data['Overallmarks'].median()  # median of the Overallmarks column
print(median_data)
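These three statistics can also be computed without a CSV file; a minimal sketch using Python's built-in statistics module on a made-up list of marks:

```python
import statistics

marks = [72, 85, 85, 90, 64]

print(statistics.mode(marks))    # 85, the most frequent value
print(statistics.mean(marks))    # 79.2
print(statistics.median(marks))  # 85
```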
5.2) Types of variables
• Continuous – takes continuous numeric values, e.g., marks
• Categorical – has discrete values, e.g., gender
• Ordinal – ordered categorical variable, e.g., teacher feedback
• Nominal – unordered categorical variable, e.g., gender
5.3) Outliers
Any value that falls well outside the range of the rest of the data is termed an outlier,
e.g., 9700 instead of 97.
Reasons for Outliers
• Typos – during collection, e.g., adding an extra zero by mistake.
• Measurement error – outliers in the data due to a faulty measurement instrument.
• Intentional error – errors which are induced intentionally, e.g., claiming a smaller
amount of alcohol consumed than actual.
• Legit outliers – values which are not errors but are in the data for legitimate reasons,
e.g., a CEO’s salary may genuinely be high compared to other employees.
5.4) Interquartile Range (IQR):
The IQR is the difference between the third quartile and the first quartile. It is robust to
outliers.
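As a sketch of the calculation (the marks are made up, and Python's statistics module is used here rather than the pandas calls shown elsewhere in this report):

```python
import statistics

# One typo-style outlier (9700 instead of 97) is included on purpose.
marks = [55, 60, 62, 65, 70, 72, 75, 80, 85, 9700]

# quantiles(..., n=4) returns the three quartile cut points Q1, Q2, Q3.
q1, _, q3 = statistics.quantiles(marks, n=4, method="inclusive")
iqr = q3 - q1
print(q1, q3, iqr)   # despite the huge outlier, the IQR stays modest
```

Note how the single extreme value barely affects the IQR, which is exactly what "robust to outliers" means.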
5.5) Histograms:
Histograms depict the underlying frequency of a set of discrete or continuous data that are
measured on an interval scale.
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
histogram = pd.read_csv("histogram.csv")   # reads data from the csv file
plt.hist(x='Overall Marks', data=histogram)
plt.show()
5.6) Inferential Statistics
Inferential statistics allows us to make inferences about a population from sample data.
5.7) Hypothesis Testing:
Hypothesis testing is a kind of statistical inference that involves asking a question, collecting data,
and then examining what the data tells us about how to proceed. The hypothesis to be tested is
called the null hypothesis and given the symbol H0. We test the null hypothesis against an
alternative hypothesis, which is given the symbol Ha.
5.8) T-Tests:
Used when we have only a sample, not the population statistics. We use the sample
standard deviation to estimate the population standard deviation. The t-test is more
prone to error because we only have samples.
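The one-sample t-statistic can be computed by hand as t = (mean − mu0) / (s / sqrt(n)); a small sketch on a made-up sample against a hypothesised mean of 50:

```python
import math
import statistics

sample = [48, 52, 50, 47, 53, 51]   # hypothetical sample
mu0 = 50                            # hypothesised population mean

n = len(sample)
mean = statistics.mean(sample)
s = statistics.stdev(sample)        # sample standard deviation (n - 1 denominator)
t = (mean - mu0) / (s / math.sqrt(n))
print(t)                            # small t: sample mean is close to mu0
```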
5.9) Z-Score:
The z-score (standard score) is the distance of an observed value from the mean,
measured in number of standard deviations.
Predictive Modelling
A data model helps organizations capture all the points of information necessary to perform
operations and enact policy based on the data they collect. This can be explained with an example
of a sales transaction which is broken down into related groups of data points, describing the
customer, the seller, the item sold, and the payment mechanism. For instance, if the sales
transactions were recorded without the date on which they occurred, it would be impossible to
enforce certain return policies. Data modelling in data science is also performed to help
organizations ensure that they are collecting all the necessary items of information in the
first place.
Making use of past data and attributes, we predict the future.
E.g., from the horror movies a viewer watched in the past, we predict which unwatched
horror movies they are likely to enjoy.
6.2) Stages of Predictive Modelling
1. Problem definition
2. Hypothesis Generation
3. Data Extraction/Collection
4. Data Exploration and Transformation
5. Predictive Modelling
6. Model Development/Implementation
• Outlier treatment
• Variable Transformation
3. MNAR (Missing Not At Random): the missing values are related to the variable in
which they occur.
Identifying Missing Values
Syntax:
1. describe()
2. isnull()
The output of isnull() is True or False.
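A small sketch of both calls on an in-memory DataFrame (the column names Overallmarks and Subject follow the earlier examples; the rows are made up):

```python
import pandas as pd
import numpy as np

# Hypothetical frame with two missing entries
df = pd.DataFrame({
    "Overallmarks": [72.0, np.nan, 85.0, 90.0],
    "Subject": ["Math", "Math", None, "Physics"],
})

print(df.describe())        # the count row shows only 3 of 4 Overallmarks present
print(df.isnull())          # True where a value is missing
print(df.isnull().sum())    # missing values per column
```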
Different methods to deal with missing values
1. Imputation
Continuous – impute with the help of the mean, the median, or a regression model.
Categorical – impute with the mode or a classification model.
2. Deletion
Row-wise or column-wise deletion, but it leads to loss of data.
Outlier Treatment
Reasons for Outliers:
1. Data entry Errors
2. Measurement Errors
3. Processing Errors
4. Change in underlying population
6.7) Types of Outliers:
6.7.1) Univariate
Analysing only one variable for outliers.
Example: in a box plot of height and weight, weight alone will be analysed for outliers.
6.7.2) Bivariate
Analysing both variables together for outliers.
E.g., in a scatter plot of height and weight, both will be analysed.
Identifying Outlier
Graphical Method
• Box Plot
• Scatter Plot
Formula Method
Using the box plot rule, a value is an outlier if it is
< Q1 - 1.5 * IQR or > Q3 + 1.5 * IQR
where IQR = Q3 - Q1
Q3 = value of the 3rd quartile
Q1 = value of the 1st quartile
Treating Outliers
1. Deleting observations
2. Transforming and binning values
3. Imputing outliers like missing values
4. Treating them as a separate group
Variable Transformation
Is the process by which:
1. We replace a variable with some function of that variable, e.g., replacing a
variable x with its log.
2. We change the distribution or relationship of a variable with others.
Used to:
1. Change the scale of a variable
2. Transform nonlinear relationships into linear relationships
3. Create a symmetric distribution from a skewed distribution
Common methods of variable transformation: logarithm, square root, cube root, binning, etc.
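The box-plot rule above can be sketched in a few lines of plain Python; the values below are made up so that one of them clearly violates the upper fence:

```python
import statistics

values = [12, 14, 15, 15, 16, 18, 19, 20, 21, 300]

# Quartile cut points Q1, Q2, Q3, then the 1.5 * IQR fences.
q1, _, q3 = statistics.quantiles(values, n=4, method="inclusive")
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = [v for v in values if v < low or v > high]
print(outliers)   # [300]
```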
Model Building
It is a process to create a mathematical model for estimating / predicting the future based on past
data.
Example-
A retailer wants to know the default behaviour of its credit card customers: they want to
predict the probability of default for each customer over the next three months.
• The probability of default would lie between 0 and 1.
• Assume, to begin with, that every customer has a 10% default rate, so the
probability of default for each customer in the next 3 months = 0.1.
The model then moves this probability towards one of the extremes based on attributes
from past information:
• A customer with a volatile income is more likely to default (probability closer to 1).
• A customer with a healthy credit history over the last years has a low chance of
default (probability closer to 0).
Algorithm Selection
7.1) Algorithms
• Logistic Regression
• Decision Tree
• Random Forest
Training Model
It is the process of learning the relationship / correlation between the independent and
dependent variables. We use the dependent variable of the train data set to
predict/estimate.
Dataset
• Train
Past data (known dependent variable), used to train the model.
• Test
Future data (unknown dependent variable), used to score.
Prediction / Scoring
It is the process of estimating/predicting the dependent variable of the test data set by
applying the model rules. We apply the learning from training to the test data set for
prediction/estimation.
The equation of the regression line is represented as:
y = a + b * x
where a is the intercept and b is the slope of the line.
[Figure: scatter plot of Y-values with the fitted regression line]
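For illustration, the slope and intercept of such a line can be fitted by ordinary least squares; a minimal pure-Python sketch on made-up, perfectly linear points:

```python
# Ordinary least squares for y = a + b * x
xs = [0, 1, 2, 3, 4, 5]
ys = [1, 3, 5, 7, 9, 11]   # perfectly linear: y = 1 + 2 * x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# b = covariance(x, y) / variance(x); a = mean_y - b * mean_x
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x
print(a, b)   # 1.0 2.0, recovering the intercept and slope exactly
```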
Logistic Regression
Logistic regression is a statistical model that in its basic form uses a logistic function to model a
binary dependent variable, although many more complex extensions exist.
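At the core of logistic regression is the logistic (sigmoid) function, which squashes any real-valued score into a value between 0 and 1 that can be read as a probability; a minimal sketch:

```python
import math

def sigmoid(z):
    """Logistic function: maps any real z into the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0))     # 0.5: a score of zero means a 50/50 prediction
print(sigmoid(4))     # close to 1
print(sigmoid(-4))    # close to 0
```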
K-Means Clustering (Unsupervised learning)
K-means clustering is a type of unsupervised learning, which is used when you have unlabelled
data (i.e., data without defined categories or groups). The goal of this algorithm is to find groups in
the data, with the number of groups represented by the variable K. The algorithm works iteratively
to assign each data point to one of K groups based on the features that are provided. Data points
are clustered based on feature similarity.
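The assign-and-update loop can be sketched in plain Python for one-dimensional data (the points, K = 2, and the initial centroids are all made up for illustration; with these points no cluster ever empties):

```python
# A toy 1-D K-means (K = 2): iteratively assign points to the nearest
# centroid, then move each centroid to the mean of its assigned points.
points = [1.0, 1.5, 2.0, 10.0, 11.0, 12.0]
centroids = [1.0, 10.0]          # initial guesses

for _ in range(10):              # a few iterations suffice here
    clusters = [[], []]
    for p in points:
        # assign p to the closest centroid
        idx = min(range(2), key=lambda i: abs(p - centroids[i]))
        clusters[idx].append(p)
    # move each centroid to the mean of its cluster
    centroids = [sum(c) / len(c) for c in clusters]

print(centroids)   # [1.5, 11.0]: the two natural groups are found
```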
METHODOLOGY
PREDICTING IF CUSTOMER BUYS TERM DEPOSIT
Problem Statement:
Your client is a retail banking institution. Term deposits are a major source of income
for a bank.
A term deposit is a cash investment held at a financial institution. Your money is
invested for an agreed rate of interest over a fixed amount of time, or term. The bank
has various outreach plans to sell term deposits to their customers such as email
marketing, advertisements, telephonic marketing and digital marketing.
Telephonic marketing campaigns still remain one of the most effective ways to reach
out to people. However, they require huge investment as large call centres are hired
to actually execute these campaigns. Hence, it is crucial to identify the customers
most likely to convert beforehand so that they can be specifically targeted via call.
You are provided with the client data such as: age of the client, their job type, their
marital status, etc. Along with the client data, you are also provided with the
information of the call such as the duration of the call, day and month of the call,
etc. Given this information, your task is to predict whether the client will subscribe to a
term deposit.
Data Dictionary: -
Prerequisites:
We have the following files:
• train.csv: This dataset will be used to train the model. This file contains all the
client and call details as well as the target variable “subscribed”.
• test.csv: The trained model will be used to predict whether a new set of clients
will subscribe to the term deposit or not for this dataset.
• TEST.csv file: -
TRAIN.csv file: -
Problem Description
We are provided with two files: train.csv and test.csv. The train.csv dataset is used to
train the model; it contains all the client and call details as well as the target variable
“subscribed”. The trained model is then used to predict whether a new set of clients
will subscribe to the term deposit.
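As an illustration of the kind of exploration this task involves, the sketch below computes the subscription rate per job type; the rows are made-up stand-ins for train.csv records, with hypothetical column names job and subscribed based on the data description above:

```python
from collections import defaultdict

# Hypothetical rows standing in for train.csv records
rows = [
    {"job": "admin",      "subscribed": "yes"},
    {"job": "admin",      "subscribed": "no"},
    {"job": "technician", "subscribed": "no"},
    {"job": "technician", "subscribed": "no"},
    {"job": "retired",    "subscribed": "yes"},
]

counts = defaultdict(lambda: [0, 0])     # job -> [subscribed, total]
for row in rows:
    counts[row["job"]][1] += 1
    if row["subscribed"] == "yes":
        counts[row["job"]][0] += 1

rates = {job: sub / total for job, (sub, total) in counts.items()}
print(rates)   # {'admin': 0.5, 'technician': 0.0, 'retired': 1.0}
```

Rates like these suggest which customer segments the telephonic campaign should target first.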
RESULTS
In this complete 6-week training I successfully learnt about Data Science, and I am
now able to perform data analysis using Python. I also attempted the various quizzes
and assignments provided for periodic evaluation during the 6 weeks, and completed
the training with an 82% score in the Final Test.
REFERENCES
1) WIKIPEDIA.COM: We used Wikipedia to define certain terms, including the history and
basics of Python and data visualisation.
2) TEACHNOOK.REPORT: We used the Teachnook site for data science in order to predict
the optimum site data.
3) SCRIBB.NET: We used SCRIBB.NET to learn how certain functions work and how to
define them in code. We also used GeeksforGeeks to learn more about the libraries used
in this project for data wrangling, data collection, web scraping, data visualisation,
machine learning, and more.
***