Python-for-Data-Analysis-edgar
Python for Data Analysis
Prepared by Edgar for the Statistical and Financial Accounting class
History
• Started by Guido van Rossum as a hobby project
Visualization libraries
• matplotlib
• Seaborn
Python Libraries for Data Science
NumPy:
introduces objects for multi-dimensional arrays, along with routines for fast numerical operations on them
Link: http://www.numpy.org/
Python Libraries for Data Science
SciPy:
collection of algorithms for linear algebra, differential equations, numerical
integration, optimization, statistics and more
built on NumPy
Link: https://www.scipy.org/scipylib/
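An illustrative sketch of the kind of routine SciPy provides (hypothetical toy data, not from the slides): a two-sample t-test with scipy.stats.
In [ ]: # Compare the means of two small samples
from scipy import stats
group_a = [112, 105, 119, 121, 108]
group_b = [98, 101, 95, 103, 99]
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(t_stat, p_value)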
Python Libraries for Data Science
Pandas:
adds data structures and tools designed to work with table-like data (similar
to Series and Data Frames in R)
Link: http://pandas.pydata.org/
Python Libraries for Data Science
SciKit-Learn:
provides machine learning algorithms: classification, regression, clustering,
model validation etc.
Link: http://scikit-learn.org/
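An illustrative sketch (hypothetical toy data): clustering a few points with scikit-learn's KMeans.
In [ ]: # Cluster six 2-D points into two groups
from sklearn.cluster import KMeans
points = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]
model = KMeans(n_clusters=2, n_init=10).fit(points)
print(model.labels_)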
Python Libraries for Data Science
matplotlib:
python 2D plotting library which produces publication quality figures in a
variety of hardcopy formats
Link: https://matplotlib.org/
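An illustrative sketch (hypothetical data and file name): plotting a line and saving it to a hardcopy file.
In [ ]: # Simple line plot saved as a PNG file
import matplotlib.pyplot as plt
x = [1, 2, 3, 4, 5]
y = [1, 4, 9, 16, 25]
plt.plot(x, y)
plt.xlabel('x')
plt.ylabel('x squared')
plt.savefig('example_plot.png')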
Python Libraries for Data Science
Seaborn:
based on matplotlib; provides a high-level interface for drawing attractive statistical graphics
Link: https://seaborn.pydata.org/
Anaconda (Jupyter notebooks)
• Easy-to-use environment
• Web-based
• Combines both text and code in one document
• Comes with a great number of useful packages
Starting a notebook
Toolbar: Save; Cut, copy, paste; New block; Move block; Run and stop execution; Reset block and clear output; Block type. You can also create a folder and rename files.
Running blocks
• By pressing the Run button
• Shift + Enter – runs the block
• Alt + Enter – runs the block and creates a new block below it
Practical guide: Using Jupyter Notebook and Python
• Ensure that you have installed Anaconda on your system.
• Once installed, open Jupyter Notebook.
• I have attached a file called “salaries.csv”.
• Copy this file and paste it into your working directory. [To determine
your working directory while in Jupyter, type import os, run the
command, then type os.getcwd() and you will be able to see your
working directory. You can then copy and paste “salaries.csv” to this
location; see the sketch below.]
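A short sketch of the working-directory check described above:
In [ ]: # Show the current working directory, where salaries.csv should be placed
import os
os.getcwd()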
Loading Python Libraries
In [ ]: #Import Python Libraries
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
import seaborn as sns
Reading data using pandas
In [ ]: #Read csv file
df = pd.read_csv("http://rcs.bu.edu/examples/python/data_analysis/Salaries.csv")
Note: The above command has many optional arguments to fine-tune the data import process.
pandas also provides readers for other formats, for example:
pd.read_excel('myfile.xlsx', sheet_name='Sheet1', index_col=None, na_values=['NA'])
pd.read_stata('myfile.dta')
pd.read_sas('myfile.sas7bdat')
pd.read_hdf('myfile.h5', 'df')
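If you copied “salaries.csv” into your working directory as described in the practical guide, the same data can presumably be read from the local file:
In [ ]: #Read the local copy of the csv file
df = pd.read_csv("salaries.csv")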
Exploring data frames
In [3]: #List first 5 records
df.head()
Data Frame data types
Pandas Type                  Native Python Type                                               Description
object                       string                                                           The most general dtype. Will be assigned to your column if column has mixed types (numbers and strings).
datetime64, timedelta[ns]    N/A (but see the datetime module in Python’s standard library)   Values meant to hold time data. Look into these for time series experiments.
Data Frame data types
In [4]: #Check a particular column type
df['salary'].dtype
Out[4]: dtype('int64')
Data Frames attributes
df.attribute   description
dtypes         list the types of the columns
columns        list the column names
axes           list the row labels and column names
ndim           number of dimensions
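A short sketch of two of these attributes on our data frame:
In [ ]: #List the column names, then the column types
print(df.columns)
print(df.dtypes)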
Data Frames methods
Unlike attributes, Python methods have parentheses.
All attributes and methods can be listed with the dir() function: dir(df)
df.method()                description
head( [n] ), tail( [n] )   first/last n rows
Note: there is an attribute rank for pandas data frames, so to select a column with the name
"rank" we should use bracket notation, df['rank'], rather than attribute access (df.rank).
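A brief sketch of that point:
In [ ]: #Select the rank column with brackets; df.rank would return the DataFrame.rank method instead
df['rank'].head()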
Data Frames groupby method
Using "group by" method we can:
• Split the data into groups based on some criteria
• Calculate statistics (or apply a function) to each group
• Similar to dplyr() function in R
In [ ]: #Group data using rank
df_rank = df.groupby(['rank'])
In [ ]: #Calculate mean value for each numeric column per each group
df_rank.mean()
Data Frames groupby method
Once the groupby object is created, we can calculate various statistics for each group:
In [ ]: #Calculate mean salary for each professor rank:
df.groupby('rank')[['salary']].mean()
Note: If single brackets are used to specify the column (e.g. salary), then the output is a Pandas Series object.
When double brackets are used, the output is a Data Frame.
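A minimal sketch contrasting the two bracket styles described in the note:
In [ ]: #Single brackets return a Series, double brackets return a Data Frame
df.groupby('rank')['salary'].mean()
df.groupby('rank')[['salary']].mean()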
Data Frame: filtering
To subset the data we can apply Boolean indexing. This indexing is commonly
known as a filter. For example, if we want to subset the rows in which the salary
value is greater than $120K:
In [ ]: #Select observations with salary greater than 120K:
df_sub = df[ df['salary'] > 120000 ]
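As a further illustrative sketch (assuming the service column used later in the sorting example is numeric), conditions can be combined with & and |, with each condition in parentheses:
In [ ]: #Select observations with salary above 120K and more than 10 years of service
df_sub2 = df[(df['salary'] > 120000) & (df['service'] > 10)]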
Data Frames: Slicing
When selecting one column, it is possible to use a single set of brackets, but the
resulting object will be a Series (not a DataFrame):
In [ ]: #Select column salary:
df['salary']
When we need to select more than one column and/or want the output to be a
DataFrame, we should use double brackets:
In [ ]: #Select columns rank and salary:
df[['rank','salary']]
Data Frames: Selecting rows
If we need to select a range of rows, we can specify the range using ":".
Notice that the first row has position 0, and the last value in the range is omitted:
so for the range 0:10, the first 10 rows are returned, with positions starting at 0
and ending at 9.
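A minimal sketch of such a row slice:
In [ ]: #Select the first 10 rows (positions 0 through 9)
df[0:10]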
Data Frames: method loc
If we need to select a range of rows using their labels, we can use the loc method:
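An illustrative sketch (assuming the rank and salary columns, and the df_sub frame created in the filtering example):
In [ ]: #Select rows with labels 10 through 20, and the columns rank and salary
df_sub.loc[10:20, ['rank', 'salary']]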
Data Frames: method iloc
If we need to select a range of rows and/or columns using their positions, we can
use the iloc method:
In [ ]: #Select rows and columns by their positions:
df_sub.iloc[10:20,[0, 3, 4, 5]]
Data Frames: method iloc
(summary)
df.iloc[0] # First row of a data frame
df.iloc[i] #(i+1)th row
df.iloc[-1] # Last row
Data Frames: Sorting
We can sort the data by a value in a column. By default the sorting will occur in
ascending order and a new data frame is returned.
In [ ]: # Create a new data frame from the original, sorted by the column service
df_sorted = df.sort_values( by ='service')
df_sorted.head()
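As a further illustrative sketch, sort_values can also sort by several columns and in descending order (column names taken from the examples above):
In [ ]: # Sort by service (ascending), then by salary (descending)
df_sorted2 = df.sort_values(by=['service', 'salary'], ascending=[True, False])
df_sorted2.head()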
Missing Values
Missing values are marked as NaN
In [ ]: # Read a dataset with missing values
flights = pd.read_csv("http://rcs.bu.edu/examples/python/data_analysis/flights.csv")
Missing Values
There are a number of methods to deal with missing values in the data frame:
df.method() description
dropna() Drop missing observations
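A brief sketch of inspecting missing values before dropping them:
In [ ]: # Count missing values per column, then drop incomplete rows
flights.isnull().sum()
flights_clean = flights.dropna()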
Aggregation Functions in Pandas
Aggregation - computing a summary statistic for each group, e.g.:
• compute group sums or means
• compute group sizes/counts
Commonly used aggregation functions:
min, max
count, sum, prod
mean, median, mode, mad
std, var
Aggregation Functions in Pandas
The agg() method is useful when multiple statistics are computed per column:
In [ ]: flights[['dep_delay','arr_delay']].agg(['min','mean','max'])
Basic Descriptive Statistics
df.method() description
describe Basic statistics (count, mean, std, min, quantiles, max)
kurt kurtosis
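A one-line sketch of describe() on the salary column:
In [ ]: #Basic descriptive statistics for the salary column
df['salary'].describe()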
Graphics to explore the data
The Seaborn package is built on matplotlib but provides a high-level
interface for drawing attractive statistical graphics, similar to the ggplot2
library in R. It specifically targets statistical data visualization.
In [ ]: # Render plots inline in the notebook
%matplotlib inline
Graphics
seaborn function   description
distplot           histogram
barplot            estimate of central tendency for a numeric variable
violinplot         similar to boxplot, also shows the probability density of the data
jointplot          scatterplot with marginal distributions
regplot            regression plot
pairplot           pairwise relationships across an entire data frame
boxplot            boxplot
swarmplot          categorical scatterplot
factorplot         general categorical plot
Example from our dataset
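An illustrative sketch along these lines (assuming the salary and rank columns used earlier):
In [ ]: #Histogram of salaries and a boxplot of salary per rank
import seaborn as sns
import matplotlib.pyplot as plt
sns.distplot(df['salary'])
plt.show()
sns.boxplot(x='rank', y='salary', data=df)
plt.show()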
Basic statistical Analysis
statsmodels and scikit-learn - both have a number of functions for statistical analysis.
The first one is mostly used for conventional analysis using R-style formulas, while scikit-learn is
more tailored for Machine Learning.
statsmodels:
• linear regressions
• ANOVA tests
• hypothesis tests
• many more ...
scikit-learn:
• kmeans
• support vector machines
• random forests
• many more ...
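A minimal illustrative sketch of an R-style formula regression with statsmodels (assuming the salary and service columns from the salaries data):
In [ ]: # Ordinary least squares: salary explained by years of service
import statsmodels.formula.api as smf
model = smf.ols('salary ~ service', data=df).fit()
print(model.summary())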