(Many thanks to Evimaria Terzi and Mark Crovella for their code and examples)
Pandas is the Python Data Analysis Library.
Pandas is an extremely versatile tool for manipulating datasets, mostly tabular data. You can think of Pandas as the evolution of Excel spreadsheets: it adds more capabilities for coding, as well as SQL-style queries such as joins and group-bys.
It also produces high quality plots with matplotlib, and integrates nicely with other libraries that expect NumPy arrays.
You can find more details here
Most data can be viewed as tables or matrices (in the case where all entries are numeric). The rows correspond to objects and the columns correspond to the attributes or features.
There are different ways we can store such data tables in Python
Two-dimensional lists
D = [[0.3, 10, 1000],[0.5,2,509],[0.4, 8, 789]]
print(D)
D = [[30000, 'Married', 1],[20000,'Single', 0],[45000, 'Married', 0]]
print(D)
Numpy Arrays
NumPy is the Python library for numerical computations and matrix manipulations. It has much of the functionality of Matlab, but also supports data analysis operations (similar to Pandas). Read more about NumPy here: http://www.numpy.org/
The Array is the main data structure for numpy. It stores multidimensional numeric tables.
We can create numpy arrays from lists
import numpy as np
#1-dimensional array
x = np.array([2,5,18,14,4])
print ("\n Deterministic 1-dimensional array \n")
print (x)
#2-dimensional array
x = np.array([[2,5,18,14,4], [12,15,1,2,8]])
print ("\n Deterministic 2-dimensional array \n")
print (x)
There are also numpy operations that create arrays of different types
x = np.random.rand(5,5)
print ("\n Random 5x5 2-dimensional array \n")
print (x)
x = np.ones((4,4))
print ("\n 4x4 array with ones \n")
print (x)
x = np.diag([1,2,3])
print ("\n Diagonal matrix\n")
print(x)
Why do we need NumPy arrays? Because they support the different linear algebra operations we want to perform on numeric data.
For example:
x = np.random.randint(10,size=(2,3))
print("\n Random 2x3 array with integers")
print(x)
#Matrix transpose
print ("\n Transpose of the matrix \n")
print (x.T)
#multiplication and addition with scalar value
print("\n Matrix 2x+1 \n")
print(2*x+1)
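Besides transposes and scalar arithmetic, NumPy also supports matrix products. A small sketch (not part of the original examples) contrasting the matrix product with the element-wise product:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# Matrix product: rows of A times columns of B
C = A @ B          # equivalent to np.dot(A, B)
print(C)           # [[19 22]
                   #  [43 50]]

# Element-wise product, for comparison
print(A * B)       # [[ 5 12]
                   #  [21 32]]
```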
Transform back to list of lists
lx = [list(y) for y in x]
lx
Pandas data frames
A data frame is a table in which each row and column is given a label, very similar to a spreadsheet or a SQL table.
Pandas DataFrames are documented at: http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.html
Pandas dataframes enable different data analysis operations
A dataframe has names for the columns and the rows of the table. The column names are stored in the attribute columns, while the row names are stored in the attribute index. When these are not specified, they default to the numbers 0,1,...
There are multiple ways we can create a data frame. Here we list just a few.
import pandas as pd #The pandas library
from pandas import Series, DataFrame #Main pandas data structures
#Creating a data frame from a list of lists
df = pd.DataFrame([[1,2,3],[9,10,12]])
print(df)
# Each list becomes a row
# Names of columns are 0,1,2
# Rows are indexed by position numbers 0,1
#Creating a data frame from a numpy array
df = pd.DataFrame(np.array([[1,2,3],[9,10,12]]))
print(df)
# Specifying column names
df = pd.DataFrame(np.array([[1,2,3],[9,10,12]]), columns=['A','B','C'])
print(df)
#Creating a data frame from a dictionary
# Keys are column names, values are lists with column values
dfe = pd.DataFrame({'A':[1,2,3], 'B':['a','b','c']})
print(dfe)
# Reading from a csv file:
df = pd.read_csv('example.csv')
print(df)
# The first row of the file is used for the column names
# The property columns gives us the column names
print(df.columns)
print(list(df.columns))
# Reading from a csv file without header:
df = pd.read_csv('no-header.csv',header = None)
print(df)
# Reading from an Excel file:
df = pd.read_excel('example.xlsx')
print(df)
#Writing to a csv file:
df.to_csv('example2.csv')
for x in open('example2.csv').readlines():
    print(x.strip())
# By default the row index is added as a column; we can remove it by setting index=False
df.to_csv('example2.csv',index = False)
for x in open('example2.csv').readlines():
    print(x.strip())
Fetching data
For demonstration purposes, we'll use a library built-in to Pandas that fetches data from standard online sources. More information on what types of data you can fetch is at: https://pandas-datareader.readthedocs.io/en/latest/remote_data.html
We will use stock quotes from IEX. To make use of these, you first need to create an account and obtain an API key. Then you set the environment variable IEX_API_KEY to the value of the key, as shown below.
import os
os.environ["IEX_API_KEY"] = "pk_4f1eb9a770e04d2ebc44123e297618bb"#"pk_******************************"
import pandas_datareader.data as web # For accessing web data - you need to install this
from datetime import datetime #For handling dates
stocks = 'FB'
data_source = 'iex'
start = datetime(2018,1,1)
end = datetime(2018,12,31)
stocks_data = web.DataReader(stocks, data_source, start, end)
#If you want to load only some of the attributes:
#stocks_data = web.DataReader(stocks, data_source, start, end)[['open','close']]
# the method info() outputs basic information for our data frame
stocks_data.info()
#the method head() outputs the top rows of the data frame
stocks_data.head()
#the method tail() outputs the last rows of the data frame
stocks_data.tail()
Note that the date attribute is the index of the rows, not an attribute.
#trying to access the date column will give an error
stocks_data.date
The number of rows in the DataFrame:
len(stocks_data)
stocks_data.to_csv('stocks_data.csv')
for x in open('stocks_data.csv').readlines()[0:10]:
    print(x.strip())
df = pd.read_csv('stocks_data.csv')
df.head()
Note that in the new dataframe, there is now a date column, while the index values are numbers 0,1,...
The columns are the "features" in your data
df.columns
We can also assign a list to the columns property in order to change the attribute names.
Alternatively, you can change the name of an attribute using rename:
df = df.rename(columns = {'volume':'V'})
print(list(df.columns))
df.columns = ['date', 'open', 'high', 'low', 'close', 'vol']
df.head()
Selecting a single column from your data.
It is important to keep in mind that selecting a single column returns a new object (a pandas Series), not a view into the original data frame.
df['open'].head()
Another way of selecting a single column from your data
df.open.head()
Selecting multiple columns
df[['open','close']].head()
We can use the values property to obtain the values of one or more attributes. It returns a numpy array. You can transform it into a list by applying the list() operator.
df.open.values
df[['open','close']].values
A DataFrame object has many useful methods.
df.mean() #produces the mean of the columns/features
Note that date did not appear in the list. This is because it stores strings.
df.std() #produces the standard deviation of the columns/features
df.sem() #produces the standard error of the mean of the columns/features
#confidence interval
import scipy.stats as stats
conf = 0.95
t = stats.t.ppf((1+conf)/2.0, len(df)-1)
(df.mean()-t*df.sem(), df.mean()+t*df.sem())
df.median() #produces the median of the columns/features
df.open.mean()
#95%-confidence interval
(df.open.mean()-t*df.open.sem(), df.open.mean()+t*df.open.sem())
Use describe to get all statistics for the data
stocks_data.describe()
stocks_data.sum()
The functions we have seen work on columns. We can apply them along rows as well by specifying the axis of the operation:
axis = 0 aggregates down the rows, producing one value per column; this is the default behavior
axis = 1 aggregates across the columns, producing one value per row
stocks_data.sum(axis=1)
Sorting: You can sort by a specific column, ascending (default) or descending. You can also sort inplace.
stocks_data.sort_values(by = 'open', ascending =False).head()
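The text above also mentions sorting by more than one column and sorting in place; a quick sketch on a small synthetic frame (not the stock data):

```python
import pandas as pd

d = pd.DataFrame({'open': [3, 1, 2], 'close': [5, 5, 4]})

# Sort by two columns: first by close, ties broken by open
print(d.sort_values(by=['close', 'open']))

# Sorting in place modifies d itself (and returns None)
d.sort_values(by='open', inplace=True)
print(d.open.tolist())   # [1, 2, 3]
```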
Methods like sum( ) and std( ) work on entire columns.
We can run our own functions across all values in a column (or row) using apply( ).
df.date.head()
The values property of the column returns a numpy array of the column's values. Inspecting the first value reveals that these are strings with a particular format.
first_date = df.date.values[0]
first_date
#returns a string
The datetime library handles dates. The method strptime transforms a string into a date (according to a format given as parameter).
datetime.strptime(first_date, "%Y-%m-%d")
We will now make use of two operations:
The apply method takes a function, given as input, and applies it to the entries of a data frame. In the case below we apply it to just one column.
The lambda expression lets us define an anonymous function that takes some parameters (d) and uses them to compute some expression.
Using a lambda with apply, we can apply the function to all the entries of the data frame (in this case the column values).
df.date = df.date.apply(lambda d: datetime.strptime(d, "%Y-%m-%d"))
date_series = df.date # We want to keep the dates
df.date.head()
#Another way to do the same thing, by applying the function to every row (axis = 1)
#df.date = df.apply(lambda row: datetime.strptime(row.date, "%Y-%m-%d"), axis=1)
df.date.head()
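An alternative not used in this notebook: pandas can parse a whole column of date strings at once with pd.to_datetime, which avoids the per-entry apply. A small sketch:

```python
import pandas as pd

# A column of date strings in the same format as the stock data
s = pd.Series(['2018-01-02', '2018-01-03'])

# Vectorized parsing of the entire column
dates = pd.to_datetime(s, format='%Y-%m-%d')
print(dates.dt.year.tolist())   # [2018, 2018]
```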
For example, we can obtain the integer part of the open value
#dftest = df[['open','close']]
#dftest.apply(lambda x: int(x))
df.apply(lambda r: int(r.open), axis=1)
Each row in a DataFrame is associated with an index, which is a label that uniquely identifies a row.
The row indices so far have been auto-generated by pandas, and are simply integers starting from 0.
From now on we will use dates instead of integers for indices -- the benefits of this will show later.
Overwriting the index is as easy as assigning to the index property of the DataFrame.
df.index = df.date
df.head()
Another example, using the simple dfe data frame we created earlier
dfe
dfe.index = dfe.B
dfe
Now that we have made an index based on date, we can drop the original date column. (We kept a copy in date_series earlier, so the dates remain available for later use.)
df = df.drop(columns = ['date']) #Equivalent to df.drop(['date'], axis=1)
#axis = 0 refers to dropping labels from rows (or you can use index = labels)
#axis = 1 refers to dropping labels from columns.
df.info()
So far we've seen how to access a column of the DataFrame. To access a row we use a different notation.
To access a row by its index value, use the .loc indexer.
df.loc[datetime(2018,5,7)]
To access a row by its sequence number (i.e., like an array index), use the .iloc indexer ('integer location').
df.iloc[10:20] #dataframe with rows 10 through 19
df.iloc[0:2,[1,3]] #dataframe with rows 0:2, and the second and fourth columns
df[['high','close']].iloc[0:2]
We can iterate through the rows of a DataFrame with .iterrows()
num_positive_days = 0
for idx, row in df.iterrows(): #returns the index name and the row
    if row.close > row.open:
        num_positive_days += 1
print("The total number of positive-gain days is {}.".format(num_positive_days))
You can also do it this way:
num_positive_days = 0
for i in range(len(df)):
    row = df.iloc[i]
    if row.close > row.open:
        num_positive_days += 1
print("The total number of positive-gain days is {}.".format(num_positive_days))
Or this way:
pos_days = [idx for (idx,row) in df.iterrows() if row.close > row.open]
print("The total number of positive-gain days is "+str(len(pos_days)))
It is very easy to select interesting rows from the data.
All these operations below return a new DataFrame, which itself can be treated the same way as all DataFrames we have seen so far.
tmp_high = df.high > 170
tmp_high.head()
Summing a Boolean array is the same as counting the number of True values.
sum(tmp_high)
Now, let's select only the rows of df that correspond to tmp_high
df[tmp_high].head()
Putting it all together, we have the following commonly-used patterns:
positive_days = df[df.close > df.open]
positive_days.head()
very_positive_days = df[df.close-df.open > 5]
very_positive_days.head()
df[(df.high<170)&(df.low>80)]
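The same compound selection can also be written with the query method, a stylistic alternative (not used elsewhere in this notebook) that takes the condition as a string:

```python
import pandas as pd

# Small synthetic frame standing in for the stock data
d = pd.DataFrame({'high': [160, 175, 168], 'low': [85, 90, 70]})

# Boolean-mask version
mask = d[(d.high < 170) & (d.low > 80)]

# query() version: selects the same rows
q = d.query('high < 170 and low > 80')
print(q.equals(mask))   # True
```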
To create a new column, simply assign values to it. Think of the columns as a dictionary:
df['profit'] = (df.close - df.open)
df.head()
df.profit[df.profit>0].describe()
for idx, row in df.iterrows():
    if row.close < row.open:
        df.loc[idx,'gain']='negative'
    elif (row.close - row.open) < 1:
        df.loc[idx,'gain']='small_gain'
    elif (row.close - row.open) < 3:
        df.loc[idx,'gain']='medium_gain'
    else:
        df.loc[idx,'gain']='large_gain'
df.head()
Here is another, more "functional", way to accomplish the same thing: define a function that classifies rows, and apply it to each row.
def gainrow(row):
    if row.close < row.open:
        return 'negative'
    elif (row.close - row.open) < 1:
        return 'small_gain'
    elif (row.close - row.open) < 3:
        return 'medium_gain'
    else:
        return 'large_gain'
df['test_column'] = df.apply(gainrow, axis = 1)
#axis = 1 applies the function to each row; axis = 0 applies it to each column
df.head()
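A third, fully vectorized way (a sketch, not from the original notebook) uses numpy.select, which evaluates a list of conditions in order, just like the if/elif chain above:

```python
import numpy as np
import pandas as pd

# Synthetic rows covering each gain category
d = pd.DataFrame({'open':  [10.0, 10.0, 10.0, 10.0],
                  'close': [ 9.0, 10.5, 12.0, 15.0]})

diff = d.close - d.open
conditions = [diff < 0, diff < 1, diff < 3]          # checked in order
choices = ['negative', 'small_gain', 'medium_gain']
d['gain'] = np.select(conditions, choices, default='large_gain')
print(d.gain.tolist())
# ['negative', 'small_gain', 'medium_gain', 'large_gain']
```

This avoids Python-level loops entirely, which matters on large data frames.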
OK, point made, let's get rid of that extraneous test_column:
df = df.drop('test_column', axis = 1)
df.head()
Data often has missing values. In Pandas these are denoted as NaN values. They may be part of our data (e.g., empty cells in an Excel sheet), or they may appear as the result of a join. There are special methods for handling these values.
mdf = pd.read_csv('example-missing.csv')
mdf
We can fill the values using the fillna method
mdf.fillna(0)
mdf.A = mdf.A.fillna(mdf.A.mean())
mdf = mdf.fillna('')
mdf
We can drop the rows with missing values
mdf = pd.read_csv('example-missing.csv')
mdf.dropna()
We can find those rows
mdf[mdf.B.isnull()]
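To see how many values are missing in each column, isnull() can be combined with sum(). A sketch on synthetic data (since the contents of example-missing.csv are not shown here):

```python
import numpy as np
import pandas as pd

# Frame with one missing value in each column
m = pd.DataFrame({'A': [1.0, np.nan, 3.0],
                  'B': ['x', 'y', None]})

# isnull() gives a Boolean frame; summing counts the True values per column
print(m.isnull().sum())
# A    1
# B    1
```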
An extremely powerful DataFrame method is groupby(). It is entirely analogous to GROUP BY in SQL.
It will group the rows of a DataFrame by the values in one (or more) columns, and let you iterate through each group.
Here we will look at the average gain among the categories of gains (negative, small, medium, and large) we defined above and stored in the column gain.
gain_groups = df.groupby('gain')
type(gain_groups)
Essentially, gain_groups behaves like a dictionary: its keys are the unique values of the gain column, and its values are the dataframes of rows with that gain value.
for gain, gain_data in gain_groups:
    print(gain)
    print(gain_data.head())
    print('=============================')
We can obtain the dataframe that corresponds to a specific group by using the get_group method of the groupby object
sm = gain_groups.get_group('small_gain')
sm.head()
for gain, gain_data in df.groupby("gain"):
    print('The average closing value for the {} group is {}'.format(gain,
          gain_data.close.mean()))
for gain, gain_data in df.groupby("gain"):
    print('The median volume value for the {} group is {}'.format(gain,
          gain_data.vol.median()))
We often want to do a typical SQL-like group by, where we group by one or more attributes and aggregate the values of (some of) the other attributes.
For example group by "gain" and take the average of the values for open, high, low, close, volume.
You can also use other aggregators such as count, sum, median, max, min.
Pandas returns a new dataframe indexed by the values of the group-by attribute(s), with the other attributes as columns
gdf= df[['open','low','high','close','vol','gain']].groupby('gain').mean()
type(gdf)
gdf
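The agg method lets you apply several of the aggregators mentioned above at once, or a different aggregator per column. A sketch on a small synthetic frame:

```python
import pandas as pd

d = pd.DataFrame({'gain': ['neg', 'neg', 'pos'],
                  'open': [1.0, 3.0, 5.0],
                  'vol':  [10, 20, 30]})

# Several statistics for every numeric column at once
print(d.groupby('gain').agg(['mean', 'max']))

# A different aggregator for each column
g = d.groupby('gain').agg({'open': 'mean', 'vol': 'sum'})
print(g)
# for the 'neg' group: open averages to 2.0, vol sums to 30
```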
If you want to remove the (hierarchical) index and have the group-by attribute(s) become regular columns of the table, you can use the reset_index method. This method turns the index into one more column of the dataframe and reverts to the default integer index.
#This can be used to remove the hierarchical index, if necessary
gdf = gdf.reset_index()
gdf
gdf.set_index('gain')
Another example:
test = pd.DataFrame({'A':[1,2,3,4],'B':['a','b','b','a'],'C':['a','a','b','a']})
test
gtest = test.groupby(['B','C']).mean()
gtest
#note that in this case we get a hierarchical index
gtest = gtest.reset_index()
gtest
#the hierarchical index is flattened out
We can join data frames in a similar way to how we do joins in SQL
data_source = 'iex'
start = datetime(2018,1,1)
end = datetime(2018,12,31)
dfb = web.DataReader('FB', data_source, start, end)
dgoog = web.DataReader('GOOG', data_source, start, end)
print(dfb.head())
print(dgoog.head())
Perform join on the date (the index value)
common_dates = pd.merge(dfb,dgoog,on='date')
common_dates.head()
Compute gain and perform join on the date AND gain
dfb['gain'] = dfb.apply(gainrow, axis = 1)
dgoog['gain'] = dgoog.apply(gainrow, axis = 1)
dfb['profit'] = dfb.close-dfb.open
dgoog['profit'] = dgoog.close-dgoog.open
common_gain_dates = pd.merge(dfb, dgoog, on=['date','gain'])
common_gain_dates.head()
More join examples, including left outer join
left = pd.DataFrame({'key': ['foo', 'foo', 'boo'], 'lval': [1, 2,3]})
print(left)
print('\n')
right = pd.DataFrame({'key': ['foo', 'hoo'], 'rval': [4, 5]})
print(right)
print('\n')
dfm = pd.merge(left, right, on='key') #keeps only the common key 'foo'
print(dfm)
Left outer join
dfm = pd.merge(left, right, on='key', how='left') #keeps all the keys from the left and puts NaN for missing values
print(dfm)
print('\n')
dfm = dfm.fillna(0) #fills the NaN values with specified value
dfm
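For completeness, a full outer join keeps the keys from both sides; passing indicator=True adds a _merge column showing where each row came from:

```python
import pandas as pd

left = pd.DataFrame({'key': ['foo', 'foo', 'boo'], 'lval': [1, 2, 3]})
right = pd.DataFrame({'key': ['foo', 'hoo'], 'rval': [4, 5]})

# Outer join: union of the keys; unmatched values become NaN
dfo = pd.merge(left, right, on='key', how='outer', indicator=True)
print(dfo)
# 'boo' appears with rval NaN (_merge == 'left_only'),
# 'hoo' with lval NaN (_merge == 'right_only')
```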