Now that we've covered some of the basic syntax and libraries in Python we can start to tackle our data analysis problem. We are interested in understanding the relationship between the weather and the number of mosquitos so that we can plan mosquito control measures. Since we want to apply these mosquito control measures at a number of different sites we need to understand how the relationship varies across sites. Remember that we have a series of CSV files with each file containing the data for a single location.
When approaching computational tasks like this one it is typically best to start small, check each piece of code as you go, and make incremental changes. This helps avoid marathon debugging sessions because it's much easier to debug one small piece of the code at a time than to write 100 lines of code and then try to figure out all of the different bugs in it.
Let's start by reading in the data from a single file and conducting a simple regression analysis on it. In fact, I would actually start by just importing the data and making sure that everything is coming in OK.
import pandas as pd
d = pd.read_csv('data/A2_mosquito_data.csv')
d
The import seems to be working properly, so that's good news, but does anyone see anything about the code that they don't like?
That's right. The variable name I've chosen for the data doesn't communicate any information about what it's holding, which means that when I come back to my code next month to change something I'm going to have a more difficult time understanding what the code is actually doing. This brings us to one of our first major lessons for the morning: in order to understand what our code is doing so that we can quickly make changes in the future, we need to write code for people, not computers, and an important first step is to use meaningful variable names.
import pandas as pd
data = pd.read_csv('data/A2_mosquito_data.csv')
data.head()
The .head() method lets us look at just the first few rows of the data. A method is a function attached to an object that operates on that object, so in this case we can think of it as being equivalent to head(data).
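To make the method idea concrete, here is a small sketch using a tiny made-up DataFrame (the column names mirror the real data, but the values are invented):

```python
import pandas as pd

# A tiny made-up DataFrame, just to illustrate how methods work;
# the real lesson data lives in data/A2_mosquito_data.csv
df = pd.DataFrame({'temperature': [80, 75, 90, 85],
                   'mosquitos': [120, 100, 150, 130]})

# .head(n) is a method: a function called on the object with dot notation
first_two = df.head(2)
print(first_two)

# With no argument .head() defaults to the first 5 rows,
# so here it returns all 4 rows of our tiny frame
print(len(df.head()))
```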
Everything looks good, but either global warming has gotten really out of control or the temperatures are in degrees Fahrenheit. Let's convert them to Celsius before we get started.
We don't need to reimport the data in our new cell because all of the executed cells in IPython Notebook share the same workspace.
However, it's worth noting that if we close the notebook and then open it again it is necessary to rerun all of the individual blocks of code that a code block relies on before continuing.
To rerun all of the cells in a notebook you can select Cell -> Run All from the menu.
data['temperature'] = (data['temperature'] - 32) * 5 / 9.0
data.head()
That's better.
Now let's go ahead and conduct a regression on the data.
We'll use the statsmodels library to conduct the regression.
import statsmodels.api as sm
regr_results = sm.OLS.from_formula('mosquitos ~ temperature + rainfall', data).fit()
regr_results.summary()
As you can see, statsmodels lets us use the names of the columns in our dataframe to clearly specify the form of the statistical model we want to fit. This also makes the code more readable, since the model we are fitting is written in a nice, human-readable manner. The summary method gives us a visual representation of the results.

This summary is nice to look at, but it isn't really useful for doing more computation, so we can look up particular values related to the regression using the regr_results attributes. These are variables that are attached to regr_results.
regr_results.params
regr_results.rsquared
If we want to hold onto these values for later we can assign them to variables:
parameters = regr_results.params
rsquared = regr_results.rsquared
And then we can plot the observed data against the values predicted by our regression to visualize the results. First, remember to tell the notebook that we want our plots to appear in the notebook itself.
%matplotlib inline
import matplotlib.pyplot as plt
predicted = parameters[0] + parameters[1] * data['temperature'] + parameters[2] * data['rainfall']
plt.plot(predicted, data['mosquitos'], 'ro')
min_mosquitos, max_mosquitos = min(data['mosquitos']), max(data['mosquitos'])
plt.plot([min_mosquitos, max_mosquitos], [min_mosquitos, max_mosquitos], 'k-')
OK, great. So putting this all together we now have a piece of code that imports the modules we need, loads the data into memory, fits a regression to the data, and stores the parameters and the fit of the model.
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
data = pd.read_csv('data/A2_mosquito_data.csv')
data['temperature'] = (data['temperature'] - 32) * 5 / 9.0
regr_results = sm.OLS.from_formula('mosquitos ~ temperature + rainfall', data).fit()
parameters = regr_results.params
rsquared = regr_results.rsquared
predicted = parameters[0] + parameters[1] * data['temperature'] + parameters[2] * data['rainfall']
plt.plot(predicted, data['mosquitos'], 'ro')
min_mosquitos, max_mosquitos = min(data['mosquitos']), max(data['mosquitos'])
plt.plot([min_mosquitos, max_mosquitos], [min_mosquitos, max_mosquitos], 'k-')
print(parameters)
print("R^2 = ", rsquared)
The next thing we need to do is loop over all of the possible data files, but in order to do that we're going to need to grow our code some more. Since our brain can only easily hold 5-7 pieces of information at once, and our code already has more than that many pieces, we need to start breaking our code into manageable sized chunks. This will let us read and understand the code more easily and make it easier to reuse pieces of our code. We'll do this using functions.
Functions in Python take the general form
def function_name(inputs):
    do stuff
    return output
So, if we want to write a function that returns the value of a number squared we could use:
def square(x):
    x_squared = x ** 2
    return x_squared
print("Four squared is", square(4))
print("Five squared is", square(5))
We can also just return the desired value directly.
def square(x):
    return x ** 2
square(3)
And remember, if we want to use the result of the function later we need to store it somewhere.
two_squared = square(2)
two_squared
Write a function that converts temperature from Fahrenheit to Celsius and use it to replace this line of code:

data['temperature'] = (data['temperature'] - 32) * 5 / 9.0

Write a function called analyze() that takes data as an input, performs the regression, makes the observed-predicted plot, and returns parameters.

*Walk through someone's result. When discussing, talk about different names. E.g., fahr_to_celsius is better than temp_to_celsius since it is explicit about both the input and the output. Talk about the fact that even though this doesn't save us any lines of code it's still easier to read.*
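One possible sketch of the conversion function (the name fahr_to_celsius is just one reasonable choice):

```python
def fahr_to_celsius(tempF):
    """Convert a temperature from Fahrenheit to Celsius."""
    return (tempF - 32) * 5 / 9.0

# The conversion line in the analysis then becomes:
#     data['temperature'] = fahr_to_celsius(data['temperature'])
print(fahr_to_celsius(32.0))   # 0.0, the freezing point of water
print(fahr_to_celsius(212.0))  # 100.0, the boiling point
```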
Let's take a closer look at what happens when we call a function. To make things clearer, we'll start by putting the initial value 32.0 in a variable and storing the final result in one as well:
# Don't worry if this fails
%load_ext tutormagic
%%tutor --lang python3
# Uncomment ^ that line if the previous cell ran OK
def celsius_to_kelvin(tempC):
    tempK = tempC + 273.15
    return tempK
original = 32.0
final = celsius_to_kelvin(original)
When the first three lines of this code are executed the function is created, but nothing happens. The function is like a recipe: it contains the information about how to do something, but it doesn't do so until you explicitly ask it to. We then create the variable original and assign the value 32.0 to it. The variables tempC and tempK don't exist yet.

When we call celsius_to_kelvin, Python creates another stack frame to hold the function's variables. Upon creation this stack frame only includes the inputs being passed to the function, so in our case tempC. As the function is executed, variables created by the function are stored in the function's stack frame, so tempK is created in the celsius_to_kelvin stack frame. When the call to celsius_to_kelvin returns a value, Python throws away celsius_to_kelvin's stack frame, including all of the variables it contains, and creates a new variable in the original stack frame to hold the temperature in Kelvin.

This global stack frame is always there; it holds the variables we defined outside the functions in our code. What it doesn't hold is the variables that were in the other stack frames. If we try to get the value of tempC or tempK after our function has finished running, Python tells us that there's no such thing:
print(tempK)
The reason for this is encapsulation, and it's one of the keys to writing correct, comprehensible programs. A function's job is to turn several operations into one so that we can think about a single function call instead of a dozen or a hundred statements each time we want to do something. That only works if functions don't interfere with each other by potentially changing the same variables; if they do, we have to pay attention to the details once again, which quickly overloads our short-term memory.
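A tiny made-up example of this isolation in action: rebinding a variable inside a function leaves the caller's variable untouched.

```python
def double(x):
    # x is local to the function; rebinding it here
    # has no effect on the caller's variable
    x = x * 2
    return x

value = 10
result = double(value)
print(value)   # still 10: double() could not interfere with it
print(result)  # 20: the only way out of the function is its return value
```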
Once we start putting things into functions so that we can re-use them, we need to start testing that those functions are working correctly. The most basic thing we can do is some informal testing to make sure the function is doing what it is supposed to do. To see how to do this, let's write a function to center the values in a dataset prior to conducting statistical analysis. Centering means setting the mean of each variable to be the same value, typically zero.
def center(data):
    return data - data.mean()
We could test this on our actual data, but since we don't know what the values ought to be, it will be hard to tell if the result was correct. Instead, let's create a made up data frame where we know what the result should look like.
import pandas as pd
test_data = pd.DataFrame([[1, 1], [1, 2]])
test_data
Now that we've made some test data we need to figure out what we think the result should be, and we need to do this before we run the test. This is important because we are biased to believe that any result we get back is correct, and we want to avoid that bias. This also helps make sure that we are confident in what we want the code to do. So, what should the result of running center(test_data) be?
OK, let's go ahead and run the function.
center(test_data)
That looks right, so let's try center on our real data:
data = pd.read_csv('data/A2_mosquito_data.csv')
center(data)
It's hard to tell from the default output whether the result is correct, but there are a few simple tests that will reassure us:
print('original mean:')
print(data.mean())
centered = center(data)
print()
print('mean of centered data:')
print(centered.mean())
The mean of the centered data is very close to zero; it's not quite zero because of floating point precision issues. We can even go further and check that the standard deviation hasn't changed (which it shouldn't if we've just centered the data):
print('std dev before and after:')
print(data.std())
print()
print(centered.std())
The standard deviations look the same. It's still possible that our function is wrong, but it seems unlikely enough that we're probably in good shape for now.
Testing is really important when writing scientific code. If you haven't checked that your code works properly, you can't be confident in your results. We'll talk more about testing tomorrow.
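A lightweight way to capture checks like the ones above is with assert statements, which fail loudly if a condition is false. A minimal sketch, using NumPy's allclose to allow for floating point error:

```python
import numpy as np
import pandas as pd

def center(data):
    return data - data.mean()

# The same made-up test data as above, where we know the right answer
test_data = pd.DataFrame([[1, 1], [1, 2]])
centered = center(test_data)

# Each centered column should have a mean of (almost) zero
assert np.allclose(centered.mean(), 0.0)

# Centering shifts the data but should not change its spread
assert np.allclose(centered.std(), test_data.std())

print('all tests passed')
```

If any assertion fails, Python raises an AssertionError immediately, so a silent run means the checks passed.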
OK, the center function seems to be working fine. Does anyone else see anything that's missing before we move on?

Yes, we should write some documentation to remind ourselves later what it's for and how to use it. This function may be fairly straightforward, but in most cases it won't be so easy to remember exactly what a function is doing in a few months. Just imagine looking at our analyze function a few months in the future and trying to remember exactly what it was doing just based on the code. The usual way to put documentation in code is to add comments like this:
# center(data): return a new DataFrame containing the original data centered around zero.
def center(data):
    return data - data.mean()
There's a better way to do this in Python. If the first thing in a function is a string that isn't assigned to a variable, that string is attached to the function as its documentation:
def center(data):
    """Return a new DataFrame containing the original data centered around zero."""
    return data - data.mean()
This is better because we can now ask Python's built-in help system to show us the documentation for the function.
help(center)
A string like this is called a docstring and there are also automatic documentation generators that use these docstrings to produce documentation for users. We use triple quotes because it allows us to include multiple lines of text and because it is considered good Python style.
def center(data):
    """Return a new DataFrame containing the original data centered on zero

    Example:

    >>> import pandas
    >>> data = pandas.DataFrame([[0, 1], [0, 2]])
    >>> center(data)
         0    1
    0  0.0 -0.5
    1  0.0  0.5
    """
    return data - data.mean()
help(center)
So now our code looks something like this:
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
def fahr_to_celsius(tempF):
    """Convert fahrenheit to celsius"""
    tempC = (tempF - 32) * 5 / 9.0
    return tempC

def analyze(data):
    """Perform regression analysis on mosquito data

    Takes a dataframe as input that includes columns named 'temperature',
    'rainfall', and 'mosquitos'.

    Performs a multiple regression to predict the number of mosquitos.
    Creates an observed-predicted plot of the result and
    returns the parameters of the regression.
    """
    regr_results = sm.OLS.from_formula('mosquitos ~ temperature + rainfall', data).fit()
    parameters = regr_results.params
    predicted = parameters[0] + parameters[1] * data['temperature'] + parameters[2] * data['rainfall']
    plt.figure()
    plt.plot(predicted, data['mosquitos'], 'ro')
    min_mosquitos, max_mosquitos = min(data['mosquitos']), max(data['mosquitos'])
    plt.plot([min_mosquitos, max_mosquitos], [min_mosquitos, max_mosquitos], 'k-')
    return parameters
data = pd.read_csv('data/A2_mosquito_data.csv')
data['temperature'] = fahr_to_celsius(data['temperature'])
regr_results = analyze(data)
print(regr_results)
Now we want to loop over all of the possible data files, and to do that we need to know their names. If we only had a dozen files we could write them all down, but if we have hundreds of files or the filenames change then that won't really work. Fortunately Python has a built-in library called glob to help us find the files we want to work with.
import glob
filenames = glob.glob('data/*.csv')
filenames
The object returned by glob is a list of strings. A list is a Python data type that holds a group of potentially heterogeneous values. That means it can hold pretty much anything, including functions.
mylist = [1, 'a', center]
mylist
In this case all of the values are strings that contain the names of the files matching the expression given to glob, so in this case all of the files with the .csv extension.
Let's restrict the pattern a little more finely, so that we don't accidentally pick up any CSV files we don't want, and print out the filenames one at a time.

filenames = glob.glob('data/*_mosquito_data.csv')
for filename in filenames:
    print(filename)
Modify your code to loop over all of the files in your directory, making an observed-predicted plot for each file and printing the parameters.
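One possible shape for the solution, sketched with two tiny made-up CSV files written to a temporary directory so that it can run anywhere; with the real data you would glob over data/*.csv and call the analyze function from earlier on each frame.

```python
import glob
import os
import tempfile
import pandas as pd

# Write two tiny made-up site files into a throwaway directory
# so this sketch is self-contained
workdir = tempfile.mkdtemp()
for site in ['A1', 'A2']:
    pd.DataFrame({'temperature': [80, 85, 90],
                  'rainfall': [10, 20, 15],
                  'mosquitos': [100, 140, 120]}).to_csv(
        os.path.join(workdir, site + '_mosquito_data.csv'), index=False)

# The looping pattern itself: glob for the files, then process each one
filenames = sorted(glob.glob(os.path.join(workdir, '*.csv')))
for filename in filenames:
    data = pd.read_csv(filename)
    data['temperature'] = (data['temperature'] - 32) * 5 / 9.0
    print(filename, len(data))
    # here you would call analyze(data) and print the returned parameters
```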