This tutorial was contributed by Kevin Tang
Credits: Much of this was copied or inspired by https://github.com/donnemartin/data-science-ipython-notebooks
This tutorial assumes the reader is familiar with Python at a CS 1 level. It introduces NumPy and Matplotlib, as well as Jupyter notebooks, as tools for tackling machine learning problems.
NumPy is a Python package that makes numerical computing easy. We import it as follows:
import numpy as np
The most basic data type is the array. The array is akin to the matrix in Matlab in that it is used for nearly everything in NumPy. As in Matlab, an array can only contain one datatype. We can create arrays as follows:
# Row Vector (or a 1D array)
a = np.array([1, 2, 3])
print(a)
print(a.dtype)
print(a.shape)
# Column vector
b = np.array([[10.],[20.],[30.]])
print(b)
print(b.dtype)
print(b.shape)
Some very useful array creation tools include np.zeros, np.ones, np.full, np.eye, and np.random.random. Look these up and see what they do, or play around with them:
c = np.eye(3)
print(c)
d = np.random.random((3, 3))
print(d)
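For instance, a quick sketch of a few of the other constructors mentioned above:
# 2x3 array of zeros
print(np.zeros((2, 3)))
# 2x3 array of ones
print(np.ones((2, 3)))
# 2x3 array filled with the value 7
print(np.full((2, 3), 7))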
We can index arrays much like we index lists in Python (or matrices in Matlab). Slicing is really handy too. Just remember that Python is 0-indexed, unlike Matlab:
e = np.arange(1, 10).reshape((3, 3))
print(e)
print(e[1, 2])
print(e[0][1])
# slicing
print("Row: ", e[2])
print("Col: ", e[:, 2])
# broadcasting: a has shape (3,) and b has shape (3, 1), so a + b has shape (3, 3)
f = a + b
print(f)
print(f.dtype)
print(f / [2, 5, 10])
print(np.sin(f / 10))
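The broadcasting rule in one line: dimensions of size 1 are stretched to match, so a (3,) array and a (3, 1) array combine into a (3, 3) result. A quick sketch:
# a is (3,), b is (3, 1); broadcasting expands both to (3, 3)
print((a + b).shape)
# a scalar broadcasts against any shape
print(a * 10)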
All arrays function as matrices! A lot of matrix operations are built in:
# matrix multiplication
print(np.dot(e, f))
# matrix transposition
print(np.transpose(e))
# matrix inverse (e is singular, so we invert the random matrix d instead)
print(np.linalg.inv(d))
# eigenvalues and right eigenvectors
eigval, eigvec = np.linalg.eig(e)
print(eigval)
print(eigvec)
# norm
print(np.linalg.norm(e))
# zero "norm" (number of nonzero entries) of the vector a = [1, 2, 3]
print(np.linalg.norm(a, 0))
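One more trick worth knowing: to solve a linear system, np.linalg.solve is usually faster and more numerically stable than multiplying by an explicit inverse. A quick sketch reusing the random matrix d from above (random, so almost surely invertible):
# solve d x = y directly instead of computing inv(d) and multiplying
y = np.array([1., 2., 3.])
x = np.linalg.solve(d, y)
print(x)
# sanity check: d @ x should reproduce y
print(np.dot(d, x))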
We can use NumPy's linear algebra to write common tasks more concisely. For example, suppose we had a table of point pairs and wanted to calculate the slope of the line through each pair:
# generate points
lines = np.random.rand(10, 4) * 10
print("X1, X2, Y1, Y2")
print(lines)
# use matrix multiplication for a linear transform into a 10x2 matrix of differences
transform = np.transpose([[1, -1, 0, 0], [0, 0, 1, -1]])
diffs = np.dot(lines, transform)
print("dx, dy")
print(diffs)
# elementwise division of the dy column by the dx column
slope = diffs[:, 1] / diffs[:, 0]
print("dy/dx")
print(slope)
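We can sanity-check the matrix version by computing the differences directly with slicing; a quick sketch:
# compute dx and dy straight from the columns and compare slopes
dx = lines[:, 0] - lines[:, 1]
dy = lines[:, 2] - lines[:, 3]
print(np.allclose(slope, dy / dx))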
Functions built from these routines work just as well. For example, we can implement ordinary least squares estimation for linear regression:
def OLS(X, Y):
    """Implements ordinary least squares estimation for linear regression"""
    Xt = np.transpose(X)
    return np.dot(np.linalg.inv(np.dot(Xt, X)), np.dot(Xt, Y))
Let's test it out by randomly generating some data!
# generate a 100x2 matrix of random numbers in [0, 10)
X = np.random.rand(100, 2) * 10
# generate Y from the true coefficients [3, 4]
Y = np.dot(X, [[3], [4]])
# add some uniform noise
Y += np.random.rand(100, 1) * .1
print(OLS(X, Y))
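As a sanity check, we can compare against NumPy's built-in least-squares solver, np.linalg.lstsq; the two estimates should agree closely (a quick sketch; rcond=None just silences a compatibility warning):
# lstsq returns (solution, residuals, rank, singular values)
beta, res, rank, sv = np.linalg.lstsq(X, Y, rcond=None)
print(beta)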
We can now plot things! Note the %matplotlib inline magic that makes plots show up inside Jupyter:
%matplotlib inline
import matplotlib.pyplot as plt
x = np.linspace(0, 2, 10)
plt.plot(x, x, 'o-', label='linear')
plt.plot(x, x ** 2, 'x-', label='quadratic')
plt.legend(loc='best')
plt.title('Linear vs Quadratic progression')
plt.xlabel('Input')
plt.ylabel('Output')
plt.show()
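To tie the pieces together, here is a minimal sketch (reusing X, Y, and OLS from the regression example above) that plots predicted values against actual values; a good fit hugs the diagonal:
# visualize the fit: predicted vs. actual values
beta_hat = OLS(X, Y)
Y_pred = np.dot(X, beta_hat)
plt.scatter(Y, Y_pred)
# dashed diagonal marks a perfect fit
plt.plot([Y.min(), Y.max()], [Y.min(), Y.max()], 'r--', label='perfect fit')
plt.xlabel('Actual Y')
plt.ylabel('Predicted Y')
plt.legend(loc='best')
plt.show()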