Machine Learning
Math Essentials
Jeff Howbert
Introduction to Machine Learning
Winter 2012
Areas of math essential to machine learning
- Machine learning is part of both statistics and computer science
- Probability
  - Statistical inference
  - Validation
  - Estimates of error, confidence intervals
- Linear algebra
  - Hugely useful for compact representation of linear transformations on data
  - Dimensionality reduction techniques
- Optimization theory
Why worry about the math?
- There are lots of easy-to-use machine learning packages out there.
- After this course, you will know how to apply several of the most general-purpose algorithms.

HOWEVER

- To get really useful results, you need good mathematical intuitions about certain general machine learning principles, as well as the inner workings of the individual algorithms.
Why worry about the math?
These intuitions will allow you to:
- Choose the right algorithm(s) for the problem
- Make good choices on parameter settings and validation strategies
- Recognize over- or underfitting
- Troubleshoot poor / ambiguous results
- Put appropriate bounds of confidence / uncertainty on results
- Do a better job of coding algorithms or incorporating them into more complex analysis pipelines
Notation
- a ∈ A       set membership: a is member of set A
- | B |       cardinality: number of items in set B
- || v ||     norm: length of vector v
- Σ           summation
- ∫           integral
- ℝ           the set of real numbers
- ℝⁿ          real number space of dimension n
  - n = 2 : plane or 2-space
  - n = 3 : 3- (dimensional) space
  - n > 3 : n-space or hyperspace
Notation
- x, y, z, u, v   vector (bold, lower case)
- A, B, X         matrix (bold, upper case)
- y = f( x )      function (map): assigns unique value in range of y to each value in domain of x
- dy / dx         derivative of y with respect to single variable x
- y = f( x )      function on multiple variables, i.e. a vector of variables; function in n-space
- ∂y / ∂xᵢ        partial derivative of y with respect to element i of vector x
The concept of probability
Intuition:
- In some process, several outcomes are possible. When the process is repeated a large number of times, each outcome occurs with a characteristic relative frequency, or probability. If a particular outcome happens more often than another outcome, we say it is more probable.
The concept of probability
Arises in two contexts:
- In actual repeated experiments.
  - Example: You record the color of 1000 cars driving by. 57 of them are green. You estimate the probability of a car being green as 57 / 1000 = 0.057.
- In idealized conceptions of a repeated process.
  - Example: You consider the behavior of an unbiased six-sided die. The expected probability of rolling a 5 is 1 / 6 = 0.1667.
  - Example: You need a model for how people's heights are distributed. You choose a normal distribution (bell-shaped curve) to represent the expected relative probabilities.
Probability spaces
A probability space is a random process or experiment with three components:
- Ω, the set of possible outcomes O
  - number of possible outcomes = | Ω | = N
- F, the set of possible events E
  - an event comprises 0 to N outcomes
  - number of possible events = | F | = 2^N
- P, the probability distribution
  - function mapping each outcome and event to a real number between 0 and 1 (the probability of O or E)
  - probability of an event is the sum of the probabilities of its possible outcomes
Axioms of probability
1. Non-negativity:
   for any event E ∈ F, p( E ) ≥ 0
2. All possible outcomes:
   p( Ω ) = 1
3. Additivity of disjoint events:
   for all events E, E′ ∈ F where E ∩ E′ = ∅,
   p( E ∪ E′ ) = p( E ) + p( E′ )
Types of probability spaces
Define | Ω | = number of possible outcomes
- Discrete space: | Ω | is finite
  - Analysis involves summations ( Σ )
- Continuous space: | Ω | is infinite
  - Analysis involves integrals ( ∫ )
Example of discrete probability space
Single roll of a six-sided die
- 6 possible outcomes: O = 1, 2, 3, 4, 5, or 6
- 2^6 = 64 possible events
  - example: E = ( O ∈ { 1, 3, 5 } ), i.e. outcome is odd
- If die is fair, then probabilities of outcomes are equal:
  p( 1 ) = p( 2 ) = p( 3 ) = p( 4 ) = p( 5 ) = p( 6 ) = 1 / 6
  - example: probability of event E = ( outcome is odd ) is
    p( 1 ) + p( 3 ) + p( 5 ) = 1 / 2
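A minimal Python sketch of this probability space (nothing beyond the slide's definitions is assumed): each outcome gets probability 1/6, and an event's probability is the sum over the outcomes it contains.

```python
from fractions import Fraction

# Sample space for one roll of a fair six-sided die.
outcomes = [1, 2, 3, 4, 5, 6]
p = {o: Fraction(1, 6) for o in outcomes}   # equal outcome probabilities

# An event is a subset of outcomes; its probability is the sum
# of the probabilities of the outcomes it contains (axiom 3).
E = {1, 3, 5}                               # outcome is odd
print(sum(p[o] for o in E))                 # 1/2
assert sum(p.values()) == 1                 # axiom 2: p( Omega ) = 1
```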
Example of discrete probability space
Three consecutive flips of a coin
- 8 possible outcomes: O = HHH, HHT, HTH, HTT, THH, THT, TTH, TTT
- 2^8 = 256 possible events
  - example: E = ( O ∈ { HHT, HTH, THH } ), i.e. exactly two flips are heads
  - example: E = ( O ∈ { THT, TTT } ), i.e. the first and third flips are tails
- If coin is fair, then probabilities of outcomes are equal:
  p( HHH ) = p( HHT ) = p( HTH ) = p( HTT ) = p( THH ) = p( THT ) = p( TTH ) = p( TTT ) = 1 / 8
  - example: probability of event E = ( exactly two heads ) is
    p( HHT ) + p( HTH ) + p( THH ) = 3 / 8
Example of continuous probability space
Height of a randomly chosen American male
- Infinite number of possible outcomes: O has some single value in range 2 feet to 8 feet
- Infinite number of possible events
  - example: E = ( O | O < 5.5 feet ), i.e. individual chosen is less than 5.5 feet tall
- Probabilities of outcomes are not equal, and are described by a continuous function, p( O )
Example of continuous probability space
Height of a randomly chosen American male
- Probabilities of outcomes O are not equal, and are described by a continuous function, p( O )
- p( O ) is a relative, not an absolute probability
  - p( O ) for any particular O is zero
  - ∫ p( O ) from O = -∞ to ∞ (i.e. area under curve) is 1
  - example: p( O = 5′8″ ) > p( O = 6′2″ )
  - example: p( O < 5′6″ ) = ( ∫ p( O ) from O = -∞ to 5′6″ ) ≈ 0.25
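As an illustrative sketch only: if heights are modeled with a normal distribution, both comparisons above can be computed with scipy. The mean (69 in) and standard deviation (3 in) here are assumptions for illustration, not from the slides, so the tail probability comes out near 0.16 rather than the ≈ 0.25 shown for the slide's curve.

```python
from scipy.stats import norm

# Hypothetical model: height ~ Normal(mean=69 in, sd=3 in).
# These parameters are assumptions, not taken from the slides.
height = norm(loc=69, scale=3)

# Relative (density) comparison: 5'8" (68 in) vs 6'2" (74 in).
print(height.pdf(68) > height.pdf(74))      # True

# p( O < 5'6" ): area under the pdf from -inf up to 66 in.
print(height.cdf(66))                       # ~0.16 under these parameters
```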
Probability distributions
- Discrete: probability mass function (pmf)
  - example: sum of two fair dice
- Continuous: probability density function (pdf)
  - example: waiting time between eruptions of Old Faithful (minutes)

[Figures: pmf of the sum of two fair dice; pdf of Old Faithful inter-eruption waiting times, with probability on the vertical axis]
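A short sketch of how the first pmf arises: enumerate the 36 equally likely outcomes of two fair dice and tally each sum.

```python
from collections import Counter
from itertools import product

# pmf of the sum of two fair dice: tally all 36 equally likely outcomes.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
pmf = {total: n / 36 for total, n in sorted(counts.items())}
print(pmf[7])              # 0.1667 -- 7 is the most probable sum
print(sum(pmf.values()))   # 1.0
```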
Random variables
- A random variable X is a function that associates a number x with each outcome O of a process
  - Common notation: X( O ) = x, or just X = x
- Basically a way to redefine (usually simplify) a probability space to a new probability space
  - X must obey axioms of probability (over the possible values of x)
  - X can be discrete or continuous
- Example: X = number of heads in three flips of a coin
  - Possible values of X are 0, 1, 2, 3
  - p( X = 0 ) = p( X = 3 ) = 1 / 8
  - p( X = 1 ) = p( X = 2 ) = 3 / 8
  - Size of space (number of outcomes) reduced from 8 to 4
- Example: X = average height of five randomly chosen American men
  - Size of space unchanged (X can range from 2 feet to 8 feet), but pdf of X different than for single man
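A sketch of the coin-flip example: the random variable maps each of the 8 outcomes to a number, and the induced pmf is obtained by summing outcome probabilities.

```python
from collections import Counter
from itertools import product

# Outcome space: 8 equally likely sequences of three coin flips.
outcomes = list(product("HT", repeat=3))

# Random variable X: number of heads in the outcome.
def X(o):
    return o.count("H")

# Induced distribution: p( X = x ) sums the probabilities (1/8 each)
# of the outcomes that map to x.
pmf = {x: n / 8 for x, n in sorted(Counter(X(o) for o in outcomes).items())}
print(pmf)   # {0: 0.125, 1: 0.375, 2: 0.375, 3: 0.125}
```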
Multivariate probability distributions
Scenario:
- Several random processes occur (doesn't matter whether in parallel or in sequence)
- Want to know probabilities for each possible combination of outcomes
- Can describe as joint probability of several random variables
  - Example: two processes whose outcomes are represented by random variables X and Y. Probability that process X has outcome x and process Y has outcome y is denoted as:
    p( X = x, Y = y )
Example of multivariate distribution
joint probability: p( X = minivan, Y = European ) = 0.1481
Multivariate probability distributions
- Marginal probability
  - Probability distribution of a single variable in a joint distribution
  - Example: two random variables X and Y:
    p( X = x ) = Σb p( X = x, Y = b ), summed over all values b of Y
- Conditional probability
  - Probability distribution of one variable given that another variable takes a certain value
  - Example: two random variables X and Y:
    p( X = x | Y = y ) = p( X = x, Y = y ) / p( Y = y )
Example of marginal probability
marginal probability: p( X = minivan ) = 0.0741 + 0.1111 + 0.1481 = 0.3333
Example of conditional probability
conditional probability: p( Y = European | X = minivan ) =
0.1481 / ( 0.0741 + 0.1111 + 0.1481 ) = 0.4443
[Bar chart: joint probability by X = model type (sedan, minivan, SUV, sport) and Y = manufacturer (American, Asian, European); vertical axis is probability, 0 to 0.2]
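A sketch of both computations, using the minivan column of the joint table (values as given on the slides):

```python
# Joint probabilities p( X = minivan, Y = y ), from the slide's table.
joint_minivan = {"American": 0.0741, "Asian": 0.1111, "European": 0.1481}

# Marginal probability: sum the joint over all values of Y.
p_minivan = sum(joint_minivan.values())                     # 0.3333

# Conditional probability: joint / marginal of the conditioning value.
p_euro_given_minivan = joint_minivan["European"] / p_minivan
print(round(p_minivan, 4), round(p_euro_given_minivan, 4))  # 0.3333 0.4443
```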
Continuous multivariate distribution
- Same concepts of joint, marginal, and conditional probabilities apply (except use integrals)
- Example: three-component Gaussian mixture in two dimensions
Expected value
Given:
- A discrete random variable X, with possible values x = x₁, x₂, … xₙ
- Probabilities p( X = xᵢ ) that X takes on the various values of xᵢ
- A function yᵢ = f( xᵢ ) defined on X

The expected value of f is the probability-weighted average value of f( xᵢ ):
E( f ) = Σᵢ p( xᵢ ) f( xᵢ )
Example of expected value
- Process: game where one card is drawn from the deck
  - If face card, dealer pays you $10
  - If not a face card, you pay dealer $4
- Random variable X = { face card, not face card }
  - p( face card ) = 3/13
  - p( not face card ) = 10/13
- Function f( X ) is payout to you
  - f( face card ) = 10
  - f( not face card ) = -4
- Expected value of payout is:
  E( f ) = Σᵢ p( xᵢ ) f( xᵢ ) = 3/13 × 10 + 10/13 × (-4) = -0.77
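The same calculation as a short sketch in Python, with exact fractions:

```python
from fractions import Fraction

# Probabilities and payouts for the two values of X.
p = {"face": Fraction(3, 13), "not face": Fraction(10, 13)}
f = {"face": 10, "not face": -4}

# Probability-weighted average of the payouts.
E = sum(p[x] * f[x] for x in p)
print(E, float(E))   # -10/13, approximately -0.77
```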
Expected value in continuous spaces
$$E(f) = \int_{x=a}^{b} p(x)\,f(x)\,dx$$
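A quick numerical check of this formula (a sketch, using a standard normal pdf and f(x) = x², whose expected value is the variance, 1):

```python
from scipy.integrate import quad
from scipy.stats import norm

# E( f ) = integral of p( x ) f( x ) dx, for f( x ) = x**2 under a
# standard normal pdf; the exact answer is 1 (the variance).
value, _ = quad(lambda x: norm.pdf(x) * x**2, -float("inf"), float("inf"))
print(value)   # ~1.0
```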
Common forms of expected value (1)
- Mean (μ)
  - f( xᵢ ) = xᵢ
  - μ = E( f ) = Σᵢ p( xᵢ ) xᵢ
  - Average value of X = xᵢ, taking into account probability of the various xᵢ
  - Most common measure of center of a distribution
  - Compare to formula for mean of an actual sample:
    $$\mu = \frac{1}{N} \sum_{i=1}^{N} x_i$$
Common forms of expected value (2)
- Variance (σ²)
  - f( xᵢ ) = ( xᵢ - μ )²
  - σ² = Σᵢ p( xᵢ ) ( xᵢ - μ )²
  - Average value of squared deviation of X = xᵢ from mean μ, taking into account probability of the various xᵢ
  - Most common measure of spread of a distribution
  - σ is the standard deviation
  - Compare to formula for variance of an actual sample:
    $$\sigma^2 = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \mu)^2$$
Common forms of expected value (3)
- Covariance
  - f( xᵢ ) = ( xᵢ - μx ), g( yᵢ ) = ( yᵢ - μy )
  - cov( x, y ) = Σᵢ p( xᵢ, yᵢ ) ( xᵢ - μx ) ( yᵢ - μy )
  - Measures tendency for x and y to deviate from their means in same (or opposite) directions at same time
  - Compare to formula for covariance of actual samples:
    $$\mathrm{cov}(x, y) = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \mu_x)(y_i - \mu_y)$$

[Figures: scatter plots showing high (positive) covariance and no covariance]
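A sketch contrasting these sample formulas, using numpy (which also uses the 1/(N-1) convention, via ddof=1 for variance and by default for np.cov):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 2 * x + rng.normal(size=1000)    # y tends to deviate along with x

print(x.mean())                      # sample mean
print(x.var(ddof=1))                 # sample variance, 1/(N-1) convention
print(np.cov(x, y)[0, 1])            # sample covariance (1/(N-1)); positive here
```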
Correlation
- Pearson's correlation coefficient is covariance normalized by the standard deviations of the two variables:
  corr( x, y ) = cov( x, y ) / ( σx σy )
  - Always lies in range -1 to 1
  - Only reflects linear dependence between variables

[Figures: scatter plots showing linear dependence with noise, linear dependence without noise, and various nonlinear dependencies]
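A sketch of the last point: a strong but nonlinear dependence can still have near-zero Pearson correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 10_000)

# Linear dependence with noise: correlation near +1.
print(np.corrcoef(x, 3 * x + 0.1 * rng.normal(size=x.size))[0, 1])

# y = x**2 depends completely on x, but not linearly: correlation near 0.
print(np.corrcoef(x, x**2)[0, 1])
```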
Complement rule
Given: event A, which can occur or not
p( not A ) = 1 - p( A )
[Venn diagram: regions A and not A; areas represent relative probabilities]
Product rule
Given: events A and B, which can co-occur (or not)
p( A, B ) = p( A | B ) p( B )
(same expression given previously to define conditional probability)
[Venn diagram: regions (A, B), (A, not B), (not A, B), (not A, not B); areas represent relative probabilities]
Example of product rule
Probability that a man has white hair (event A) and is over 65 (event B):
- p( B ) = 0.18
- p( A | B ) = 0.78
- p( A, B ) = p( A | B ) p( B ) = 0.78 × 0.18 = 0.14
Rule of total probability
Given: events A and B, which can co-occur (or not)
p( A ) = p( A, B ) + p( A, not B )
(same expression given previously to define marginal probability)
[Venn diagram: regions (A, B), (A, not B), (not A, B), (not A, not B); areas represent relative probabilities]
Independence
Given: events A and B, which can co-occur (or not)
p( A | B ) = p( A )
or
p( A, B ) = p( A ) p( B )

[Venn diagram: regions (A, B), (A, not B), (not A, B), (not A, not B); areas represent relative probabilities]
Examples of independence / dependence
- Independence:
  - Outcomes on multiple rolls of a die
  - Outcomes on multiple flips of a coin
  - Height of two unrelated individuals
  - Probability of getting a king on successive draws from a deck, if card from each draw is replaced
- Dependence:
  - Height of two related individuals
  - Duration of successive eruptions of Old Faithful
  - Probability of getting a king on successive draws from a deck, if card from each draw is not replaced
Example of independence vs. dependence
- Independence: All manufacturers have identical product mix. p( X = x | Y = y ) = p( X = x ).
- Dependence: American manufacturers love SUVs, European manufacturers don't.
Bayes' rule
A way to find conditional probabilities for one variable when
conditional probabilities for another variable are known.
p( B | A ) = p( A | B ) p( B ) / p( A )
where p( A ) = p( A, B ) + p( A, not B )
[Venn diagram: regions (A, B), (A, not B), (not A, B), (not A, not B)]
Bayes' rule
p( B | A ) = p( A | B ) p( B ) / p( A )

- posterior probability: p( B | A )
- likelihood: p( A | B )
- prior probability: p( B )

[Venn diagram: regions (A, B), (A, not B), (not A, B), (not A, not B)]
Example of Bayes' rule
Marie is getting married tomorrow at an outdoor ceremony in the desert. In recent years, it has rained only 5 days each year. Unfortunately, the weatherman is forecasting rain for tomorrow. When it actually rains, the weatherman has forecast rain 90% of the time. When it doesn't rain, he has forecast rain 10% of the time. What is the probability it will rain on the day of Marie's wedding?

- Event A: The weatherman has forecast rain.
- Event B: It rains.
- We know:
  - p( B ) = 5 / 365 = 0.0137 [ It rains 5 days out of the year. ]
  - p( not B ) = 360 / 365 = 0.9863
  - p( A | B ) = 0.9 [ When it rains, the weatherman has forecast rain 90% of the time. ]
  - p( A | not B ) = 0.1 [ When it does not rain, the weatherman has forecast rain 10% of the time. ]
Example of Bayes' rule, cont'd.
We want to know p( B | A ), the probability it will rain on the day of Marie's wedding, given a forecast for rain by the weatherman. The answer can be determined from Bayes' rule:

1. p( B | A ) = p( A | B ) p( B ) / p( A )
2. p( A ) = p( A | B ) p( B ) + p( A | not B ) p( not B ) =
   ( 0.9 )( 0.0137 ) + ( 0.1 )( 0.9863 ) = 0.111
3. p( B | A ) = ( 0.9 )( 0.0137 ) / 0.111 = 0.111

The result seems unintuitive but is correct. Even when the weatherman predicts rain, it rains only about 11% of the time. Despite the weatherman's gloomy prediction, it is unlikely Marie will get rained on at her wedding.
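The same arithmetic as a short sketch:

```python
# Numbers from the wedding example.
p_rain = 5 / 365                 # prior p( B )
p_fc_given_rain = 0.9            # likelihood p( A | B )
p_fc_given_dry = 0.1             # p( A | not B )

# Evidence p( A ) by the rule of total probability.
p_fc = p_fc_given_rain * p_rain + p_fc_given_dry * (1 - p_rain)

# Bayes' rule: posterior = likelihood * prior / evidence.
print(p_fc_given_rain * p_rain / p_fc)   # ~0.111
```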
Probabilities: when to add, when to multiply
- ADD: When you want to allow for occurrence of any of several possible outcomes of a single process. Comparable to logical OR.
- MULTIPLY: When you want to allow for simultaneous occurrence of particular outcomes from more than one process. Comparable to logical AND.
  - But only if the processes are independent.
Linear algebra applications
1) Operations on or between vectors and matrices
2) Coordinate transformations
3) Dimensionality reduction
4) Linear regression
5) Solution of linear systems of equations
6) Many others

Applications 1) - 4) are directly relevant to this course. Today we'll start with 1).
Why vectors and matrices?
- Most common form of data organization for machine learning is a 2D array, where
  - rows represent samples (records, items, datapoints)
  - columns represent attributes (features, variables)
- Natural to think of each sample as a vector of attributes, and the whole array as a matrix
Refund   Marital Status   Taxable Income   Cheat
Yes      Single           125K             No
No       Married          100K             No
No       Single           70K              No
Yes      Married          120K             No
No       Divorced         95K              Yes
No       Married          60K              No
Yes      Divorced         220K             No
No       Single           85K              Yes
No       Married          75K              No
No       Single           90K              Yes

(each row is a vector; the whole array is a matrix)
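A minimal numpy sketch of the same organization (toy numeric values only; in practice categorical attributes like Marital Status would first be encoded numerically):

```python
import numpy as np

# Rows are samples, columns are attributes (here: refund flag as 0/1,
# taxable income). Values are for illustration only.
X = np.array([[1, 125_000],
              [0, 100_000],
              [0,  70_000]])

print(X.shape)    # (3, 2): 3 samples x 2 attributes
print(X[0])       # one row = one sample (a vector of attributes)
print(X[:, 1])    # one column = one attribute across all samples
```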
Vectors
- Definition: an n-tuple of values (usually real numbers).
  - n referred to as the dimension of the vector
  - n can be any positive integer, from 1 to infinity
- Can be written in column form or row form
  - Column form is conventional
  - Vector elements referenced by subscript

$$\mathbf{x} = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \qquad \mathbf{x}^{\mathsf{T}} = ( x_1 \cdots x_n ) \qquad {}^{\mathsf{T}}\ \text{means "transpose"}$$
Vectors
Can think of a vector as:
- a point in space, or
- a directed line segment with a magnitude and direction
Vector arithmetic
- Addition of two vectors
  - add corresponding elements: z = x + y = ( x₁ + y₁ ⋯ xₙ + yₙ )
  - result is a vector
- Scalar multiplication of a vector
  - multiply each element by scalar: y = a x = ( a x₁ ⋯ a xₙ )
  - result is a vector
Vector arithmetic
- Dot product of two vectors
  - multiply corresponding elements, then add products:
    $$a = \mathbf{x} \cdot \mathbf{y} = \sum_{i=1}^{n} x_i y_i$$
  - result is a scalar
- Dot product alternative form:
  a = x · y = || x || || y || cos( θ )
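A sketch of all three operations with numpy; both dot-product forms agree.

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([2.0, 0.0, 1.0])

print(x + y)          # vector addition: elementwise
print(3 * x)          # scalar multiplication: elementwise
print(np.dot(x, y))   # dot product: sum of elementwise products -> 4.0

# Alternative form: x . y = ||x|| ||y|| cos( theta )
cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
print(np.linalg.norm(x) * np.linalg.norm(y) * cos_theta)   # 4.0 again
```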
Matrices
- Definition: an m × n two-dimensional array of values (usually real numbers).
  - m rows
  - n columns
- Matrix referenced by two-element subscript
  - first element in subscript is row, second element in subscript is column
  - example: A₂₄ or a₂₄ is element in second row, fourth column of A

$$A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}$$
Matrices
- A vector can be regarded as special case of a matrix, where one of matrix dimensions = 1.
- Matrix transpose (denoted ᵀ)
  - swap columns and rows
  - row 1 becomes column 1, etc.
  - m × n matrix becomes n × m matrix
  - example:

$$A = \begin{pmatrix} 2 & 7 & 1 & 0 & 3 \\ 4 & 6 & 3 & 1 & 8 \end{pmatrix} \qquad A^{\mathsf{T}} = \begin{pmatrix} 2 & 4 \\ 7 & 6 \\ 1 & 3 \\ 0 & 1 \\ 3 & 8 \end{pmatrix}$$
Matrix arithmetic
- Addition of two matrices
  - matrices must be same size
  - add corresponding elements: cᵢⱼ = aᵢⱼ + bᵢⱼ
  - result is a matrix of same size:

$$C = A + B = \begin{pmatrix} a_{11} + b_{11} & \cdots & a_{1n} + b_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} + b_{m1} & \cdots & a_{mn} + b_{mn} \end{pmatrix}$$

- Scalar multiplication of a matrix
  - multiply each element by scalar: bᵢⱼ = d · aᵢⱼ
  - result is a matrix of same size:

$$B = d \cdot A = \begin{pmatrix} d\,a_{11} & \cdots & d\,a_{1n} \\ \vdots & \ddots & \vdots \\ d\,a_{m1} & \cdots & d\,a_{mn} \end{pmatrix}$$
Matrix arithmetic
- Matrix-matrix multiplication
  - vector-matrix multiplication just a special case
  - TO THE BOARD!!
- Multiplication is associative: A ( B C ) = ( A B ) C
- Multiplication is not commutative: A B ≠ B A (generally)
- Transposition rule: ( A B )ᵀ = Bᵀ Aᵀ
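A numerical sketch of these three properties, using random matrices and numpy's @ operator:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 5))
B = rng.normal(size=(5, 5))
C = rng.normal(size=(5, 2))

print(np.allclose(A @ (B @ C), (A @ B) @ C))   # associative: True
print(np.allclose(B @ B.T, B.T @ B))           # commutative? generally False
print(np.allclose((A @ B).T, B.T @ A.T))       # transposition rule: True
```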
Matrix arithmetic
- RULE: In any chain of matrix multiplications, the column dimension of one matrix in the chain must match the row dimension of the following matrix in the chain.
- Examples, with A 3×5, B 5×5, C 3×1:
  - Right: A B Aᵀ;  Cᵀ A B;  Aᵀ A B;  C Cᵀ A
  - Wrong: A B A;  C A B;  A Aᵀ B;  Cᵀ C A
Vector projection
- Orthogonal projection of y onto x
  - Can take place in any space of dimensionality ≥ 2
  - Unit vector in direction of x is x / || x ||
  - Length of projection of y in direction of x is || y || cos( θ )
  - Orthogonal projection of y onto x is the vector
    projₓ( y ) = x || y || cos( θ ) / || x || = [ ( x · y ) / || x ||² ] x
    (using dot product alternative form)

[Figure: vectors x and y with angle θ between them; projₓ( y ) lies along x]
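A sketch of the projection formula; the residual y - projₓ( y ) should be orthogonal to x.

```python
import numpy as np

def project(y, x):
    """Orthogonal projection of y onto x: [ ( x . y ) / ||x||**2 ] x."""
    return (np.dot(x, y) / np.dot(x, x)) * x

x = np.array([3.0, 0.0])
y = np.array([2.0, 2.0])
p = project(y, x)
print(p)                 # [2. 0.]
print(np.dot(y - p, x))  # 0.0: residual is orthogonal to x
```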
Optimization theory topics
- Maximum likelihood
- Expectation maximization
- Gradient descent