
PATTERN RECOGNITION

AND MACHINE LEARNING

CHAPTER 3: LINEAR MODELS FOR REGRESSION


Learning Objectives
1. How do we perform linear regression using basis functions?
2. What are the relationships between maximum likelihood and least squares, between maximum a posteriori estimation and regularization, and among expected loss, bias, variance, and noise?
3. What are the common regularization methods for regression?
4. How do we perform Bayesian linear regression?
5. What is the kernel for regression?
6. How do we choose the model complexity?
7. What are the evidence approximation and evidence maximization?
Bayesian Machine Learning
Process of Machine Learning:

p |training data, model  p(training data | model, ) p0 |model


posterior likelihood prior

Process of Prediction:

$$p(\text{testing data} \mid \text{training data}, \text{model}) = \int p(\text{testing data} \mid \text{model}, \theta)\; p(\theta \mid \text{training data}, \text{model})\; d\theta$$

Process of Model Evaluation: for hyperparameter tuning

$$p(\text{training data} \mid \text{model}) = \int p(\text{training data} \mid \text{model}, \theta)\; p_0(\theta \mid \text{model})\; d\theta$$
Bayesian Learning for a Linear Gaussian System (LGS)
Given y = Ax + v with
$$p(x) = \mathcal{N}(x \mid \mu, \Sigma), \qquad p(v) = \mathcal{N}(v \mid 0, Q),$$
and writing the posterior as x = m + u with
$$p(x \mid y) = \mathcal{N}(x \mid m, L), \qquad p(u) = \mathcal{N}(u \mid 0, L),$$

we have
$$L^{-1} = A^{T} Q^{-1} A + \Sigma^{-1}, \qquad m = L\{A^{T} Q^{-1} y + \Sigma^{-1}\mu\}.$$
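A minimal NumPy sketch of this update (the function and variable names are illustrative, not from the slides):

```python
import numpy as np

def lgs_posterior(A, Q, mu, Sigma, y):
    """Posterior p(x | y) = N(x | m, L) for y = A x + v,
    with prior p(x) = N(x | mu, Sigma) and noise p(v) = N(v | 0, Q)."""
    Q_inv = np.linalg.inv(Q)
    Sigma_inv = np.linalg.inv(Sigma)
    L_inv = A.T @ Q_inv @ A + Sigma_inv            # posterior precision
    L = np.linalg.inv(L_inv)                       # posterior covariance
    m = L @ (A.T @ Q_inv @ y + Sigma_inv @ mu)     # posterior mean
    return m, L
```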
Bayesian Prediction for LGS
Given y = Ax + v with
$$p(x) = \mathcal{N}(x \mid \mu, \Sigma), \qquad p(v) = \mathcal{N}(v \mid 0, Q),$$
and posterior x = m + u with
$$p(x \mid y) = \mathcal{N}(x \mid m, L), \qquad p(u) = \mathcal{N}(u \mid 0, L),$$

we have
$$p(y \mid x) = \mathcal{N}(y \mid Ax, Q),$$
$$p(y') = \int p(y' \mid x)\, p(x \mid y)\, dx = \mathcal{N}(y' \mid Am,\; A L A^{T} + Q).$$
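Building on the posterior sketch above, the predictive distribution is a short computation (again illustrative):

```python
def lgs_predictive(A, Q, m, L):
    """Predictive p(y') = N(y' | A m, A L A^T + Q), given the posterior N(x | m, L)."""
    return A @ m, A @ L @ A.T + Q
```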


Bayesian Model Evaluation for LGS
Given y = Ax + v with
$$p(x) = \mathcal{N}(x \mid \mu, \Sigma), \qquad p(v) = \mathcal{N}(v \mid 0, Q),$$

we have
$$p(y \mid x) = \mathcal{N}(y \mid Ax, Q),$$
$$p(y) = \int p(y \mid x)\, p(x)\, dx = \mathcal{N}(y \mid A\mu,\; A \Sigma A^{T} + Q).$$
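A matching sketch for the marginal likelihood (evidence), assuming SciPy is available; names are illustrative:

```python
from scipy.stats import multivariate_normal

def lgs_log_evidence(A, Q, mu, Sigma, y):
    """Log marginal likelihood log p(y), with p(y) = N(y | A mu, A Sigma A^T + Q)."""
    return multivariate_normal(mean=A @ mu, cov=A @ Sigma @ A.T + Q).logpdf(y)
```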


Outlines

• Linear Basis Function Models
• Maximum Likelihood and Least Squares
• Bias-Variance Decomposition
• Bayesian Linear Regression
• Predictive Distribution
• Bayesian Model Comparison
• Evidence Approximation and Maximization
Linear Basis Function Models (1)
Example: Polynomial Curve Fitting
Linear Basis Function Models (2)
• Generally,
$$y(x, w) = \sum_{j=0}^{M-1} w_j \phi_j(x) = w^{T}\phi(x),$$
where the φ_j(x) are known as basis functions.
• Typically, φ_0(x) = 1, so that w_0 acts as a bias.
• In the simplest case, we use linear basis functions: φ_d(x) = x_d.
Linear Basis Function Models (3)
Polynomial basis functions:
$$\phi_j(x) = x^{j}.$$
These are global: a small change in x affects all basis functions.
Linear Basis Function Models (4)
Gaussian basis functions:
$$\phi_j(x) = \exp\!\left(-\frac{(x - \mu_j)^2}{2 s^2}\right).$$
These are local: a small change in x only affects nearby basis functions. μ_j and s control location and scale (width).
Linear Basis Function Models (5)
Sigmoidal basis functions:
$$\phi_j(x) = \sigma\!\left(\frac{x - \mu_j}{s}\right), \qquad \text{where} \qquad \sigma(a) = \frac{1}{1 + \exp(-a)}.$$
These too are local: a small change in x only affects nearby basis functions. μ_j and s control location and scale (slope).
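A minimal NumPy sketch of these three basis-function choices, each returning an N × M design matrix Φ (the helper names and the prepended bias column are my own convention):

```python
import numpy as np

def polynomial_basis(x, degree):
    """phi_j(x) = x**j for j = 0..degree; column 0 is the constant (bias) basis."""
    return np.vander(x, degree + 1, increasing=True)

def gaussian_basis(x, centers, s):
    """phi_j(x) = exp(-(x - mu_j)**2 / (2 s**2)), with a bias column prepended."""
    phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * s ** 2))
    return np.hstack([np.ones((len(x), 1)), phi])

def sigmoidal_basis(x, centers, s):
    """phi_j(x) = sigmoid((x - mu_j) / s), with a bias column prepended."""
    phi = 1.0 / (1.0 + np.exp(-(x[:, None] - centers[None, :]) / s))
    return np.hstack([np.ones((len(x), 1)), phi])
```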
Outlines

• Linear Basis Function Models
• Maximum Likelihood and Least Squares
• Bias-Variance Decomposition
• Bayesian Linear Regression
• Predictive Distribution
• Bayesian Model Comparison
• Evidence Approximation and Maximization
Maximum Likelihood and Least Squares (1)

• Assume observations come from a deterministic function with added Gaussian noise:
$$t = y(x, w) + \epsilon, \qquad \text{where} \qquad p(\epsilon \mid \beta) = \mathcal{N}(\epsilon \mid 0, \beta^{-1}),$$
which is the same as saying
$$p(t \mid x, w, \beta) = \mathcal{N}(t \mid y(x, w), \beta^{-1}).$$
• Given observed inputs X = {x_1, …, x_N} and targets t = [t_1, …, t_N]^T, we obtain the likelihood function
$$p(\mathbf{t} \mid X, w, \beta) = \prod_{n=1}^{N} \mathcal{N}\!\left(t_n \mid w^{T}\phi(x_n),\, \beta^{-1}\right).$$
Maximum Likelihood and Least Squares (2)

Taking the logarithm, we get
$$\ln p(\mathbf{t} \mid w, \beta) = \frac{N}{2}\ln\beta - \frac{N}{2}\ln(2\pi) - \beta E_D(w),$$
where
$$E_D(w) = \frac{1}{2}\sum_{n=1}^{N}\left\{t_n - w^{T}\phi(x_n)\right\}^2$$
is the sum-of-squares error.


Maximum Likelihood and Least Squares (3)

Computing the gradient and setting it to zero yields
$$0 = \sum_{n=1}^{N} t_n \phi(x_n)^{T} - w^{T}\left(\sum_{n=1}^{N}\phi(x_n)\phi(x_n)^{T}\right).$$
Solving for w, we get
$$w_{\mathrm{ML}} = \left(\Phi^{T}\Phi\right)^{-1}\Phi^{T}\mathbf{t} = \Phi^{\dagger}\mathbf{t},$$
where Φ† is the Moore-Penrose pseudo-inverse of the N × M design matrix Φ with elements Φ_nj = φ_j(x_n).
(Roger Penrose: 2020 Nobel Prize Laureate in Physics.)
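A sketch of the closed-form solution, reusing the polynomial_basis helper above on an assumed sinusoidal toy problem:

```python
import numpy as np

def fit_ml(Phi, t):
    """Maximum-likelihood (least-squares) weights: w_ML = pinv(Phi) @ t."""
    return np.linalg.pinv(Phi) @ t

# Hypothetical usage: fit a degree-3 polynomial to noisy sinusoidal data.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
t = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(x.shape)
Phi = polynomial_basis(x, degree=3)
w_ml = fit_ml(Phi, t)
```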
Geometry of Least Squares
Consider the target vector t as a point in an N-dimensional space, and let S be the M-dimensional subspace spanned by the columns φ_1, …, φ_M of the design matrix Φ. The least-squares solution w_ML makes y = Φ w_ML the orthogonal projection of t onto S; that is, it minimizes the distance between t and any vector in S.
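Continuing the toy example above, this projection picture can be checked numerically: the residual is orthogonal to every column of Φ.

```python
residual = t - Phi @ w_ml
print(Phi.T @ residual)   # approximately a zero vector, so t - y is orthogonal to S
```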
Sequential Learning
• Data items are considered one at a time (a.k.a. online learning); use stochastic (sequential) gradient descent:
$$w^{(\tau+1)} = w^{(\tau)} + \eta\left(t_n - {w^{(\tau)}}^{T}\phi(x_n)\right)\phi(x_n).$$
• This is known as the least-mean-squares (LMS) algorithm. Issue: how to choose the learning rate η?
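A minimal LMS sketch in the same NumPy setting (the step size eta and the epoch count are arbitrary choices):

```python
import numpy as np

def lms(Phi, t, eta=0.05, n_epochs=100):
    """Least-mean-squares: one stochastic gradient step per data point."""
    w = np.zeros(Phi.shape[1])
    for _ in range(n_epochs):
        for phi_n, t_n in zip(Phi, t):
            w += eta * (t_n - w @ phi_n) * phi_n   # w <- w + eta * (t_n - w^T phi_n) * phi_n
    return w
```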
Regularized Least Squares (1)
• Consider the error function: data term + regularization term,
$$E_D(w) + \lambda E_W(w),$$
where λ is called the regularization coefficient.
• With the sum-of-squares error function and a quadratic regularizer, we get
$$\frac{1}{2}\sum_{n=1}^{N}\left\{t_n - w^{T}\phi(x_n)\right\}^2 + \frac{\lambda}{2} w^{T} w,$$
which is minimized by
$$w = \left(\lambda I + \Phi^{T}\Phi\right)^{-1}\Phi^{T}\mathbf{t}.$$
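A sketch of this regularized closed form (often called ridge regression); lam stands for the regularization coefficient λ:

```python
import numpy as np

def fit_ridge(Phi, t, lam):
    """Regularized least squares: w = (lam * I + Phi^T Phi)^{-1} Phi^T t."""
    M = Phi.shape[1]
    return np.linalg.solve(lam * np.eye(M) + Phi.T @ Phi, Phi.T @ t)
```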
Regularized Least Squares (2)
With a more general regularizer, we have
$$\frac{1}{2}\sum_{n=1}^{N}\left\{t_n - w^{T}\phi(x_n)\right\}^2 + \frac{\lambda}{2}\sum_{j=1}^{M} |w_j|^{q},$$
where q = 1 gives the lasso and q = 2 the quadratic regularizer.
Regularized Least Squares (3)
Lasso tends to generate sparser solutions than a
quadratic regularizer.
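One way to see this sparsity is to solve the lasso with cyclic coordinate descent, where each update soft-thresholds a coefficient and can set it exactly to zero. This is a rough sketch under those assumptions, not the slides' method:

```python
import numpy as np

def soft_threshold(z, gamma):
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def fit_lasso(Phi, t, lam, n_iters=200):
    """Lasso via coordinate descent on 1/2 ||t - Phi w||^2 + lam * ||w||_1."""
    w = np.zeros(Phi.shape[1])
    col_sq = (Phi ** 2).sum(axis=0)                  # Phi_j^T Phi_j for each column j
    for _ in range(n_iters):
        for j in range(w.size):
            r_j = t - Phi @ w + Phi[:, j] * w[j]     # residual with coordinate j removed
            w[j] = soft_threshold(Phi[:, j] @ r_j, lam) / col_sq[j]
    return w
```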
Multiple Outputs (1)
Analogously to the single-output case we have
$$y(x, W) = W^{T}\phi(x), \qquad p(\mathbf{t} \mid x, W, \beta) = \mathcal{N}\!\left(\mathbf{t} \mid W^{T}\phi(x),\, \beta^{-1} I\right).$$
Given observed inputs X = {x_1, …, x_N} and targets T = [t_1, …, t_N]^T, we obtain the log-likelihood function
$$\ln p(T \mid X, W, \beta) = \frac{NK}{2}\ln\!\left(\frac{\beta}{2\pi}\right) - \frac{\beta}{2}\sum_{n=1}^{N}\left\| \mathbf{t}_n - W^{T}\phi(x_n) \right\|^2.$$
Multiple Outputs (2)
• Maximizing with respect to W, we obtain
$$W_{\mathrm{ML}} = \left(\Phi^{T}\Phi\right)^{-1}\Phi^{T} T.$$
• If we consider a single target variable, t_k, we see that
$$w_k = \left(\Phi^{T}\Phi\right)^{-1}\Phi^{T}\mathbf{t}_k = \Phi^{\dagger}\mathbf{t}_k,$$
where t_k = [t_1k, …, t_Nk]^T, which is identical to the single-output case.
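In code, the multi-output solution is the single-output one applied column-wise (a sketch in the same NumPy setting, with T of shape N × K):

```python
import numpy as np

def fit_ml_multi(Phi, T):
    """W_ML = pinv(Phi) @ T; each column of W solves an independent least-squares problem."""
    return np.linalg.pinv(Phi) @ T
```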
Outlines

• Linear Basis Function Models
• Maximum Likelihood and Least Squares
• Bias-Variance Decomposition
• Bayesian Linear Regression
• Predictive Distribution
• Bayesian Model Comparison
• Evidence Approximation and Maximization
The Expected Squared Loss Function
$$\mathbb{E}[L] = \int \left\{ y(x) - h(x) \right\}^2 p(x)\, dx + \iint \left\{ h(x) - t \right\}^2 p(x, t)\, dx\, dt,$$
where y(x) is the predictor, h(x) = E[t | x] is the optimal predictor (the ground-truth regression function), and the second term is the intrinsic noise.

https://stats.stackexchange.com/questions/228561/loss-functions-for-regression-proof
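A quick Monte Carlo illustration under an assumed sinusoidal model: the conditional mean h(x) attains the noise floor, and any other predictor pays an extra penalty equal to the first term above.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 100_000)
t = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.shape)   # noise std 0.3

h = np.sin(2 * np.pi * x)           # optimal predictor h(x) = E[t | x]
y = 0.9 * np.sin(2 * np.pi * x)     # some other predictor

loss_h = np.mean((h - t) ** 2)      # ~0.09, the intrinsic noise term
loss_y = np.mean((y - t) ** 2)      # ~0.09 + E[(y - h)^2]
print(loss_h, loss_y, np.mean((y - h) ** 2))
```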
The Bias-Variance Decomposition (1)
• Recall the expected squared loss,
$$\mathbb{E}[L] = \int \left\{ y(x) - h(x) \right\}^2 p(x)\, dx + \iint \left\{ h(x) - t \right\}^2 p(x, t)\, dx\, dt,$$
where
$$h(x) = \mathbb{E}[t \mid x] = \int t\, p(t \mid x)\, dt.$$
• The second term of E[L] corresponds to the noise inherent in the random variable t.
• What about the first term?
The Bias-Variance Decomposition (2)
• Suppose we were given multiple data sets, each of size N. Any particular data set, D, will give a particular function y(x; D). We then have
$$\{y(x; D) - h(x)\}^2 = \{y(x; D) - \mathbb{E}_D[y(x; D)]\}^2 + \{\mathbb{E}_D[y(x; D)] - h(x)\}^2 + 2\{y(x; D) - \mathbb{E}_D[y(x; D)]\}\{\mathbb{E}_D[y(x; D)] - h(x)\}.$$
The Bias-Variance Decomposition (3)
• Taking the expectation over D yields
$$\mathbb{E}_D\!\left[\{y(x; D) - h(x)\}^2\right] = \underbrace{\{\mathbb{E}_D[y(x; D)] - h(x)\}^2}_{(\text{bias})^2} + \underbrace{\mathbb{E}_D\!\left[\{y(x; D) - \mathbb{E}_D[y(x; D)]\}^2\right]}_{\text{variance}}.$$
The Bias-Variance Decomposition (4)
• Thus we can write
$$\text{expected loss} = (\text{bias})^2 + \text{variance} + \text{noise},$$
where
$$(\text{bias})^2 = \int \{\mathbb{E}_D[y(x; D)] - h(x)\}^2 p(x)\, dx,$$
$$\text{variance} = \int \mathbb{E}_D\!\left[\{y(x; D) - \mathbb{E}_D[y(x; D)]\}^2\right] p(x)\, dx,$$
$$\text{noise} = \iint \{h(x) - t\}^2 p(x, t)\, dx\, dt.$$
The Bias-Variance Decomposition (5)
• Example: 25 data sets from the sinusoidal, varying the degree of regularization, λ.
The Bias-Variance Decomposition (6)
• Example: 25 data sets from the sinusoidal, varying the degree of regularization, λ.
The Bias-Variance Decomposition (7)
• Example: 25 data sets from the sinusoidal, varying the degree of regularization, λ.
The Bias-Variance Trade-off
From these plots, we note that an over-regularized model (large λ) will have a high bias, while an under-regularized model (small λ) will have a high variance.
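A hedged simulation of this trade-off, reusing the gaussian_basis and fit_ridge helpers sketched earlier; the data-generating model and all settings are assumptions mirroring the sinusoidal example:

```python
import numpy as np

L_sets, N, s = 100, 25, 0.1                       # data sets, points per set, basis width
centers = np.linspace(0.0, 1.0, 9)
x_test = np.linspace(0.0, 1.0, 200)
h_test = np.sin(2 * np.pi * x_test)               # optimal predictor h(x)
Phi_test = gaussian_basis(x_test, centers, s)

rng = np.random.default_rng(0)
for lam in (1e-3, 1e-1, 10.0):
    preds = np.empty((L_sets, x_test.size))
    for l in range(L_sets):
        x = rng.uniform(0.0, 1.0, N)
        t = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(N)
        preds[l] = Phi_test @ fit_ridge(gaussian_basis(x, centers, s), t, lam)
    y_bar = preds.mean(axis=0)
    bias2 = np.mean((y_bar - h_test) ** 2)        # (bias)^2, averaged over x
    variance = np.mean(preds.var(axis=0))         # variance, averaged over x
    print(f"lambda={lam:g}  bias^2={bias2:.3f}  variance={variance:.3f}")
```

Large λ shrinks every fit toward zero (high bias, low variance); small λ lets each fit chase its own noise (low bias, high variance).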
