Programming Exercise 7: K-means Clustering and Principal Component Analysis
Introduction
In this exercise, you will implement the K-means clustering algorithm and
apply it to compress an image. In the second part, you will use principal
component analysis to find a low-dimensional representation of face images.
Before starting on the programming exercise, we strongly recommend watching the video lectures and completing the review questions for the associated
topics.
To get started with the exercise, you will need to download the starter
code and unzip its contents to the directory where you wish to complete the
exercise. If needed, use the cd command in Octave/MATLAB to change to
this directory before starting this exercise.
You can also find instructions for installing Octave/MATLAB in the Environment Setup Instructions of the course website.
1 K-means Clustering
In this exercise, you will implement the K-means algorithm and use it
for image compression. You will first start on an example 2D dataset that
will help you gain an intuition of how the K-means algorithm works. After
that, you will use the K-means algorithm for image compression by reducing
the number of colors that occur in an image to only those that are most
common in that image. You will be using ex7.m for this part of the exercise.
(Note: Octave is a free alternative to MATLAB. For the programming exercises, you are free
to use either Octave or MATLAB.)
1.1 Implementing K-means
The inner-loop of the algorithm repeatedly carries out two steps: (i) Assigning each training example x(i) to its closest centroid, and (ii) Recomputing the mean of each centroid using the points assigned to it. The K-means
algorithm will always converge to some final set of means for the centroids.
Note that the converged solution may not always be ideal and depends on the
initial setting of the centroids. Therefore, in practice the K-means algorithm
is usually run a few times with different random initializations. One way to
choose between these different solutions from different random initializations
is to choose the one with the lowest cost function value (distortion).
You will implement the two phases of the K-means algorithm separately
in the next sections.
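As a reference for how the two phases fit together, here is a minimal sketch of the K-means loop, assuming the helper functions findClosestCentroids and computeCentroids described in the next two sections, and a fixed number of iterations (the variable names initial_centroids and max_iters are illustrative):

% Sketch of the K-means loop (not the provided script).
centroids = initial_centroids;
for iter = 1:max_iters
    % Step (i): assign every example to its closest centroid
    idx = findClosestCentroids(X, centroids);
    % Step (ii): recompute each centroid as the mean of its assigned points
    centroids = computeCentroids(X, idx, K);
end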
1.1.1 Finding closest centroids

In the "cluster assignment" phase of the K-means algorithm, the algorithm
assigns every training example x^(i) to its closest centroid, given the current
positions of the centroids. Specifically, for every example i we set

    c^(i) := argmin_j ||x^(i) - μ_j||^2

where c^(i) is the index of the centroid that is closest to x^(i), and μ_j is the
position (value) of the j-th centroid. Note that c^(i) corresponds to idx(i) in
the starter code.
Your task is to complete the code in findClosestCentroids.m. This
function takes the data matrix X and the locations of all centroids inside
centroids and should output a one-dimensional array idx that holds the
index (a value in {1, ..., K}, where K is the total number of centroids) of the
closest centroid to every training example.
You can implement this using a loop over every training example and
every centroid.
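For example, a simple (non-vectorized) version might look like the following sketch, which loops over examples and centroids and keeps the centroid with the smallest squared distance (helper variables such as best_dist are illustrative):

function idx = findClosestCentroids(X, centroids)
% Returns a vector idx holding the index of the closest centroid
% for every example in X (one simple looping approach).
K = size(centroids, 1);
idx = zeros(size(X, 1), 1);
for i = 1:size(X, 1)
    best_dist = Inf;
    for j = 1:K
        d = sum((X(i, :) - centroids(j, :)) .^ 2);  % squared distance
        if d < best_dist
            best_dist = d;
            idx(i) = j;
        end
    end
end
end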
Once you have completed the code in findClosestCentroids.m, the
script ex7.m will run your code and you should see the output [1 3 2]
corresponding to the centroid assignments for the first 3 examples.
You should now submit your solutions.
1.1.2 Computing centroid means

Given assignments of every point to a centroid, the second phase of the
algorithm recomputes, for each centroid, the mean of the points that were
assigned to it. Specifically, for every centroid k we set

    μ_k := (1/|C_k|) Σ_{i ∈ C_k} x^(i)

where C_k is the set of examples that are assigned to centroid k. Your task
is to complete the code in computeCentroids.m to implement this computation.
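A minimal sketch of this step, assuming idx holds the assignments produced by findClosestCentroids and assuming a signature computeCentroids(X, idx, K), might look like this (a simple loop over centroids rather than a vectorized version):

function centroids = computeCentroids(X, idx, K)
% Recomputes each centroid as the mean of the examples assigned to it.
[m, n] = size(X);
centroids = zeros(K, n);
for k = 1:K
    points = X(idx == k, :);              % examples assigned to centroid k
    if ~isempty(points)
        centroids(k, :) = mean(points, 1);
    end                                   % (empty clusters are left at zero here)
end
end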
1.2 K-means on example dataset

After you have completed the two functions (findClosestCentroids and
computeCentroids), the next step in ex7.m will run the K-means algorithm
on a toy 2D dataset to help you visualize how K-means works (Figure 1).
1.3 Random initialization
The initial assignments of centroids for the example dataset in ex7.m were
designed so that you will see the same figure as in Figure 1. In practice, a
good strategy for initializing the centroids is to select random examples from
the training set.
In this part of the exercise, you should complete the function kMeansInitCentroids.m
with the following code:
% Initialize the centroids to be random examples
% Randomly reorder the indices of examples
randidx = randperm(size(X, 1));
% Take the first K examples as centroids
centroids = X(randidx(1:K), :);
The code above first randomly permutes the indices of the examples (using randperm). Then, it selects the first K examples based on the random
permutation of the indices. This allows the examples to be selected at random without the risk of selecting the same example twice.
You do not need to make any submissions for this part of the exercise.
1.4 Image compression with K-means

In this exercise, you will apply K-means to image compression. In a
straightforward 24-bit color representation of an image, each pixel is represented as three 8-bit unsigned integers (ranging from 0 to 255) that specify
the red, green and blue intensity values. This encoding is often referred to as
the RGB encoding. Our image contains thousands of colors, and in this part
of the exercise, you will reduce the number of colors to 16 colors.
By making this reduction, it is possible to represent (compress) the photo
in an efficient way. Specifically, you only need to store the RGB values of
the 16 selected colors, and for each pixel in the image you now need to only
store the index of the color at that location (where only 4 bits are necessary
to represent 16 possibilities).
In this exercise, you will use the K-means algorithm to select the 16 colors
that will be used to represent the compressed image. Concretely, you will
treat every pixel in the original image as a data example and use the K-means
algorithm to find the 16 colors that best group (cluster) the pixels in the 3-dimensional RGB space. Once you have computed the cluster centroids on
the image, you will then use the 16 colors to replace the pixels in the original
image.
1.4.1 K-means on pixels

(The provided photo used in this exercise belongs to Frank Wouters and is used with
his permission.)

In ex7.m, the image is first loaded and reshaped so that every pixel becomes
a 3-dimensional data example (its red, green and blue values), and K-means is
then run on this data. After finding the top K = 16 colors to represent the
image, you can assign each pixel position to its closest centroid using the findClosestCentroids
function. This allows you to represent the original image using the centroid
assignments of each pixel. Notice that you have significantly reduced the
number of bits that are required to describe the image. The original image
required 24 bits for each one of the 128 × 128 pixel locations, resulting in a total
size of 128 × 128 × 24 = 393,216 bits. The new representation requires some
overhead storage in the form of a dictionary of 16 colors, each of which requires
24 bits, but the image itself then only requires 4 bits per pixel location. The
final number of bits used is therefore 16 × 24 + 128 × 128 × 4 = 65,920 bits,
which corresponds to compressing the original image by about a factor of 6.
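A rough sketch of this compression pipeline in Octave/MATLAB might look like the following; the file name bird_small.png and the number of iterations are illustrative assumptions, and ex7.m performs these steps for you:

% Illustrative sketch of the compression steps (ex7.m does this for you).
A = double(imread('bird_small.png')) / 255;    % assumed example image
img_size = size(A);
X = reshape(A, img_size(1) * img_size(2), 3);  % one row per pixel (R, G, B)

K = 16;  max_iters = 10;                       % assumed settings
centroids = kMeansInitCentroids(X, K);
for iter = 1:max_iters
    idx = findClosestCentroids(X, centroids);
    centroids = computeCentroids(X, idx, K);
end

% Represent every pixel by the color of its assigned centroid
idx = findClosestCentroids(X, centroids);
X_recovered = centroids(idx, :);
X_recovered = reshape(X_recovered, img_size(1), img_size(2), 3);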
Figure 3: Original and reconstructed image (when using K-means to compress the image).
Finally, you can view the effects of the compression by reconstructing the
image based only on the centroid assignments. Specifically, you can replace
each pixel location with the mean of the centroid assigned to it. Figure 3
shows the reconstruction we obtained. Even though the resulting image retains most of the characteristics of the original, we also see some compression
artifacts.
You do not need to make any submissions for this part of the exercise.
1.5 Optional (ungraded) exercise: Use your own image
In this exercise, modify the code we have supplied to run on one of your
own images. Note that if your image is very large, then K-means can take a
long time to run. Therefore, we recommend that you resize your images to
manageable sizes before running the code. You can also try to vary K to see
the effects on the compression.
2 Principal Component Analysis

In this exercise, you will use principal component analysis (PCA) to perform
dimensionality reduction. You will first experiment with an example 2D
dataset to get intuition on how PCA works, and then use it on a bigger
dataset of 5000 face images.
The provided script, ex7_pca.m, will help you step through the first half
of the exercise.
2.1 Example Dataset
To help you understand how PCA works, you will first start with a 2D dataset
which has one direction of large variation and one of smaller variation. The
script ex7_pca.m will plot the training data (Figure 4). In this part of the
exercise, you will visualize what happens when you use PCA to reduce the
data from 2D to 1D. In practice, you might want to reduce data from 256 to
50 dimensions, say; but using lower dimensional data in this example allows
us to visualize the algorithms better.
2.2 Implementing PCA
In this part of the exercise, you will implement PCA. PCA consists of two
computational steps: First, you compute the covariance matrix of the data.
Then, you use Octave/MATLAB's SVD function to compute the eigenvectors U_1, U_2, . . . , U_n. These will correspond to the principal components of
variation in the data.
Before using PCA, it is important to first normalize the data by subtracting the mean value of each feature from the dataset, and scaling each dimension so that they are in the same range. In the provided script ex7_pca.m,
this normalization has been performed for you using the featureNormalize
function.
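For reference, the normalization described above (subtract the per-feature mean, then divide by the per-feature standard deviation) can be sketched as follows; the provided featureNormalize function may differ in details:

function [X_norm, mu, sigma] = featureNormalize(X)
% Normalizes each feature: zero mean, roughly unit scale.
mu = mean(X);                                           % 1 x n feature means
sigma = std(X);                                         % 1 x n feature std devs
X_norm = bsxfun(@rdivide, bsxfun(@minus, X, mu), sigma);
end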
After normalizing the data, you can run PCA to compute the principal
components. Your task is to complete the code in pca.m to compute the principal components of the dataset. First, you should compute the covariance
matrix of the data, which is given by:

    Σ = (1/m) X^T X

where X is the data matrix with examples in rows, and m is the number of
examples. Note that Σ is an n × n matrix and not the summation operator.
After computing the covariance matrix, you can run SVD on it to compute
the principal components. In Octave/MATLAB, you can run SVD with the
following command: [U, S, V] = svd(Sigma), where U will contain the
principal components and S will contain a diagonal matrix.
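Putting the two steps together, a minimal sketch of pca.m (assuming X has already been normalized, and assuming it returns U and S) could look like this:

function [U, S] = pca(X)
% Computes the principal components of the (already normalized) dataset X.
m = size(X, 1);
Sigma = (1 / m) * (X' * X);   % n x n covariance matrix
[U, S, V] = svd(Sigma);       % columns of U are the principal components
end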
Once you have completed pca.m, the ex7_pca.m script will run PCA on the
example dataset and plot the corresponding principal components found
(Figure 5). The script will also output the top principal component (eigenvector) found, and you should expect to see an output of about [-0.707
-0.707]. (It is possible that Octave/MATLAB may instead output the negative of this, since U_1 and -U_1 are equally valid choices for the first principal
component.)
You should now submit your solutions.
2.3 Dimensionality Reduction with PCA
After computing the principal components, you can use them to reduce the
feature dimension of your dataset by projecting each example onto a lower
dimensional space, x^(i) → z^(i) (e.g., projecting the data from 2D to 1D). In
this part of the exercise, you will use the eigenvectors returned by PCA and
project the example dataset into a 1-dimensional space.
In practice, if you were using a learning algorithm such as linear regression
or perhaps neural networks, you could now use the projected data instead
of the original data. By using the projected data, you can train your model
faster as there are fewer dimensions in the input.
2.3.1 Projecting the data onto the principal components
You should now complete the code in projectData.m. Specifically, you are
given a dataset X, the principal components U, and the desired number of
dimensions to reduce to K. You should project each example in X onto the
top K components in U. Note that the top K components in U are given by
the first K columns of U, that is U_reduce = U(:, 1:K).
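One simple way to implement this projection (a sketch using only matrix multiplication) is:

function Z = projectData(X, U, K)
% Projects each example (row of X) onto the top K principal components.
U_reduce = U(:, 1:K);     % n x K matrix of the top K components
Z = X * U_reduce;         % m x K matrix of projected examples
end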
Once you have completed the code in projectData.m, ex7_pca.m will
project the first example onto the first dimension and you should see a value
of about 1.481 (or possibly -1.481, if you got -U_1 instead of U_1).
You should now submit your solutions.
2.3.2 Reconstructing an approximation of the data
After projecting the data onto the lower dimensional space, you can approximately recover the data by projecting them back onto the original high
dimensional space. Your task is to complete recoverData.m to project each
example in Z back onto the original space and return the recovered approximation in X_rec.
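A matching sketch for this reconstruction step (again using only the top K columns of U) is:

function X_rec = recoverData(Z, U, K)
% Recovers an approximation of the original data from the projection Z.
U_reduce = U(:, 1:K);     % n x K matrix of the top K components
X_rec = Z * U_reduce';    % m x n matrix of approximate reconstructions
end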
Once you have completed the code in recoverData.m, ex7_pca.m will
recover an approximation of the first example and you should see a value of
about [-1.047 -1.047].
You should now submit your solutions.
2.3.3 Visualizing the projections

After completing both projectData and recoverData, ex7_pca.m will perform
both the projection and an approximate reconstruction so that you can
visualize how the projection affects the data.
2.4 Face Image Dataset
In this part of the exercise, you will run PCA on face images to see how it
can be used in practice for dimension reduction. The dataset ex7faces.mat
contains a dataset X of face images, each 32 × 32 in grayscale. (This dataset
was based on a cropped version of the labeled faces in the wild dataset.) Each row
of X corresponds to one face image (a row vector of length 1024). The next
step in ex7_pca.m will load and visualize the first 100 of these face images
(Figure 7).
2.4.1 PCA on Faces
To run PCA on the face dataset, we first normalize the dataset by subtracting
the mean of each feature from the data matrix X. The script ex7_pca.m will
do this for you and then run your PCA code. After running PCA, you will
obtain the principal components of the dataset. Notice that each principal
component in U (each column) is a vector of length n (where for the face dataset,
n = 1024). It turns out that we can visualize these principal components by
reshaping each of them into a 32 × 32 matrix that corresponds to the pixels
in the original dataset. The script ex7_pca.m displays the first 36 principal
components that describe the largest variations (Figure 8). If you want, you
can also change the code to display more principal components to see how
they capture more and more details.
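For instance, one way to look at a single principal component as an image is to reshape the corresponding column of U back to 32 × 32; this is a small illustrative snippet, not part of the provided scripts:

% Illustrative: view the k-th principal component as a 32 x 32 image.
k = 1;                                    % assumed component index
face_component = reshape(U(:, k), 32, 32);
imagesc(face_component); colormap(gray); axis image; axis off;
% Depending on how the images were unrolled, a transpose may be
% needed to get the expected orientation.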
2.4.2 Dimensionality Reduction
Now that you have computed the principal components for the face dataset,
you can use them to reduce the dimension of the face dataset. This allows you to
use your learning algorithm with a smaller input size (e.g., 100 dimensions)
instead of the original 1024 dimensions. This can help speed up your learning
algorithm.
Figure 9: Original images of faces and ones reconstructed from only the top
100 principal components.
The next part in ex7_pca.m will project the face dataset onto only the
first 100 principal components. Concretely, each face image is now described
by a vector z^(i) ∈ R^100.
To understand what is lost in the dimension reduction, you can recover
the data using only the projected dataset. In ex7_pca.m, an approximate
recovery of the data is performed and the original and projected face images
are displayed side by side (Figure 9). From the reconstruction, you can observe that the general structure and appearance of the face are kept while
the fine details are lost. This is a remarkable reduction (more than 10×) in
the dataset size that can help speed up your learning algorithm significantly.
For example, if you were training a neural network to perform person recognition (given a face image, predict the identity of the person), you can use
the dimension-reduced input of only 100 dimensions instead of the original
pixels.
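In terms of the functions you have already written, this projection and recovery amounts to something like the following sketch, assuming the normalized face data is stored in X_norm and its principal components in U (variable names are illustrative):

K = 100;                                  % number of components to keep
Z = projectData(X_norm, U, K);            % each face becomes a 100-d vector
X_rec = recoverData(Z, U, K);             % approximate 1024-d reconstruction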
2.5 Optional (ungraded) exercise: PCA for visualization
Submission and Grading

Part                       Points
Find Closest Centroids     30 points
Compute Centroid Means     30 points
PCA                        20 points
Project Data               10 points
Recover Data               10 points
Total Points               100 points
You are allowed to submit your solutions multiple times, and we will take
only the highest score into consideration.