Feature Extraction
Lecture 5
By: Sarah M. Ayyad
What is Feature Detection?
▪ Feature detection, also called interest point detection or key-point detection, is finding points in the image which are somehow "special" (for example, a corner or a distinctive region).
▪ Feature detection includes methods for computing abstractions of image information and making a local decision at every image point about whether an image feature of a given type is present at that point or not.
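As a hedged illustration of what "finding special points" looks like in practice (assuming MATLAB's Computer Vision Toolbox, which the later slides use; the test image is a standard MATLAB sample), a corner detector can be run directly on a grayscale image:

I = imread('cameraman.tif');              % built-in grayscale sample image
corners = detectHarrisFeatures(I);        % detect corner-like interest points
imshow(I); hold on;
plot(corners.selectStrongest(50));        % overlay the 50 strongest corners
hold off;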
What is Feature Extraction / Feature Description?
● Feature extraction, also referred to as feature description, produces a quantitative representation of a neighborhood of pixels around the detected point (computing a descriptor from the pixels around each interest point).
● It answers how to represent the interesting points we found so that we can compare them with other interesting points (features) in the image.
● The goal is to generate features that exhibit high information-packing properties.
● Features are the input that you feed to your model to produce a prediction or classification.
● A feature descriptor often takes the form of a vector.
● Feature extraction is a key step in many applications.
Feature Extraction (Handcrafted vs. Automatic Extraction)
● In handcrafted feature extraction, we rely on our domain knowledge (or partner with domain experts) to create features that make machine learning algorithms work better. We then feed the produced features to an ML classifier to predict the output (a sketch of this pipeline follows below).
● The idea of deep learning is to apply the learning process in an end-to-end manner (the network learns how to extract its own features).
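A minimal sketch of the handcrafted pipeline, assuming MATLAB's Computer Vision and Statistics and Machine Learning Toolboxes; the training-data variables, the mean-pooling step, and the choice of classifier are illustrative assumptions, not the lecture's prescribed setup:

% Handcrafted pipeline: hand-designed features -> classic ML classifier.
% trainImages (cell array of grayscale images) and trainLabels are assumed to exist.
X = zeros(numel(trainImages), 64);                 % one 64-value SURF summary per image
for k = 1:numel(trainImages)
    I = trainImages{k};
    points   = detectSURFFeatures(I);              % handcrafted detector
    features = extractFeatures(I, points);         % handcrafted descriptors (M-by-64)
    X(k, :)  = mean(features, 1);                  % crude pooling into one fixed-length vector
end
model = fitcecoc(X, trainLabels);                  % feed the handcrafted features to an ML classifier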
Feature Extraction Algorithms (Handcrafted)
● Speeded Up Robust Features (SURF)
● Fast Retina Keypoint (FREAK)
● Oriented FAST and Rotated BRIEF (ORB)
● Non-linear pyramid-based (KAZE)
● Simple square neighborhood (Block)
The proper choice of detection and extraction algorithm depends on the specific application and on whether it needs to be robust to changes in scale or rotation.
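As a sketch of how that choice appears in code (assuming MATLAB's Computer Vision Toolbox), the detector and the descriptor method can be selected independently:

I = imread('cameraman.tif');

% Scale- and rotation-robust choice: SURF detector + SURF descriptors.
surfPoints = detectSURFFeatures(I);
surfDesc   = extractFeatures(I, surfPoints);                 % 64-value SURF vectors

% Binary-descriptor choice: ORB detector + ORB descriptors.
orbPoints = detectORBFeatures(I);
orbDesc   = extractFeatures(I, orbPoints, 'Method', 'ORB');

% Simple, non-invariant choice: square (Block) neighborhood around corners.
cornerPts = detectHarrisFeatures(I);
blockDesc = extractFeatures(I, cornerPts, 'Method', 'Block');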
SURF quick review
After computing the integral image and the Hessian matrix, the SURF algorithm computes a dominant orientation for each detected feature. To do this, the algorithm first identifies a circular neighborhood around the feature. Then, using the local gradient estimates of pixel intensities in this region, a dominant orientation is determined. This orientation allows the algorithm to more easily identify the same feature if it is rotated in another image, so SURF is considered a rotation-invariant algorithm.
Next, a square region is isolated around the feature, with its edges perpendicular to the dominant orientation. The square is then divided into 16 sub-regions, and four gradient values are calculated for each sub-region. Finally, all of these values are collected into a vector to form the 64-value feature descriptor.
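A hedged numeric sketch of that assembly (following the standard SURF formulation of four sums per sub-region; the random values below only stand in for the real Haar-wavelet responses so the snippet is self-contained):

% 4 x 4 = 16 sub-regions, 4 values each -> 64-value descriptor.
descriptor = zeros(1, 64);
idx = 1;
for r = 1:4
    for c = 1:4
        dx = randn(5); dy = randn(5);          % placeholder local gradient estimates
        descriptor(idx:idx+3) = [sum(dx(:)), sum(dy(:)), sum(abs(dx(:))), sum(abs(dy(:)))];
        idx = idx + 4;
    end
end
% numel(descriptor) == 64: the length of the SURF feature vector.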
The detection-extraction process is an important part of feature matching and image registration, but not all workflows involving features require both detection and extraction. For example, image classification often skips the detection step and instead uses other methods to extract the features.
Extracting Features (SURF)
● The extractFeatures() function produces two outputs: features and valid points.
● features is a numeric matrix where each row represents a different feature descriptor.
● validPoints is an array that contains various properties of each extracted feature.
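A minimal usage sketch, assuming MATLAB's Computer Vision Toolbox with a SURF detector (the image name is illustrative):

I = imread('cameraman.tif');                        % grayscale sample image
points = detectSURFFeatures(I);                     % detected interest points
[features, validPoints] = extractFeatures(I, points);
% features    : M-by-64 matrix, one SURF descriptor per row
% validPoints : SURFPoints object holding location, scale, and other
%               properties of each point that yielded a descriptor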
Feature Matching
▪ Feature matching is the process of finding pairs of similar features in two
images.
▪ This means finding the same points on the same object, or on distinct but nearly identical objects, under different viewing conditions.
▪ Feature matching is an integral part of applications like aligning satellite images, stitching multiple images together, and video stabilization, among many others.
Feature Matching Process
01 Detecting feature points
02 Extracting feature descriptors
03 Matching feature descriptors
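Putting the three steps together, a hedged end-to-end sketch in MATLAB (the image file names are illustrative and assumed to be RGB):

% 01 Detect feature points in both images.
I1 = rgb2gray(imread('scene1.jpg'));
I2 = rgb2gray(imread('scene2.jpg'));
pts1 = detectSURFFeatures(I1);
pts2 = detectSURFFeatures(I2);

% 02 Extract feature descriptors around the detected points.
[f1, vpts1] = extractFeatures(I1, pts1);
[f2, vpts2] = extractFeatures(I2, pts2);

% 03 Match the descriptors between the two images.
indexPairs = matchFeatures(f1, f2);
matchedPoints1 = vpts1(indexPairs(:, 1));
matchedPoints2 = vpts2(indexPairs(:, 2));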
How does feature matching work?
Consider two different road signs from different areas.
Assume that there is a pair of detected features on one road sign, as well as an additional point on the other sign.
The extracted feature descriptor gives you a quantitative description of the neighborhood of pixels around the detected feature point. Here, the descriptors are 64-value vectors, with values calculated over a grid within each neighborhood.
The similarity or dissimilarity of the features is reflected in the similarity or dissimilarity of their descriptor vectors. A feature pair is a match if the distance between the descriptors in descriptor space is below some threshold, and not a match otherwise. In the road-sign example, descriptors A and B are close enough to be a match, while descriptors A and C are too far apart.
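A small numeric sketch of that distance test (the descriptors and the threshold value are made up for illustration; matchFeatures applies a comparable criterion internally):

% Three illustrative 64-value descriptors (normally produced by extractFeatures).
A = rand(1, 64);
B = A + 0.01 * randn(1, 64);      % close to A in descriptor space
C = rand(1, 64);                  % unrelated descriptor

threshold = 0.5;                  % illustrative distance threshold
distAB = norm(A - B);             % Euclidean distance in descriptor space
distAC = norm(A - C);

isMatchAB = distAB < threshold;   % close enough  -> A and B are a match
isMatchAC = distAC < threshold;   % too far apart -> A and C are not a match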
Feature Matching – MATLAB
The output of the matchFeatures() function is a list of index pairs. Each pair of indices corresponds to a matched pair of feature descriptors.
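A hedged sketch of how those pairs are read (features1/features2 and validPoints1/validPoints2 are assumed to come from earlier extractFeatures calls on the two images):

indexPairs = matchFeatures(features1, features2);   % N-by-2 list of index pairs

% Row k means: descriptor indexPairs(k, 1) in features1 matches
% descriptor indexPairs(k, 2) in features2.
matchedPoints1 = validPoints1(indexPairs(:, 1));
matchedPoints2 = validPoints2(indexPairs(:, 2));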
In other words, each index pair holds one index into Features 1 (F1, F2, F3, ...) and one index into Features 2; the two indexed descriptors form a matched pair.
Feature Matching – Visualizing Results
To see the matches more clearly, use the montage option.
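In MATLAB the montage view is available through showMatchedFeatures(); a sketch, assuming the images and matched points from the earlier steps (that this is the function behind the slide's screenshots is an assumption):

% Show the two images side by side (montage) with matched points joined by lines.
figure;
showMatchedFeatures(I1, I2, matchedPoints1, matchedPoints2, 'montage');
title('Matched features (montage view)');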