In previous posts, we discussed different image pre-processing techniques through different operations using OpenCV. This post takes the next step in image processing: feature detection and feature description, covering the Harris corner detector, FAST, BRIEF, and ORB (an efficient alternative to SIFT and SURF), along with their application using OpenCV.
Here are the links to the image processing techniques discussed earlier:
Table of contents
- Feature in an image
- Difference between feature detector and feature descriptor
- Harris Corner Detection
- FAST (Features from Accelerated Segment Test)
- BRIEF (Binary Robust Independent Elementary Features)
- ORB (Oriented FAST and Rotated BRIEF)
Feature in an image
A feature is a piece of information in an image that helps in identifying or classifying an object for the task of a particular application.
Let’s understand with an example.
Have you ever thought about how children play puzzle games? How do they combine pieces with each other? How do they identify that a particular piece belongs to that location?
By checking specific patterns, shapes, color similarity, edges, corner matching, etc. Features help in extracting meaningful information from the image.
How does a computer find such features?
By using feature detection algorithms.
The process of identifying regions that have maximum intensity variation is called feature detection.
Several algorithms like SIFT, SURF, FAST, BRIEF, ORB help in detecting the features in the image.
Okay! I have detected the features, but how do I find out which category a particular feature belongs to?
Here the feature descriptor comes into the picture.
Whenever a feature is detected, a class, name, or vector representation is assigned to that feature region; this representation is called a feature descriptor.
Most feature extraction algorithms work on this principle: first they detect the features in the image, then use descriptors to identify them, and finally extract the features for a particular use.
Harris Corner Detection for feature detection
Corners have large variations in intensity. Harris Corner Detection algorithm helps in detecting the corners.
How does it work?
- First, it finds the difference in intensities for a displacement of a window over the image.
- To identify maximum variation, it maximizes the intensity-difference function.
- The eigenvalues obtained help in classifying regions as corners, edges, or flat areas.
There is one problem with Harris corner detection: the apparent shape of a corner changes as the image is scaled up or down.
For example, a circle looks round when seen as a whole, but if only a small arc of it is considered, that arc appears almost flat. The Harris corner detection algorithm will not identify the same features in the scaled version.
Before looking at further feature detection algorithms, we need to know the properties a good feature should have:
- Selected features should be rotation invariant: features remain the same when an image is rotated, so detection should too.
- Scale variation should not affect the features (scale invariance).
FAST (Features from Accelerated Segment Test) for feature detection
FAST is a corner detection algorithm. It is computationally efficient.
How to detect features using FAST algorithm?
Steps to detect the features:
- First, a candidate pixel is selected to test whether it can be considered a keypoint, and its intensity is noted.
- An appropriate intensity threshold is set according to the use case.
- A circle of 16 pixels is considered around the selected pixel.
- If ‘n’ contiguous pixels among the 16 are all brighter or all darker than the selected pixel by more than the threshold, the selected pixel is a keypoint. Usually ‘n’ is taken as 12, because if n < 12 too many keypoints get selected.
- A high-speed test rejects non-corner pixels early: only four of the circle pixels (top, bottom, left, right) are examined first, and the candidate is discarded unless at least three of them pass the brightness test. This makes the process faster.
- Another way to speed up the process is to use a machine learning algorithm (a decision tree classifier) learned from the comparisons with neighborhood pixels.
BRIEF (Binary Robust Independent Elementary Features)
BRIEF works as a feature descriptor. A feature descriptor helps in differentiating features from one another.
How does BRIEF work?
- First, we define patches around the keypoint pixels.
- BRIEF converts each patch into a binary feature vector by comparing the intensities of pairs of pixels inside the patch, producing an array of 0s and 1s.
- BRIEF is noise sensitive, so the patch is smoothed with a Gaussian filter first.
ORB Algorithm for feature detection and description
ORB is an efficient, patent-free alternative to SIFT and SURF for feature detection and description.
- It works like SIFT but is significantly faster.
- FAST is the keypoint detector in ORB.
- BRIEF is the descriptor, but in a modified form (rBRIEF) so that it is rotation invariant.
- ORB creates an image pyramid, i.e. down-sampled versions of the image, to extract features from scaled versions as well, making ORB scale invariant.
- After detection, ORB locates keypoints across the different scaled versions.
- Once keypoints are located, an orientation is assigned to each point based on the intensity variation in its neighborhood (the orientation vector points in the direction in which intensity increases).
- ORB uses a greedy search over all possible binary tests to find those that have high variance and a mean close to 0.5, so that the selected features are uncorrelated.
Here is the link to the GitHub repository with the code:
Let us know in the comments if you found this useful!