In previous posts, we discussed different image pre-processing techniques and operations using OpenCV. This post covers the next step in image processing: feature detection and feature matching, including algorithms such as SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features), and their application using OpenCV.
Here are the links for image processing techniques discussed before:
Table of contents
- Feature in an image
- Difference between feature detector and feature descriptor
- Harris Corner Detection
- FAST (Features from Accelerated Segment Test)
- BRIEF (Binary Robust Independent Elementary Features)
- ORB (Oriented FAST and Rotated BRIEF)
What is meant by feature in an image?
A feature is a piece of information in an image that is used to identify or classify objects for the task of a particular application.
Let’s understand with an example.
Have you ever wondered how children solve a jigsaw puzzle? How do they fit the pieces together? How do they identify that a particular piece belongs in a particular location?
By checking specific patterns, shapes, color similarity, edges, matching corners, and so on. All of these are known as features of the image, because they are used to extract meaningful information from it.
To illustrate this let’s check this image:
Here you can see three patches taken from the image, named A, B, and C.
If we take patch A and move it over the image, we cannot pin down the exact location it was taken from, because it covers a flat area: similar intensities appear all over the image.
If we take patch B and move it over the image, we can locate it with higher probability, as this patch contains an edge; it is still ambiguous along the direction of the edge, though.
If we take patch C, it contains a corner, so it gives a more unique location than an edge. A corner has intensity variation in both directions, so it makes a better feature.
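The intuition behind patches A, B, and C can be sketched with a tiny NumPy experiment (toy patches, not taken from the actual image above): a flat patch has no intensity variation, an edge varies in only one direction, and a corner varies in both.

```python
import numpy as np

# Three toy 10x10 grayscale patches standing in for A, B and C
flat = np.full((10, 10), 100.0)                 # patch A: uniform region
edge = np.hstack([np.full((10, 5), 100.0),      # patch B: vertical edge
                  np.full((10, 5), 200.0)])
corner = np.full((10, 10), 100.0)               # patch C: bright corner
corner[:5, :5] = 200.0

def directional_variation(patch):
    """Summed squared intensity change along x and along y."""
    gy, gx = np.gradient(patch)
    return (gx ** 2).sum(), (gy ** 2).sum()

for name, patch in [('flat', flat), ('edge', edge), ('corner', corner)]:
    vx, vy = directional_variation(patch)
    print(name, vx, vy)
# flat varies in neither direction, edge in only one, corner in both --
# which is why a corner can be localized uniquely.
```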
How does computer find such features?
By using feature detection algorithms.
The process of identifying regions with maximum intensity variation is called feature detection.
There are several algorithms, such as SIFT, SURF, FAST, BRIEF, and ORB, that are used to detect features in an image.
Okay! I have detected the features, but how do I find out which category a particular feature belongs to?
Here the feature descriptor comes into the picture.
Whenever a feature is detected, a label or vector encoding is assigned to that feature region; this encoding is called a feature descriptor.
Most feature extraction algorithms work on this principle: first they detect features in the image, then use descriptors to identify them, and finally extract the features for a particular use.
Harris Corner Detection for feature detection
As we have seen, corners are considered important features of an image because they show large variation in intensity. The Harris corner detection algorithm detects such corners.
How does it work?
- First, it computes the difference in intensity for a displacement of a small window over the image.
- To identify the maximum variation, it maximizes this intensity-difference function.
- Based on the eigenvalues obtained, the region is classified as a corner, an edge, or flat.
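For reference, the standard Harris formulation behind these steps (not spelled out in the original post) scores each window with a corner response computed from the matrix M of image gradients:

```latex
M = \sum_{x,y} w(x,y)
\begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix},
\qquad
R = \det(M) - k\,(\operatorname{trace}\,M)^2
```

A large positive R indicates a corner, R < 0 an edge, and a small |R| a flat region; k is the free Harris parameter.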
```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('polygon.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)

# blockSize=3, ksize=5 (Sobel aperture), k=0.001 (Harris free parameter)
dst = cv2.cornerHarris(gray, 3, 5, 0.001)

# Threshold for an optimal value; it may vary depending on the image.
img[dst > 0.01 * dst.max()] = [255, 0, 0]

plt.figure(figsize=(15, 8))
plt.imshow(img)
```
There is one problem with Harris corner detection: when an image is scaled up or down, the appearance of its corners changes.
For example, a circle looks round when seen as a whole, but if only a small arc of it is examined, that arc appears almost flat. So if the Harris corner detection algorithm is applied to a scaled version of an image, it may fail to identify the same features.
Before looking at further feature detection algorithms, we need to know which properties to pay attention to:
- Selected features should be rotation invariant: if the image is rotated, the features remain the same, so detection should give the same result.
- Features should be selected in such a way that scale variation does not affect them.
FAST (Features from Accelerated Segment Test) for feature detection
FAST is basically a corner detection algorithm, but it is widely used for feature detection as well, because corners are considered important features in an image. It is computationally efficient.
How to detect features using FAST algorithm?
The following steps are used to detect features:
- First, a candidate pixel is selected, for which we want to know whether it can be considered a keypoint.
- An appropriate intensity threshold is set according to the use case.
- Then a circle of 16 pixels around the selected pixel is considered.
- The selected pixel is considered a keypoint only if its intensity is higher or lower than that of 'n' of the 16 circle pixels. Usually 'n' is taken as 12, because if n < 12, too many keypoints get selected.
- To make this process faster, non-corner pixels are excluded early while comparing intensities with the neighborhood: as in this image, the pixel is first compared with only pixels 1, 5, 9, and 13.
- Another way to speed up the process is to use a machine learning algorithm (a decision tree classifier) for the comparison with the neighborhood pixels.
BRIEF (Binary Robust Independent Elementary Features)
BRIEF is used as a feature descriptor. As we discussed before, a feature descriptor is used to differentiate features from one another.
How does BRIEF work?
- First, we define a patch around each keypoint pixel; a patch is a defined neighborhood area around the keypoint.
- BRIEF converts the patch into a binary feature vector, i.e. an array of 0s and 1s.
- Because BRIEF is sensitive to noise, the patch is first smoothed with a Gaussian filter.
BRIEF is widely used because it is very fast and has a high recognition rate.
ORB Algorithm for feature detection and description
As SIFT and SURF are patented algorithms, ORB is the best alternative to these two for feature detection and description.
- It works like SIFT but is two times faster.
- FAST is used as the keypoint detector in ORB.
- BRIEF is used as the descriptor, but in a modified way so that it becomes rotation invariant.
- ORB creates an image pyramid, i.e. downsampled versions of the image, to extract features from scaled versions as well, which makes ORB scale invariant.
- After detection, ORB locates the keypoints across the different scaled versions.
- Once keypoints are located, an orientation is assigned to each point based on the intensity variation of the neighborhood pixels (the orientation arrow points in the direction in which intensity increases).
- After feature detection, BRIEF is used for description.
- ORB uses a greedy search over all possible feature vectors to find those which have high variance and a mean close to 0.5, so that all features are uncorrelated. This modified BRIEF is called rBRIEF.
```python
import cv2
import matplotlib.pyplot as plt

img = cv2.imread('zebra.jpg', 0)

# Create the ORB detector
orb = cv2.ORB_create()

# Find the keypoints with ORB
kp = orb.detect(img, None)

# Compute the descriptors with ORB
kp, des = orb.compute(img, kp)

# Draw the keypoints (the third argument is the output image, not the
# descriptors, so None is passed here)
img_new = cv2.drawKeypoints(img, kp, None, color=(0, 255, 0), flags=0)

plt.figure(figsize=(15, 8))
plt.imshow(img_new)
```
Here is the link to the GitHub repository for the code: