A set of images is given as input. Each image contains one to four different shape blobs (square, triangle, circle) in different colors.
The requirements for the program are:
- For each shape image, determine its background color and label each shape blob according to its color.
- For each shape blob, implement a border following algorithm to find its outermost contour (i.e. the border pixels) and compare it with OpenCV's "findContours" function.
- For each shape blob, classify its border pixels (in other words, segment its outermost contour) into three types: borders against the background, borders against another shape blob, and borders that lie on the boundary of the whole image.
- (Optional) You may also segment the border according to their convexity, i.e. find out all convex segments and concave segments. This may help you analyze its shape type.
- For each shape blob, come up with an algorithm that can recognize its shape type (square, circle, or triangle).
Method and Experiments
The general ideas came out of discussions between Jinghui and me. Based on those discussions, we each came up with our own implementation.
Step1. Denoise the Image and Determine the Background Color
The original image is in JPEG format. Due to JPEG's lossy compression, the original images are noisy. As illustrated below, the colors at the edges of blobs are blurred, so a single blob contains several slightly different colors. We need to denoise the image before finding the contours.
We check the color of each pixel and count its occurrences. The color that appears most often is taken as the background color.
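As a sketch of this majority-vote idea (illustrative Python, not our actual implementation, which operates on OpenCV Mats):

```python
from collections import Counter

def background_color(image):
    """Return the most frequent pixel color in a 2-D grid of (R, G, B) tuples.
    Majority-vote sketch of the background-color step."""
    counts = Counter(pixel for row in image for pixel in row)
    return counts.most_common(1)[0][0]

# Tiny example: a white 3x3 image with a single red pixel.
img = [[(255, 255, 255)] * 3 for _ in range(3)]
img[1][1] = (255, 0, 0)
print(background_color(img))  # (255, 255, 255)
```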
To denoise the image, we use OpenCV's averaging filter blur():
void blur(InputArray src, OutputArray dst, Size ksize, Point anchor=Point(-1,-1), int borderType=BORDER_DEFAULT);
In our case, the parameter ksize = Size(3, 3).
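For reference, a 3x3 averaging filter in the spirit of blur() with ksize = Size(3, 3) can be sketched as follows. This toy version replicates edge pixels at the image border, a simplification that differs slightly from OpenCV's default reflected-border handling:

```python
def box_blur3(channel):
    """3x3 average filter on a single channel (2-D list of ints).
    Border pixels are replicated (clamped indices), a simplification
    of OpenCV's default border mode."""
    h, w = len(channel), len(channel[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp = replicate border
                    xx = min(max(x + dx, 0), w - 1)
                    total += channel[yy][xx]
            out[y][x] = total // 9
    return out

# A uniform channel is unchanged by averaging.
print(box_blur3([[9] * 3 for _ in range(3)]))  # [[9, 9, 9], [9, 9, 9], [9, 9, 9]]
```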
Step2. Find Contours of Image
The denoised image is ready for us to find its contours. For each pixel in the image, if its RGB value is considerably different (threshold = 10) from the RGB value of one of the pixels in its N8 neighbourhood, the pixel is marked as a contour pixel. After this procedure, we use the erode() function to thin the contours to a proper width.
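A sketch of this marking rule (here "considerably different" is interpreted as a per-channel absolute difference, which is an assumption on our part):

```python
def mark_contours(image, threshold=10):
    """Mark a pixel as contour if any N8 neighbour differs from it by more
    than `threshold` in some RGB channel. Returns a boolean mask."""
    h, w = len(image), len(image[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r, g, b = image[y][x]
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        nr, ng, nb = image[yy][xx]
                        if max(abs(r - nr), abs(g - ng), abs(b - nb)) > threshold:
                            mask[y][x] = True
    return mask
```

On an image whose left half is white and right half is black, only the two columns touching the color change are marked, which is why the raw result is a thick two-pixel band that still needs thinning.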
Step3. Fill and Color
Some noise remains in the original image. Now that we have the contours, we can fill and color each enclosed region to get an image in which every shape blob contains exactly one color. We then dilate the image for further use.
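The filling step can be illustrated with a standard iterative flood fill (a generic sketch, not our actual code):

```python
from collections import deque

def flood_fill(image, seed, new_color):
    """Iterative 4-connected flood fill: repaint the connected region that
    contains `seed` with `new_color`. Sketch of the 'fill and color' step."""
    h, w = len(image), len(image[0])
    y0, x0 = seed
    old = image[y0][x0]
    if old == new_color:
        return image
    queue = deque([seed])
    image[y0][x0] = new_color
    while queue:
        y, x = queue.popleft()
        for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= yy < h and 0 <= xx < w and image[yy][xx] == old:
                image[yy][xx] = new_color
                queue.append((yy, xx))
    return image
```

Because the fill is 4-connected, it stops at the 8-connected contour pixels and never leaks through a diagonal gap.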
Step4. Add Borders to the Image
We need contours to perform the remaining tasks, but the image from Step3 has no contours. We add the contours to the image from Step3 using a bitwise OR. Because of the dilation, some blobs are no longer entirely surrounded by contours, so we implemented a repairTheContours(Mat &image) function to close these gaps.
Step5. Smooth and Recolor
We want the contours to have a width of 1 pixel. To achieve this, I implemented a recolorAndSmooth(Mat &image) function that thins the contours and refills the holes left after contour pixels are removed.
Step6. Classify the Borders
It is easy to classify the borders once they have been thinned to 1 pixel. We use white for borders between blobs, blue for borders between a blob and the background, and red for borders between a blob and the boundary of the image.
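The per-pixel rule can be sketched as follows (the label encoding, function name and returned strings are illustrative assumptions, not our actual code):

```python
def classify_border_pixel(labels, y, x, background=0):
    """Classify one border pixel in a label image, where `background` marks
    background pixels and other values mark blobs or contour pixels.
    Returns 'image' / 'background' / 'blob', rendered red / blue / white."""
    h, w = len(labels), len(labels[0])
    if y in (0, h - 1) or x in (0, w - 1):
        return 'image'                      # lies on the image boundary -> red
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if labels[y + dy][x + dx] == background:
                return 'background'         # faces the background -> blue
    return 'blob'                           # squeezed between two blobs -> white
```

Because the borders are exactly 1 pixel wide, checking the N8 neighbourhood of each border pixel is sufficient: no second layer of contour pixels can hide the background from view.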
Step7. Determine the Shapes
We use gradients to classify the shape: we compute the slope between sampled points on the border against the background. If most slopes are 0, the blob is a square. If a certain proportion of the slopes is near √3, the blob is a triangle. Otherwise, the blob is a circle.
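A toy version of this slope test (the sampling step and thresholds below are illustrative choices, not the values from our program):

```python
import math

def classify_shape(points, step=3):
    """Slope-histogram sketch: sample slopes between border points `step`
    apart. Mostly-zero slopes -> square; a fair share near sqrt(3) ->
    (equilateral) triangle; otherwise circle. Thresholds are illustrative."""
    slopes = []
    for i in range(0, len(points) - step, step):
        (x0, y0), (x1, y1) = points[i], points[i + step]
        if x1 != x0:                        # skip vertical chords
            slopes.append(abs((y1 - y0) / (x1 - x0)))
    if not slopes:
        return 'square'
    zeroish = sum(1 for s in slopes if s < 0.2) / len(slopes)
    rootish = sum(1 for s in slopes if abs(s - math.sqrt(3)) < 0.3) / len(slopes)
    if zeroish > 0.5:
        return 'square'
    if rootish > 0.25:
        return 'triangle'
    return 'circle'
```

Vertical chords are skipped (infinite slope), so an axis-aligned square contributes almost only zero slopes, while the two slanted sides of an equilateral triangle contribute slopes near √3.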
Step8. Calculate the Precision
To calculate the precision, we compute the centroid of each blob and check whether it lies inside one of the white blobs of the annotation masks. If so, we compare the filename with the estimated shape type to count TP and FP.
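The centroid used for this lookup is simply the mean of the blob's pixel coordinates (OpenCV's cv::moments gives the same result via m10/m00 and m01/m00):

```python
def centroid(pixels):
    """Centroid of a blob given as a list of (x, y) pixel coordinates;
    used to look the blob up in the annotation mask."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    return cx, cy

print(centroid([(0, 0), (2, 0), (0, 2), (2, 2)]))  # (1.0, 1.0)
```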
| Trial | Source Image | Background color | Borders | Classified Contours | Blobs |
|-------|--------------|------------------|---------|---------------------|-------|
Discussion of our method and results:
- Not all of the shapes are recognized; some are missed. We believe this is because not all contours are marked out. In addition, the dilations and erosions applied during the pipeline can destroy parts of the contours, making a shape unrecognizable.
- We implemented our own findContours() function. Compared to OpenCV's findContours(), ours is not able to store and output contour information. Moreover, it only marks out one thick layer of contour, whereas the OpenCV function is capable of finding the outermost contour.
- The function I implemented to smooth and recolor the images can distort the contours and the corresponding shape blobs, making it difficult to determine their shapes.
- Find better parameters and methods to denoise the images.
- Improve the shape recognition by computing slopes along the borders from other directions as well.
- Border convexity could also be used to classify the blobs.
- Other methods, for example machine learning, could be applied to this task.
We successfully implemented a program that meets all the mandatory requirements. Our program determines the background color of every image in the dataset, detects most of the shapes, finds their contours, classifies the borders, and segments each blob according to its color. The precision is acceptable, but there is still plenty of room for improvement.
Credits and Bibliography
- Code of Lab2
- Find the Center of a Blob (Centroid) using OpenCV: https://www.learnopencv.com/find-center-of-blob-centroid-using-opencv-cpp-python/ Access date: 09/26/2018