Assignment 2, Part 2

CS 585 HW 2
Xueying Han
Zhitong Wu
Sep 25, 2018

Problem Definition

The assignment requires the following:
1. Find the background color and label each shape blob according to its color.
The first problem to solve is finding all major colors of the current image; the background color can then be found with a simple method. The difficulty is that the image contains a lot of noise whose RGB values are similar to the major colors', so the first task is to find all major colors while discarding the noise.

2. Implement a border following algorithm to find each blob's outermost contour.
This can be solved by finding a property that distinguishes the current blob from other blobs, or the current blob from the background. The obvious first candidate is color, but since all blobs are already labeled, I decided to use the labels instead.

3. Distinguish different types of borders.
The solution to this problem is very similar to the solution of the previous one.

4. Distinguish different shapes by their convexities.
5. Recognize each blob's shape type.
The solution to problem 4 can be reused to finish problem 5.
The biggest difficulty is that a blob may have two kinds of borders: borders between the current blob and other blobs, and borders between the current blob and the background. The borders against other blobs can be very confusing, because they may add extra curves or corners to the blob. So the first solution I came up with was to judge the shape using only the borders between the blob and the background. But I also wanted a way to detect the shape without distinguishing border types, since that work is not strictly necessary.

Method and Implementation

1. Find the background color and label each shape blob according to its color.
To find the background color and label all blobs, I first need to find all major colors in the current image.
The relevant functions are:
void getAllColor(Mat& image); // go through all pixels of the image and remove the colors of the noise
void convertToBlackAndWhite(Mat& image, string filename); // test the major colors by creating corresponding black-white images
void labelByColor(Mat& image); // label blobs using the corresponding black-white images
void findBgColor(Mat& image); // find the background color
Since the original image is noisy, I need to reduce the noise. At first I tried existing OpenCV functions such as fastNlMeansDenoisingColored and smoothing functions such as GaussianBlur, but they did not work well, so I had to use another method.
In the function getAllColor, I get the major color by the following steps:

Step 1: Get all colors from the image and store them in a hash table.
The RGB values of the noisy pixels are similar to the major colors' values, and to get an accurate contour of each shape, the noise (which lies mostly on the borders) needs to be absorbed. So instead of the original RGB values I store quantized values, computed as
R / 10 * 10, G / 10 * 10, B / 10 * 10
(integer division, so each channel is rounded down to a multiple of 10).
To count the number of pixels sharing each quantized color, the key of the hash table is the quantized RGB value and the value is the corresponding pixel count.
After this process there are still some quantized RGB values that differ from the major quantized colors and have small counts; pixels with these values need to be ignored in the next steps.
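Step 1 can be sketched as follows. This is a minimal illustration, not the actual submission code: the function names and the std::map standing in for the hash table are my own choices.

```cpp
#include <map>
#include <tuple>

using Color = std::tuple<int, int, int>;

// Quantize each channel down to the nearest multiple of 10
// (integer division truncates), e.g. (137, 42, 9) -> (130, 40, 0).
Color quantize(int r, int g, int b) {
    return { r / 10 * 10, g / 10 * 10, b / 10 * 10 };
}

// Count pixels per quantized color: key = quantized RGB, value = count.
void countColor(std::map<Color, int>& hist, int r, int g, int b) {
    hist[quantize(r, g, b)]++;
}
```

Because nearby noisy pixels quantize to the same bucket as the major color, they are absorbed into its count instead of forming separate entries.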

Step 2: Remove the quantized RGB values with small counts. There are two passes:
1. Convert every [RGB, count] entry of the hash table to a pair and push it into a priority queue, prioritized by count. This yields the pairs sorted by decreasing count. Pop RGB values from the top while their counts are larger than a threshold, chosen as a percentage of the total number of pixels in the image, and put these top N RGB values into a list named colorList for the next pass.
2. After the first pass, some RGB values similar to the major colors still remain, and I found that their per-channel differences from the major colors never exceed a constant, which I store in a variable DIFF. I then walk colorList from the head: for the current RGB value A, I find every later RGB value B whose channels are within DIFF of A's and remove B, because B has RGB values similar to A's but a smaller count.
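The two passes above can be sketched as one function. This is a hedged reconstruction, not the submitted code: the struct, the threshold fraction, and the default DIFF value are illustrative assumptions.

```cpp
#include <queue>
#include <vector>
#include <cmath>
#include <cstdlib>

struct ColorCount { int r, g, b, count; };   // quantized RGB plus pixel count

// Keep colors whose count exceeds minFrac of the total pixel count, then
// drop any later color within DIFF of an already-kept (more frequent) one.
std::vector<ColorCount> majorColors(std::vector<ColorCount> all,
                                    long totalPixels,
                                    double minFrac = 0.01,
                                    int DIFF = 30) {
    // A max-heap sorts the (color, count) pairs by decreasing count.
    auto cmp = [](const ColorCount& a, const ColorCount& b) {
        return a.count < b.count;
    };
    std::priority_queue<ColorCount, std::vector<ColorCount>, decltype(cmp)>
        pq(cmp, std::move(all));

    std::vector<ColorCount> colorList;
    while (!pq.empty() && pq.top().count > minFrac * totalPixels) {
        ColorCount c = pq.top(); pq.pop();
        // Keep c only if no already-kept color is within DIFF on every
        // channel; the earlier color has the larger count, so it wins.
        bool similar = false;
        for (const ColorCount& a : colorList)
            if (std::abs(a.r - c.r) <= DIFF &&
                std::abs(a.g - c.g) <= DIFF &&
                std::abs(a.b - c.b) <= DIFF) { similar = true; break; }
        if (!similar) colorList.push_back(c);
    }
    return colorList;
}
```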

Step 3: Restore the true major colors to the hash table, which is used by the other functions.

I was not sure how well this method performs, so I wrote a function to check it: pixels of the original image whose quantized color appears in the hash table are converted to white, and all remaining pixels to black. I was very happy to see that the result was good.
Next I need to label the different blobs according to their colors. Since the test function already iterates over all major colors, and each major color maps to one blob, the labeling can be done inside it.
First I define a Mat colorLabel_image with the same size as the original image, and a variable colorLabel to record the current label. For each major color the test function visits, I go through the whole original image looking for pixels whose quantized RGB values equal the current major color and write colorLabel into colorLabel_image at the same position. After all major colors have been processed, colorLabel_image is filled.

The background color is the major color with the largest number of pixels on the border of the original image.
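This border-count idea can be sketched as follows (again with a plain 2-D array of quantized color indices in place of the Mat; the function shape is an assumption):

```cpp
#include <map>
#include <vector>

// Count how often each quantized color appears on the image's outer
// border and return the most frequent one as the background color.
int findBgColor(const std::vector<std::vector<int>>& quantized) {
    int rows = (int)quantized.size(), cols = (int)quantized[0].size();
    std::map<int, int> borderCount;
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
            if (r == 0 || r == rows - 1 || c == 0 || c == cols - 1)
                borderCount[quantized[r][c]]++;
    int bg = -1, best = 0;
    for (auto& p : borderCount)
        if (p.second > best) { best = p.second; bg = p.first; }
    return bg;
}
```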

2. Implement a border following algorithm to find its outermost contour.

The relevant data structures are:
vector<Point> borderPixels; // all border pixels, whether between the background and blobs or between blobs
vector<Point> borderVSbg; // border pixels between the background and blobs
vector<vector<vector<int>>> blobs; // all pixels of every blob

The relevant functions are:
void colorFindContours(Mat& image, string filename); // first method to get contours
void colorFindContours2(Mat& image, string filename); // second method to get contours --- much easier and cleaner than the first one

To find the border of each blob, I came up with two ideas.

The first method: a pixel is a border pixel of a blob if one of its N8 neighbours has a label number different from the border pixel's own label number.
For example, the current border pixel might have label number 3 while one of its N8 neighbours carries a different label.
After finding a border pixel, its position is stored in the vector borderPixels.
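The neighbour check can be sketched as a small predicate (a plain 2-D label array stands in for colorLabel_image; out-of-range neighbours are simply skipped here, which is one possible boundary policy):

```cpp
#include <vector>

// A pixel is a border pixel when any of its 8 neighbours (N8)
// carries a label different from its own.
bool isBorderPixel(const std::vector<std::vector<int>>& labels,
                   int r, int c) {
    int rows = (int)labels.size(), cols = (int)labels[0].size();
    for (int dr = -1; dr <= 1; dr++)
        for (int dc = -1; dc <= 1; dc++) {
            if (dr == 0 && dc == 0) continue;       // skip the pixel itself
            int nr = r + dr, nc = c + dc;
            if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
            if (labels[nr][nc] != labels[r][c]) return true;
        }
    return false;
}
```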

The second method stores all pixels of each blob in the data structure vector<vector<vector<int>>> blobs. This data structure is very useful. Each blob is represented by a vector<vector<int>> blob with the same size as the original image, initialized to 0. Every pixel (r, c) of the image whose quantized color equals the current blob's maps to the same position, so blob[r][c] = 1. The border of the current blob can then be obtained by:

vector<Point> edge;
int cols = (int)blob[0].size();
int rows = (int)blob.size();

// for each row, record the leftmost and rightmost blob pixel
for (int i = 0; i < rows; i++) {
    int start = 0, end = cols - 1;
    while (start < cols) {
        if (blob[i][start] == 1) { edge.push_back(Point(i, start)); break; }
        start++;
    }
    while (end >= 0) {
        if (blob[i][end] == 1) { edge.push_back(Point(i, end)); break; }
        end--;
    }
}

// for each column, record the topmost and bottommost blob pixel
for (int j = 0; j < cols; j++) {
    int start = 0, end = rows - 1;
    while (start < rows) {
        if (blob[start][j] == 1) { edge.push_back(Point(start, j)); break; }
        start++;
    }
    while (end >= 0) {
        if (blob[end][j] == 1) { edge.push_back(Point(end, j)); break; }
        end--;
    }
}

3. Distinguish different types of borders.
1. Borders that lie on the border of the whole image
These can be identified by x == 0 or x == image.rows - 1, or y == 0 or y == image.cols - 1.

2. Borders against the background and borders against another shape blob
Check the quantized colors of the current border pixel's N8: if a neighbour's quantized color equals the background's quantized color, the pixel is on a border against the background; otherwise it is on a border against another shape blob.
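The two checks combine into one classifier, sketched here under stated assumptions (quantized holds quantized color indices, bgColor is the background's index, and the enum names are illustrative):

```cpp
#include <vector>

enum BorderType { IMAGE_BORDER, VS_BACKGROUND, VS_OTHER_BLOB };

// Classify a pixel already known to be a border pixel: image border
// first, then background border if any N8 neighbour is background-
// colored, otherwise a border against another blob.
BorderType classifyBorder(const std::vector<std::vector<int>>& quantized,
                          int r, int c, int bgColor) {
    int rows = (int)quantized.size(), cols = (int)quantized[0].size();
    if (r == 0 || r == rows - 1 || c == 0 || c == cols - 1)
        return IMAGE_BORDER;
    for (int dr = -1; dr <= 1; dr++)
        for (int dc = -1; dc <= 1; dc++) {
            if (dr == 0 && dc == 0) continue;
            if (quantized[r + dr][c + dc] == bgColor)
                return VS_BACKGROUND;   // touches a background pixel
        }
    return VS_OTHER_BLOB;
}
```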

4. Distinguish different shapes by their convexities && 5. Recognize the blob's shape type

The relevant data structures are:
vector<pair<int, int>> statistic(3, make_pair(0, 0));

The relevant functions are:
void detectTheEdge(Mat& image); // the first method to judge the shape
void detectBlobsEdge(); // the second method to judge the shape

The main idea is to distinguish the three shapes by calculating the slopes of tangents along each blob's border against the background.
The slope of each tangent can be calculated as follows. Given border pixel1 (x1, y1) and the next border pixel (x2, y2) (the next border pixel must be one of pixel1's N8),
slope = (y1 - y2) / (x1 - x2)
First, circles are distinguished from triangles and squares by a ratio named straightOrCircle, defined as
Count(distinct slopes) / (Count(border pixels) - 1)
If straightOrCircle is larger than a threshold, there are many different slopes and the border is probably a curve. Otherwise, it is probably made of straight lines.
After distinguishing curves from straight lines, triangles and squares can be classified by the number of right angles.
If line1's slope is k1 and line2's slope is k2, they form a right angle when
k1 * k2 = -1
So, to count the number of right angles, there are several steps:
Store all distinct slopes in a hash table, where the key is the slope and the value is the count of that slope.
Go through each slope; if -1/slope is also in the hash table, there is a right angle.
Calculate the ratio named triangleOrRectangular, defined as
Count(right angles) / Count(angles)
A special case is when line1 is parallel to the x axis and line2 is parallel to the y axis: line1's slope is 0 and line2's slope is infinite, so this pair must be checked separately.
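The right-angle count, including the horizontal/vertical special case, can be sketched as below. This is an illustrative reconstruction: the sentinel for infinite slope, the epsilon, and the pair-counting convention are my own assumptions.

```cpp
#include <cmath>
#include <map>

const double INF_SLOPE = 1e9;   // sentinel standing in for a vertical line
const double EPS = 1e-6;

// slopes: histogram of distinct slopes (slope -> count). Two slopes form
// a right angle when k1 * k2 = -1, or when one is 0 and the other is the
// vertical sentinel; each unordered pair is counted once.
int countRightAngles(const std::map<double, int>& slopes) {
    int rightAngles = 0;
    for (auto& a : slopes)
        for (auto& b : slopes) {
            if (a.first > b.first) continue;        // count each pair once
            bool perpendicular =
                std::fabs(a.first * b.first + 1.0) < EPS ||   // k1*k2 = -1
                (std::fabs(a.first) < EPS && b.first == INF_SLOPE) ||
                (a.first == INF_SLOPE && std::fabs(b.first) < EPS);
            if (perpendicular) rightAngles++;
        }
    return rightAngles;
}
```

For an axis-aligned square the histogram holds only slopes 0 and INF_SLOPE, which match via the special case; for a 45-degree square the slopes 1 and -1 match via k1 * k2 = -1.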


Experiments

To test the accuracy of the code, there are several steps:
Step 1
Find the annotation image related to the current blob. The images in the annotations folder are black-white images, and some blobs with the same shape share one image while others do not, so I can't find the black-white image related to the current blob simply by comparing the centroid of the blob with the centroid of every white shape in every black-white image.
After discussing with my teammate, we decided we can find the related black-white image by scanning every black-white image, counting the number of mapping pixels, and computing the rate defined as
Count(mapping pixels) / Count(all of the blob's pixels)
The black-white image with the highest rate is the related image.
Step 2
The name of the shape can be obtained by splitting the filename of the related black-white image.

Relevant parameter values
TP rate of Circle = Count(detected circle && ground-truth circle) / Count(detected circle)
TP rate of Square = Count(detected square && ground-truth square) / Count(detected square)
TP rate of Triangle = Count(detected triangle && ground-truth triangle) / Count(detected triangle)

The folder shapes_train2018 contains 500 images. The TP rate of circle is 0.970588, the TP rate of square is 0.507589, and the TP rate of triangle is 0.



Trials (each with a source image and a result image, not reproduced here):
- Distinguish colors
- Find contours, method 1
- Find contours, method 2
- Result of one image


There is obviously something wrong with the detection of triangles. I calculated the average values of triangleOrRectangular and straightOrCircle for each kind of shape and found that circles' and triangles' averages for these two ratios are similar, so every triangle is detected as a circle.
Unfortunately, I could not find exactly where I went wrong. I still think using slopes is the right way to detect the shapes, but the way the slopes are calculated may produce some incorrect values.