The two segmentation methods we used were absolute thresholding and region growing. For absolute thresholding,
we created a method that accepts an image, an RGB threshold, and a boolean indicating whether to threshold in
greyscale, and returns a binary image with all the "objects" in white and the background in black.
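A minimal sketch of such a thresholding method is below. The function name, the use of a channel mean for the greyscale case, and the "all channels must exceed their threshold" rule for the RGB case are our assumptions about one reasonable implementation, not a transcription of our exact code:

```python
import numpy as np

def absolute_threshold(image, threshold, greyscale=False):
    """Return a binary mask: object pixels white (255), background black (0).

    `image` is an H x W x 3 array; `threshold` is a single scalar when
    `greyscale` is True, or a per-channel triple otherwise. The name and
    exact rules here are illustrative assumptions.
    """
    if greyscale:
        # Average the channels and compare against one scalar cutoff.
        grey = image.mean(axis=2)
        mask = grey >= threshold
    else:
        # Require every channel to meet or exceed its own cutoff.
        mask = np.all(image >= np.asarray(threshold), axis=2)
    return mask.astype(np.uint8) * 255
```

For example, `absolute_threshold(img, 128, greyscale=True)` marks every pixel whose average intensity is at least 128 as an object pixel.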
For region growing, we dilated and then eroded the image to remove holes in some objects, or eroded and then
dilated to get rid of small specks that should not have been considered objects. Afterwards, our
region detection algorithm found the edges between an object and the background and colored in the object.
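The two morphological passes described above (dilate-then-erode, also called closing, and erode-then-dilate, also called opening) can be sketched with a plain NumPy square structuring element; the helper names and the window size parameter `k` are our own for illustration, and in practice OpenCV's built-in erode/dilate functions would do the same job:

```python
import numpy as np

def dilate(mask, k=1):
    # Binary dilation: a pixel is on if any pixel in the
    # (2k+1)x(2k+1) window around it is on. Zero padding at borders.
    padded = np.pad(mask, k)
    h, w = mask.shape
    out = np.zeros_like(mask)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

def erode(mask, k=1):
    # Binary erosion: a pixel stays on only if its whole window is on.
    padded = np.pad(mask, k)
    h, w = mask.shape
    out = np.ones_like(mask)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out &= padded[dy:dy + h, dx:dx + w]
    return out

def close_holes(mask, k=1):
    # Dilate then erode: fills small holes inside objects.
    return erode(dilate(mask, k), k)

def remove_specks(mask, k=1):
    # Erode then dilate: deletes specks smaller than the window.
    return dilate(erode(mask, k), k)
```

Choosing the order of the two operations is the whole trick: closing first preserves specks, while opening first can enlarge holes, so the pass is picked per data set.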
For our data sets we used the aquarium, cell, and bat images. We explored a variety of thresholds with both
segmentation methods and, especially for the fish pictures, tried both greyscale and RGB thresholds
to pick objects out from the background. We evaluated how many of the detected objects were actually the objects
we were looking for and how many were background or other things in the image; the ideal outcome would be that
every detected object belongs to the object set we were looking for, with nothing left out. We ran multiple trials,
adjusting the thresholds slightly each time and judging whether each adjustment had a positive or negative impact
on the objects picked up.
From the images, it can be seen that we had the greatest success with the bat images, somewhat worse results
with the cell images, and terrible results with the fish images.
For the bats we used the greyscale pictures, and it was quite easy to pick out bats, as they were white on a
black background, except at the bottom of the picture, where a large area of white
washed out any bats inside that region. To determine whether bats had their wings folded or spread,
we could look at circularity (higher values suggesting folded wings) or the perimeter-to-area
ratio (lower values suggesting folded wings).
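A common circularity measure, which we assume here, is 4&#960;A/P&#178;; it equals 1 for a perfect circle and drops toward 0 for elongated shapes such as a bat with spread wings:

```python
import math

def circularity(area, perimeter):
    # 4*pi*A / P^2: exactly 1.0 for a circle, lower for elongated
    # shapes, so a folded (compact) bat scores higher than a spread one.
    return 4.0 * math.pi * area / (perimeter ** 2)
```

For a circle of radius r (area &#960;r&#178;, perimeter 2&#960;r) this yields 1.0, while a square (area s&#178;, perimeter 4s) yields &#960;/4 &#8776; 0.785.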
For the cells, we were able to detect them, but we had difficulty connecting pieces of the same
cell together: the center of a cell was about the same color as the background, so while we picked up the
edges between the cells and the outside well, it was hard for our algorithms to tell the insides
of the cells apart from the background, or to recognize that some regions were different walls of the
same cell rather than new objects. The overall light level also especially hurt our absolute thresholding
algorithm: the threshold stayed fixed while the lighting in the images changed over time, so
sometimes more cells were picked up and sometimes fewer.
We had a lot of difficulty with the fish, as there were many bright things in the foreground, many
small fish all along the bottom, and some larger fish in dim lighting above
them. We were unable to do much to differentiate fish from the bright plants. We tried color
thresholds instead of black-and-white thresholds, but we could not find a good RGB
combination to separate the fish (more blue) from the plants (more green).
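One family of combinations we searched over can be sketched as a channel-difference test; this is a hypothetical example of the idea (the function name, the assumed BGR channel order, and the margin value are illustrative, not a combination we validated):

```python
import numpy as np

def blue_dominant_mask(image, margin=20):
    """Mark pixels whose blue channel exceeds green by `margin`.

    Assumes BGR channel order (OpenCV's convention); the margin of 20
    is an arbitrary illustrative value, not a tuned one.
    """
    b = image[:, :, 0].astype(int)  # cast so the subtraction can go negative
    g = image[:, :, 1].astype(int)
    return ((b - g) > margin).astype(np.uint8) * 255
```

In our images the plants were bright in all channels, so a simple difference like this still let many plant pixels through, which is why the search for a good combination failed.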
If we were given more time, we would like to implement an upper threshold in addition to a lower
threshold, as that would help us in the fish pictures to filter out the very bright plants and
also help with differentiating between the blueness of the fish and the greenness of the plants. We
would also like to experiment more with dilation and erosion, as they were very useful in the
cell images for clearing out many small specks, and in the bat images for clearing out small
objects that the algorithm picked up where the white and black backgrounds met.
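The upper-plus-lower threshold we have in mind is a band test: keep only pixels whose channels all fall inside a [lower, upper] range, so very bright plant pixels are rejected along with the dark background. A sketch, assuming per-channel bounds (OpenCV's cv2.inRange provides the same operation):

```python
import numpy as np

def band_threshold(image, lower, upper):
    # Keep pixels with every channel inside [lower, upper]; the upper
    # bound is what lets washed-out bright regions be discarded.
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    mask = np.all((image >= lower) & (image <= upper), axis=2)
    return mask.astype(np.uint8) * 255
```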
These simple thresholding operations can be surprisingly accurate when images have distinct
boundaries between objects and background, as in the bat images, but as the images become more
complex (fish in an aquarium) or lighting differences become less distinct (cells),
accuracy takes a sharp dive, and either objects fail to be picked up or far too many things are
picked up instead. These methods can be good preliminary steps to remove background and objects
that are clearly not what we are searching for, but more sophisticated algorithms are needed for fine tuning.
Collaborators : Seunghun Oh
Sources : stackoverflow.com, docs.opencv.com, opencvexamples.blogspot.com, answers.opencv.org