This assignment tasked us with finding objects in a video feed. The video shows
two tanks, one containing eels and the other containing crabs.
We were tasked with producing a segmentation that displays the eels and crabs.
In addition, we were to determine characteristics of the animals, such as
the head, body, and tail of the eels, and the centroid of each crab.

To create bounding boxes we sharpened a frame of the video, applied a
Laplacian filter to it, applied a distance transform to find the
peaks, and ran a watershed algorithm to find the boundaries of the boxes.
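That pipeline can be sketched roughly as below. This is our own NumPy reconstruction, not the code we ran: the convolution and a Chebyshev distance transform are hand-rolled stand-ins for library routines, and the kernels and the synthetic one-tank frame are purely illustrative.

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 'same' 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Standard 3x3 sharpening kernel and 4-neighbour Laplacian kernel.
SHARPEN = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def distance_transform(mask):
    """Chebyshev distance to the nearest background pixel, computed by
    repeatedly eroding the foreground and counting the passes."""
    dist = np.zeros(mask.shape, dtype=int)
    current = mask.astype(bool)
    while current.any():
        dist += current.astype(int)
        padded = np.pad(current, 1)
        eroded = np.ones_like(current)
        for di in range(3):          # a pixel survives erosion only if its
            for dj in range(3):      # full 3x3 neighbourhood is foreground
                eroded &= padded[di:di + mask.shape[0], dj:dj + mask.shape[1]]
        current = eroded
    return dist

# Tiny synthetic frame: one bright square "tank" on a dark wall.
frame = np.zeros((9, 9))
frame[2:7, 2:7] = 200.0

sharp = convolve2d(frame, SHARPEN)        # sharpen the frame
edges = convolve2d(sharp, LAPLACIAN)      # edge response around the tank
mask = sharp > 100                        # bright region = candidate tank
dist = distance_transform(mask)           # peaks sit at region centres
peak = np.unravel_index(np.argmax(dist), dist.shape)  # seed for watershed
```

The distance-transform peak marks the centre of each bright region, which is what a watershed would then grow outward from to find the box boundary.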
For the frame analysis we looked only at the area inside the bounding boxes.
To find the eels we used a motion detection algorithm that takes the difference
between two sequential frames of the video; from there we analyzed whether
an object was entering or leaving a pixel, and tried to color it accordingly.
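The differencing step can be sketched in a few lines of NumPy. This is a minimal reconstruction, not our actual code; the threshold value and the frame contents are made up.

```python
import numpy as np

MOTION_THRESHOLD = 40  # illustrative value; in practice we tuned this by hand

def motion_mask(prev_frame, curr_frame, thresh=MOTION_THRESHOLD):
    """Per-pixel motion: absolute RGB difference, summed over channels,
    compared against a fixed threshold."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    return diff.sum(axis=2) > thresh

# Two tiny 4x4 RGB frames: a dark "eel" patch shifts one pixel right.
prev_frame = np.full((4, 4, 3), 180, dtype=np.uint8)  # sandy background
curr_frame = prev_frame.copy()
prev_frame[1:3, 0:2] = 30   # eel at columns 0-1
curr_frame[1:3, 1:3] = 30   # eel at columns 1-2

mask = motion_mask(prev_frame, curr_frame)
# motion registers where the eel entered (column 2) and left (column 0),
# but not where it covered the same pixels in both frames (column 1)
```

Note that the overlap region produces no motion at all, which is why a slow or stationary eel disappears from this kind of mask.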
For our experiments we tried various thresholds for detecting
motion, aiming to capture as much of the animals' motion as possible while
limiting the amount of noise that was picked up. We also tried methods to lower
the delay of processing each image, such as searching only the areas inside the
bounding boxes instead of the entire image, but this did not noticeably improve
processing time.

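Restricting the search to the boxes looks roughly like the sketch below (the box coordinates and threshold are invented for the example; the real boxes came from the watershed step).

```python
import numpy as np

# Hypothetical bounding boxes (row0, row1, col0, col1) for the two tanks.
BOXES = [(10, 60, 5, 80), (10, 60, 90, 165)]

def motion_in_boxes(prev_frame, curr_frame, boxes=BOXES, thresh=40):
    """Difference and threshold only the pixels inside each bounding box,
    rather than the whole frame."""
    masks = []
    for r0, r1, c0, c1 in boxes:
        diff = np.abs(curr_frame[r0:r1, c0:c1].astype(int)
                      - prev_frame[r0:r1, c0:c1].astype(int))
        masks.append(diff > thresh)
    return masks

prev_frame = np.full((70, 170), 200, dtype=np.uint8)
curr_frame = prev_frame.copy()
curr_frame[20:25, 30:40] = 30   # movement inside the first tank only

masks = motion_in_boxes(prev_frame, curr_frame)
```

Since NumPy slicing is a view rather than a copy, the saving here is only in the differencing and thresholding work, which may explain why the speed-up we saw was small.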
Our program was able to find bounding boxes for the two tanks, but it also picked
up a rectangle of light to the left of the tanks, as well as a large patch of white
wall on the far left of the video. Because we used the brightness of the tanks to
find their boundaries, we also ended up picking up these other areas with very high
brightness values. For our object detection we were able to pick up the eels when
they were moving, but if they moved very slowly or stayed still, the movement
threshold we used would not pick them up. In addition, because our movement
detection used an absolute difference of the RGB values of two images, when the
eels moved to the edges of the tank, parts of the background matched the color of
the eels closely, which prevented our program from differentiating and catching
the eels even though they were moving at a reasonable speed.

To extract the head, body, and tail of an eel, we kept two binary mats holding
the results of thresholding the absolute differences of successive frame pairs:
one for the current and previous frames, and one for the pair before that.
We then classified each pixel. If a pixel showed motion in the current difference
but none in the previous one, we assumed something had just moved into it: the head.
If it showed motion in the previous difference but none in the current one, we
assumed something had moved out of it: the tail. If a pixel that had earlier
registered a move-in now showed no motion at all, we concluded that something had
moved in and stayed: the body. The reasoning is that the head leads the eel, so it
is the first to enter new pixels; the body then moves into pixels the head has
already entered, producing no motion under our detection algorithm; and the tail
produces motion once more when it finally vacates a pixel.
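The two-mask bookkeeping can be sketched on a synthetic one-dimensional eel (the names, frames, and threshold are ours, not the original code). The sketch also shows where the labels get mixed: with absolute differencing, the eel's trailing edge satisfies the "head" condition just as the leading edge does.

```python
import numpy as np

THRESH = 40  # motion threshold; the value is illustrative

def diff_mask(a, b, thresh=THRESH):
    """Binary motion mask from the absolute difference of two frames."""
    return np.abs(a.astype(int) - b.astype(int)) > thresh

# Three successive grayscale rows: a dark eel (30) on sand (200)
# sliding one pixel right per frame.
f0 = np.full(8, 200); f0[1:4] = 30   # eel at pixels 1-3
f1 = np.full(8, 200); f1[2:5] = 30   # eel at pixels 2-4
f2 = np.full(8, 200); f2[3:6] = 30   # eel at pixels 3-5

prev_motion = diff_mask(f0, f1)   # older frame pair
curr_motion = diff_mask(f1, f2)   # newer frame pair

# the classification rules described above:
head_like = curr_motion & ~prev_motion   # motion just appeared here
tail_like = prev_motion & ~curr_motion   # motion just stopped here

# head_like fires at pixel 5 (the real head) but also at pixel 2, where
# the trailing edge produced fresh motion -- the mix of move-ins and
# move-outs that scrambled our colouring
```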
This algorithm did not behave as expected, and we were unable to get it to work
properly. We could highlight the entire eel while it was moving, but we could not
detect the individual components of the eel, and we had difficulty keeping track
of areas that the eel occupied without producing any motion.
The coloring also did not work properly, as we failed to take into account
that the differencing would not produce all move-ins in the head area and
all move-outs in the tail area; with the mix of ins and outs we picked up,
the final result became a mix of colors. We tried to resolve this by sampling
the surrounding pixels to see whether a majority of them were move-ins or
move-outs, but that overloaded the system's memory and produced memory exceptions.
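The neighborhood vote we attempted can be sketched as follows. This is our reconstruction rather than the original code, and the masks are synthetic; expressed as whole-array shifts it avoids per-pixel work, though we cannot say whether that would have cured the memory problem we hit.

```python
import numpy as np

def majority_filter(in_mask, out_mask):
    """For each pixel, count move-in vs move-out labels in its 3x3
    neighbourhood and keep whichever label is in the majority."""
    def neighbour_count(mask):
        padded = np.pad(mask.astype(int), 1)
        total = np.zeros(mask.shape, dtype=int)
        for di in range(3):
            for dj in range(3):
                total += padded[di:di + mask.shape[0], dj:dj + mask.shape[1]]
        return total
    ins, outs = neighbour_count(in_mask), neighbour_count(out_mask)
    return ins > outs, outs > ins

# Synthetic labels: move-ins in the top rows, move-outs in the bottom row.
in_mask = np.zeros((4, 4), dtype=bool); in_mask[0:2, :] = True
out_mask = np.zeros((4, 4), dtype=bool); out_mask[3, :] = True

maj_in, maj_out = majority_filter(in_mask, out_mask)
```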
For the crabs, the difference in color between them and the background was too
small to pick up without introducing too much noise into the algorithms. We tried
a few other ways to detect the crabs, but were unsuccessful in finding a way to
pull them out of all the interference that occurred.

Our algorithm was not the best fit for this task: using color as the main basis
for detecting motion was adequate for the eels but terrible for the crabs. Even
for the eels, there were times when we could not get enough contrast to detect
them properly. We believe that for objects with more pronounced differences from
their backgrounds our algorithm could have produced decent results, but for this
situation we chose a method that did not quite match the circumstances.