Carolyn Montague is taking CS585 and this is her web page.
September 12, 2017
The problem we examined was how to alter images through pixel iteration.
The algorithms use for loops to iterate over every pixel in the image and manipulate each pixel's color channels to create different versions of the same image.
Experiments included altering an image of the Boston skyline by changing its color channels to alter the overall image. Using a photo of myself (and a giraffe!), I also experimented with averaging the channels to produce a grayscale image, and with altering each pixel based on its 8-neighborhood to blur the image. For the stripes image, the color channels were changed in an order that produced vertical stripes.
Images created from various algorithms in code.
The strength of these methods is that the images come out with the desired alterations. A weakness is that the code could be more concise and easier to adapt to new tasks. The results show that the method is generally successful, though it is limited to a small set of operations. In the future, this code could be extended, with some work, to other image processing tasks.
Editing images is fun and interesting! It is also simpler than anticipated in terms of the algorithmic aspects; however, using C++ is an interesting new challenge.
Worked with Alana King (another CS585 student); also consulted Open Stacks for ideas on how to use the 8-neighborhood concept. Open Stacks accessed September 13, 2017.
September 21, 2017
The coding portion of this homework assignment was to take video input from a webcam and detect the hand shape or gesture being made in the video. The goal was to use template matching and related algorithms to detect the hand shape and display which shape was detected.
We looked at OpenCV to determine how to use the match template method and how to draw a rectangle around the detected hand as it moves in the video. We used some of the algorithms described during lab, including skin color detection, and we used the video HM so that the time between frames was shorter, intending to make frame-by-frame tracking of hand movement easier. Our goal was to capture four images of our own as templates and feed them into the program so that, when one of the hand shapes was detected, the bounding box around the hand would label which shape was being made. While these were the goals of our algorithm, we were only able to make the code detect hand motion and track a hand as it moves across the screen. In our rectangle algorithm, the user can choose among five different matching methods to find the best match for the detected hand.
Our experiments consisted of running the algorithm and testing whether the bounding box would track a hand as it moved across the screen. In these tests the algorithm worked as intended; however, we were unable to complete the other goals of the project because of limited time, an incomplete understanding of the proper implementation, and difficulties using C++.
The template image we were able to use successfully was the one of an open hand, but we had three more templates that we intended to use in template matching: a thumbs up, a peace sign, and a pointing finger.
The five methods we could choose among for template matching that would follow the hand in a video are shown below.
Because we were unable to get our program to actually detect particular gestures or hand shapes in the time given to complete the assignment, we could not create a proper confusion matrix for our algorithm: it would only show detection of an open hand, and none of the other three templates we planned to use.
However, if we had been able to create a confusion matrix, it would have had four rows and four columns, one for each of our templates. For each test we would have counted how often the program recognized the gesture in the video as the correct matching template, and how often it detected an incorrect match.
The strength of our method is that the rectangle follows the hand very closely and accurately as it moves across the screen. The weakness is that we were unable to complete the goals of the program within the time allotted. Additionally, the tracking window displays the hand template image being matched against the hand in the input video, but for some reason that template image appears upside down.
Overall, our code was generally successful in tracking hand movement, but it was not successful in detecting and labeling specific hand shapes because our template matching algorithm was not computing matches correctly.
In terms of potential future work, it would be useful to continue developing the algorithm so that template matching succeeds. We tried to determine why the template image was upside down but were unable to figure it out or change it. The method could also be improved so that it correctly identifies and labels the hand shape when the input video shows a particular gesture or motion. With more time, we would have continued working on the algorithm until it completed these tasks.
In conclusion, if we had been allotted more time, we would hopefully have been able to get our algorithm to complete the tasks we wanted it to.
Worked with Alana King, another CS585 student.
We looked at and adapted the OpenCV code for template matching and creating a trackbar and rectangle.