By Lang Gao and Seunghun Oh
Then, the bot will approximate the size of the current pot and of each player's chip stack in order to estimate probabilities for its next action. Finally, the bot will attempt to identify each player's facial expression and body language to infer their likely intentions. After gathering all the necessary data, the bot will determine and print out its next move.
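The decision step described above can be sketched as a small function. Everything here is a hypothetical illustration: the helper name `choose_action`, the assumed 10%-of-pot bet sizing, and the emotion adjustment values are our own placeholders, not a finished strategy; the pot size, stack size, and emotion label would come from the vision pipeline.

```python
# Hedged sketch of the bot's decision loop. The thresholds and the assumed
# bet sizing are illustrative placeholders, not a tuned strategy.

def choose_action(pot_size, stack_size, win_probability, opponent_emotion):
    """Pick a move from rough pot odds plus an emotion-based adjustment."""
    # Pot odds: fraction of the total pot we would have to invest to continue.
    call_cost = min(stack_size, pot_size * 0.1)  # assumed bet sizing
    pot_odds = call_cost / (pot_size + call_cost)

    # Nudge our estimate if the opponent looks happy (strong) or sad (weak).
    adjusted = win_probability
    if opponent_emotion == "happy":
        adjusted -= 0.05
    elif opponent_emotion == "sad":
        adjusted += 0.05

    if adjusted > pot_odds + 0.2:
        return "raise"
    if adjusted >= pot_odds:
        return "call"
    return "fold"

print(choose_action(pot_size=100, stack_size=500,
                    win_probability=0.4, opponent_emotion="sad"))  # → raise
```

The emotion label only nudges the probability estimate rather than overriding it, so a misread expression cannot flip a clearly correct fold into a raise.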
The desired outcome of this project is to show the cropped and labeled values of the cards, and to label the facial features that indicate happiness or sadness in a person playing poker. The detected facial expression and the corresponding bot action should then be output.
The following are two sample images showing the angle the video will be shot from and how the cards and chips will look from that angle. The samples include bounding boxes for card suit, number, chip count, and face:
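Once bounding boxes like those in the samples are available, extracting each labeled region from a frame is simple array slicing. This is a minimal sketch under the assumption that boxes arrive as `(x, y, w, h)` tuples; the box coordinates and the `crop_regions` helper are invented for illustration.

```python
# Hedged sketch: crop the labeled regions (card rank/suit, chips, face)
# out of a frame, assuming each bounding box is an (x, y, w, h) tuple.
import numpy as np

def crop_regions(frame, boxes):
    """Return {label: cropped sub-image} for each labeled bounding box."""
    crops = {}
    for label, (x, y, w, h) in boxes.items():
        crops[label] = frame[y:y + h, x:x + w]
    return crops

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a video frame
boxes = {"card_rank": (50, 60, 30, 40), "face": (300, 20, 120, 150)}
crops = crop_regions(frame, boxes)
print(crops["face"].shape)  # → (150, 120, 3)
```

Each crop can then be passed to the appropriate recognizer (rank/suit classifier, chip counter, or expression detector).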
During our initial planning and development, Lang and I did some background research to see whether similar methods had been attempted previously. We first found a scholarly article by Black and Yacoob on facial expressions. The article describes selecting facial features, such as the eyebrows and the mouth, and analyzing their curvature to infer a person's mood. This technique (or perhaps another emotion-detection method) will help determine the bot's next course of action.
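The curvature idea can be illustrated by fitting a parabola to feature landmarks and reading the sign of the quadratic coefficient. This is only a sketch of the principle, not Black and Yacoob's exact model; the landmark points below are made up, and a real pipeline would obtain them from a face-landmark detector.

```python
# Hedged sketch of curvature-based mood cues: fit y = a*x^2 + b*x + c to
# mouth landmark points and inspect the sign of the coefficient a.
import numpy as np

def mouth_curvature(points):
    """Fit a parabola to (x, y) mouth points; return the quadratic coefficient."""
    xs = np.array([p[0] for p in points], dtype=float)
    ys = np.array([p[1] for p in points], dtype=float)
    a, _b, _c = np.polyfit(xs, ys, 2)
    return a

# In image coordinates y grows downward, so for a smile the mouth corners sit
# higher (smaller y) than the center, giving a negative coefficient.
smile = [(0, 5), (2, 8), (4, 9), (6, 8), (8, 5)]
frown = [(0, 9), (2, 6), (4, 5), (6, 6), (8, 9)]
print(mouth_curvature(smile) < 0)  # → True
print(mouth_curvature(frown) > 0)  # → True
```

The same fit applied to eyebrow landmarks would give a second cue, and the two signs together could vote on a happy/sad label for the bot to use.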
The second article we discovered, by Martins, Reis, and Teofilo, describes an approach to recognizing cards and counting chips in a poker-game environment. It addresses exactly the setting we would like to work in.
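One common way to recognize a cropped card rank or suit is simple template matching. We do not claim this is exactly the method of Martins, Reis, and Teofilo; it is just a toy illustration of the idea, with 2x2 arrays standing in for real rank templates.

```python
# Hedged sketch of template matching for rank/suit recognition: compare a
# cropped patch against stored templates and pick the closest one.
import numpy as np

def match_rank(patch, templates):
    """Return the template label with the smallest pixel-wise difference."""
    best_label, best_score = None, float("inf")
    for label, template in templates.items():
        score = np.abs(patch.astype(float) - template.astype(float)).sum()
        if score < best_score:
            best_label, best_score = label, score
    return best_label

templates = {
    "A": np.array([[0, 255], [255, 0]], dtype=np.uint8),
    "K": np.array([[255, 0], [0, 255]], dtype=np.uint8),
}
patch = np.array([[10, 240], [250, 5]], dtype=np.uint8)  # noisy "A"
print(match_rank(patch, templates))  # → A
```

In practice the patches would first be resized and binarized so that lighting and scale differences do not dominate the pixel-wise score.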
From these articles, we can see that parts of what Lang and I have planned have already been attempted with good success. However, it is entirely up to us to implement the code accurately.