Creating an Android port of the CameraMouse suite proved far more challenging than I anticipated; even with an extension, I was not able to fully port over the suite's functionality. That said, facial template tracking is now mostly working properly, though some tuning is still necessary: the Windows version of the suite relies on hardcoded magic numbers that do not hold up well and need to be retuned for higher-resolution video. As such, the current application demonstrates tracking by drawing a blue dot over the feature currently being tracked on the user's face.
The project faced numerous challenges and setbacks, some of which were the result of my not fully grasping the scope of what I had set out to do, and some of which stemmed from poor documentation and bugs in Android Studio's native code support.
Initially, I had planned to simply port the cross-platform Qt version of the suite to Android, but quickly ran into several issues. First, I discovered that the suite was written for Qt4, while Android support was not added to Qt until Qt5. After porting the project from Qt4 to Qt5, I found that, while it now built, it also instantly segfaulted, owing to numerous issues with cleanly running Qt5 code on Android.
Even after debugging these issues, which seemed mostly to stem from Qt's support for binding actions to interface elements, I found that the application still failed to run as expected. Further debugging revealed that the Qt Camera implementation used in the project was failing to interface properly with the Android device camera; research online indicated that this is a known issue, and that no fix exists other than switching to an entirely different camera implementation, which I was not able to access via the C++ API.
At this point, I decided that attempting to further debug Qt was futile, and gave up on this particular approach.
Having failed to port the Qt application to Android, I decided the next best approach would be to strip the actual computer vision logic out of the CameraMouse source and build it as a native code library for Android, which could be called from a stub application via the Java Native Interface (JNI).
Building the native code library itself proved relatively simple, as it was mostly a matter of stripping out some Qt-specific dependencies and replacing them with standard library functions.
Unfortunately, at this point the project hit another snag: although I had built the native code library, and had even tested it against a standalone Java application, getting it to integrate properly into an Android Studio project was far from simple, as there appear to be numerous bugs in Android Studio's support for importing native code libraries, and there is virtually no up-to-date documentation on the topic.
I anticipated that this portion of the project would be relatively simple; unfortunately, once more I was overly optimistic. Logically speaking, the stub application doesn't have to do very much: it simply needs to initialize the tracking logic via a native code call, call another native function every time a new video frame comes in from the camera, and update the display with the location of the tracked feature.
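The Java side of that bridge can be sketched as follows; note that the class name, library name, and method signatures here are my own illustrative choices, not the project's actual API:

```java
// TrackerBridge.java -- hypothetical JNI bridge for the native tracking library.
// All names here (class, library, method signatures) are illustrative.
class TrackerBridge {
    static {
        // Guarded so the class can still load off-device (e.g. in a desktop JVM);
        // on Android this would load a hypothetical libcmtracker.so from the APK.
        try {
            System.loadLibrary("cmtracker");
        } catch (UnsatisfiedLinkError e) {
            System.err.println("native tracker library not available");
        }
    }

    // Point the native tracker at template files on the local filesystem.
    static native void initTracker(String templateDir);

    // Called once per camera frame; returns the (x, y) of the tracked feature.
    static native int[] processFrame(byte[] nv21, int width, int height);
}
```

The stub's camera callback would then just forward each frame's byte array to `processFrame` and redraw the dot at the returned coordinates.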
In practice, however, properly initializing the tracker involved writing logic to make the template files (which are packaged into the app and have no real location in the filesystem) accessible to native code (which can only easily see the local filesystem), which meant delving further into the arcane details of Android development.
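The usual workaround is to copy each packaged template out to the app's private files directory at startup and hand the native code that path. The copying core is plain java.io; in the sketch below the class and method names are my own, and the InputStream/destination parameters stand in for what on Android would come from `context.getAssets().open(...)` and `context.getFilesDir()`:

```java
import java.io.*;

class AssetExporter {
    // Copy an InputStream out to a real file on the local filesystem, so that
    // native code can open it by path. On Android the InputStream would come
    // from the app's AssetManager and destDir from the app's files directory;
    // here they are parameters so the helper stays platform-neutral.
    static File exportToFile(InputStream in, File destDir, String name) throws IOException {
        File out = new File(destDir, name);
        try (OutputStream os = new FileOutputStream(out)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                os.write(buf, 0, n);
            }
        }
        return out;
    }
}
```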
Another challenge I faced was getting the images into a format that OpenCV could read. The only camera output format guaranteed to be available on all Android devices is YUV NV21, and while OpenCV does possess a function for converting that format to BGR, doing so given only a raw array of YUV bytes requires a moderately finicky process.
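The finicky part is the layout: NV21 stores a full-resolution Y plane followed by a half-resolution interleaved V/U plane, so the raw array describes height * 3/2 rows of width bytes, which on the C++ side must be wrapped in a single-channel cv::Mat of that shape before calling cv::cvtColor with COLOR_YUV2BGR_NV21. The per-pixel arithmetic that conversion performs looks roughly like the following (a standard integer approximation of the BT.601 transform, not the project's actual code):

```java
class Nv21 {
    // For pixel (x, y) in an NV21 buffer of the given width and height:
    //   Y = data[y * width + x]
    //   V = data[width * height + (y / 2) * width + (x / 2) * 2]      (V first!)
    //   U = data[width * height + (y / 2) * width + (x / 2) * 2 + 1]
    // This helper applies the BT.601 integer transform to one (Y, U, V) triple.
    static int[] yuvToRgb(int y, int u, int v) {
        int c = y - 16, d = u - 128, e = v - 128;
        int r = clamp((298 * c + 409 * e + 128) >> 8);
        int g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8);
        int b = clamp((298 * c + 516 * d + 128) >> 8);
        return new int[] { r, g, b };
    }

    static int clamp(int x) {
        return x < 0 ? 0 : (x > 255 ? 255 : x);
    }
}
```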
I haven't yet managed to finish the trickier portions of the stub application: right now it can serve as a demo, but it cannot run itself in the background or send hardware input events, so the application currently has no real utility.
There are two large pieces of work that would need to be completed before this project would be usable. First, the tracking code needs to be properly tuned for a higher-resolution camera, as right now its performance leaves much to be desired. Some interesting optimizations might be possible by using the hardware facial detection support in the device camera to generate a target region for template matching, but it isn't clear how much this would help. Additionally, as mentioned above, the stub application needs more work.
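As a sketch of that optimization, the face bounding box reported by the camera's face detector could be padded and clamped into a search window, so template matching only scans a small region instead of the whole frame. The names and padding value below are illustrative, not from the project:

```java
class SearchRegion {
    // A rectangle in pixel coordinates.
    final int x, y, w, h;

    SearchRegion(int x, int y, int w, int h) {
        this.x = x; this.y = y; this.w = w; this.h = h;
    }

    // Pad a detected face box by `pad` pixels on every side, clamped to the
    // frame bounds, producing the region to hand to the template matcher.
    static SearchRegion around(int fx, int fy, int fw, int fh,
                               int pad, int frameW, int frameH) {
        int x0 = Math.max(0, fx - pad);
        int y0 = Math.max(0, fy - pad);
        int x1 = Math.min(frameW, fx + fw + pad);
        int y1 = Math.min(frameH, fy + fh + pad);
        return new SearchRegion(x0, y0, x1 - x0, y1 - y0);
    }
}
```

Since template matching cost scales with the search area, even a generously padded face box should cut the per-frame work substantially relative to scanning a full frame, though as noted above the actual benefit would need to be measured.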