Gesture control technology is already used in many devices today, such as the Xbox Kinect and the Microsoft Surface. However, these devices still face key limitations that hinder the user experience. The Microsoft Surface and similar devices work only at very short range, on the order of a couple of inches, while the Kinect provides the range but suffers from response delays. The key goal of this project is to design a device that uses gesture control technology to recognize users and their gestures at ranges of up to two feet with minimal delay in response time.
Ultrasonic sensors will be used to measure the distance from the user to the device. If the user is detected to be too close or too far away, the program will prompt the user to step back or move closer. Once the user falls within the ideal range of the ultrasonic sensors, an array of powerful infrared LEDs will switch on to illuminate the hand. The intensity of the light from the infrared LEDs will make the hand appear brighter than any other object around it.
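As a minimal sketch of this range-gating step, the logic below converts an ultrasonic echo time to a distance and decides which prompt to show. The threshold values are illustrative assumptions (the far limit is set to roughly the project's two-foot target); they are not taken from the project specification, and a real build would read the echo pulse from GPIO rather than receive it as a number.

```python
# Hypothetical range gate for the ultrasonic sensor stage.
# NEAR_LIMIT_CM and FAR_LIMIT_CM are assumed values for illustration.

NEAR_LIMIT_CM = 15.0   # assumed minimum usable distance
FAR_LIMIT_CM = 61.0    # ~2 ft, the project's stated maximum range

def echo_time_to_cm(echo_seconds: float) -> float:
    """Convert an ultrasonic echo round-trip time to distance in cm.

    Sound travels ~34300 cm/s at room temperature; the echo covers
    the distance twice (out and back), hence the division by 2.
    """
    return echo_seconds * 34300.0 / 2.0

def range_prompt(distance_cm: float) -> str:
    """Return the prompt to show the user for a measured distance."""
    if distance_cm < NEAR_LIMIT_CM:
        return "step back"
    if distance_cm > FAR_LIMIT_CM:
        return "move closer"
    return "in range"  # safe to switch on the IR LED array
```

Only when `range_prompt` reports "in range" would the program enable the infrared LED array and start the camera stage.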
After the hand has been illuminated, the Chameleon 2 camera will capture it optically. The camera will be fitted with a band-pass filter that passes only the infrared band of the light spectrum. Once the camera captures a frame, the Raspberry Pi 3 performs the image processing, applying motion-capture and noise-removal algorithms to track the motion of the hand clearly and without delays in response.
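To illustrate the segmentation side of this processing, the sketch below binarizes a band-pass-filtered frame by intensity and then applies a 3x3 majority vote to suppress speckle noise. The threshold value and the majority filter are stand-in assumptions, not the project's actual algorithms; the frame is modeled as a plain 2-D list of 8-bit pixel values for self-containment.

```python
# Illustrative segmentation for the IR-illuminated hand.
# THRESHOLD is an assumed value: with the IR LEDs on and the band-pass
# filter in place, the hand's pixels should sit near saturation.

THRESHOLD = 200

def segment(frame):
    """Binarize the frame: 1 where the hand's IR reflection is bright."""
    return [[1 if px >= THRESHOLD else 0 for px in row] for row in frame]

def denoise(mask):
    """3x3 majority vote: drop isolated speckle, fill small pinholes."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            votes = sum(
                mask[y + dy][x + dx]
                for dy in (-1, 0, 1)
                for dx in (-1, 0, 1)
                if 0 <= y + dy < h and 0 <= x + dx < w
            )
            out[y][x] = 1 if votes >= 5 else 0
    return out
```

In a real pipeline this per-frame mask would feed the motion-capture stage, which compares masks across frames to extract the hand's trajectory.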
The device will map the number of fingers held up to the following actions:
5 fingers held up: User moves the mouse cursor with hand movement
4 fingers held up: User can "swipe" left and right to press the respective arrow keys, e.g. to browse a picture slideshow (not shown)
3 fingers held up: Performs a left click on the screen
3 fingers held for 3 or more seconds: User can drag-select with hand movement (not shown)
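The gesture table above can be sketched as a simple dispatcher. The action names here are placeholders of my own; a real build would invoke a mouse/keyboard automation layer (such as a uinput binding on the Raspberry Pi) instead of returning strings.

```python
# Hypothetical dispatcher for the gesture table above.
# Action names are illustrative placeholders.

HOLD_FOR_DRAG_S = 3.0  # from the gesture list: 3 fingers held >= 3 s

def gesture_action(fingers: int, held_seconds: float = 0.0) -> str:
    """Map a detected finger count (and hold time) to an action name."""
    if fingers == 5:
        return "move_cursor"
    if fingers == 4:
        return "swipe_arrow_keys"
    if fingers == 3:
        if held_seconds >= HOLD_FOR_DRAG_S:
            return "drag_select"
        return "left_click"
    return "no_op"  # unrecognized pose: ignore the frame
```

Keeping the mapping in one pure function makes it easy to extend with new gestures and to test independently of the camera pipeline.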