This is a study project I developed to build a real-time hand-tracking program and to better understand how to implement it in future projects.
Most of this code was adapted from this informative video created by Murtaza Hassan.
This project uses the MediaPipe framework. MediaPipe is a framework developed by Google that provides some amazing models, letting us quickly get started with fundamental AI problems such as Face Detection, Object Detection, Hair Segmentation, and much more!
That said, the model I'll be working with for this project is Hand Tracking. Basically, it combines two main modules: the Palm Detection Model and the Hand Landmark Model.
As the name suggests, the Palm Detection Model detects the user's hands and provides a cropped image of each hand. From there, the Hand Landmark Model finds 21 different landmarks on this cropped image of the hand (like the image below):
HandTrackingMin.py contains the minimum code required to run the Hand Tracking program. This code can (and will!) be changed and improved in future projects.
HandTrackingModule was created so that, the next time I need hand tracking inside another project, I won't have to write it all again! By importing HandTrackingModule and "just" asking for the 21 pairs of cx and cy values, the project can make use of the algorithm (just like I did in HandTrackingWithModule.py):
```python
import HandTrackingModule as htm

detector = htm.handDetector()
img = detector.findHands(img)
landmarksList = detector.findPositions(img)
```
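MediaPipe reports each landmark with normalized x and y coordinates in the 0–1 range, so a conversion step is needed to get the pixel values cx and cy mentioned above. The helper below is a minimal sketch of that conversion (the function name and the `[id, cx, cy]` list format are my assumptions about what `findPositions` returns, not taken from the module itself):

```python
def to_pixel_coords(normalized_landmarks, img_width, img_height):
    """Convert normalized (0..1) landmark coordinates to pixel positions.

    normalized_landmarks: list of (x, y) tuples, one per landmark.
    Returns a list of [id, cx, cy] entries (assumed format, for illustration).
    """
    positions = []
    for lm_id, (x, y) in enumerate(normalized_landmarks):
        # Scale the normalized coordinate by the image size and round down
        # to an integer pixel position.
        cx, cy = int(x * img_width), int(y * img_height)
        positions.append([lm_id, cx, cy])
    return positions
```

For example, a landmark at (0.5, 0.5) on a 640x480 frame maps to the pixel (320, 240).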
As next steps, I intend to use this "simple" hand-tracking project in other, more complex personal projects such as:
- Finger Counter

- Gesture Control of the system's volume

- Gesture Control of the mouse
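As a taste of the Finger Counter idea, the sketch below counts raised fingers from the 21 landmarks. It assumes each landmark is an (x, y) pair in image coordinates (y grows downward) and uses the standard MediaPipe hand indices, where the fingertips are 4, 8, 12, 16, 20; the thumb is skipped here because it folds sideways rather than down, so it needs a different check:

```python
# MediaPipe hand landmark indices (index, middle, ring, pinky):
TIP_IDS = [8, 12, 16, 20]   # fingertip landmarks
PIP_IDS = [6, 10, 14, 18]   # the joint two positions below each tip

def count_raised_fingers(landmarks):
    """landmarks: list of 21 (x, y) pairs in image coordinates.

    A finger counts as "up" when its tip is higher on the image
    (smaller y) than the joint below it. The thumb is ignored in
    this simplified sketch.
    """
    count = 0
    for tip, pip in zip(TIP_IDS, PIP_IDS):
        if landmarks[tip][1] < landmarks[pip][1]:
            count += 1
    return count
```

A real finger counter would feed this function the positions returned by the module, frame by frame.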
