Magic: Hand Tracking JavaScript for an Amazing Hand Fire Effect
The magician walks on stage, holds out his left palm and – WHOOSH – a ball of fire appears! That’s a trick you usually see on stage, but today I am going to show you this magic in your webcam, powered by a hand tracking JavaScript library.
Let’s build a web app that can detect hands from the webcam and put your palms ON FIRE! Turn on your webcam, raise your hands, and let the flames dance in your palms.
Hand Tracking JavaScript demo
If you are browsing through an in-app browser inside a social media app, you will need to open the page in Safari (iPhone) or Chrome (Android)
GitHub repository
You can download the complete code of the above demo from the link below:
Stream the webcam on a desktop computer or mobile device; on mobile there is a function to switch between the front and back cameras
Implementation
It is so cool, right? Did you have fun showing this magic fire to your friends? I hope you liked it. This demo uses an advanced technique called hand tracking, which can identify human hands in an image or video stream.
Thanks to handtrack.js, an awesome JavaScript library that uses the TensorFlow.js object detection API to enable hand tracking in the browser. If you are interested in building your own hand tracking app, follow along below for the journey of how I implemented it.
# Step 1 : Include handtrack.js
First of all, simply include the script handtrack.js in the <head> section of the HTML file.
<html>
<head>
  <script src="https://cdn.jsdelivr.net/npm/handtrackjs/dist/handtrack.min.js"></script>
</head>
</html>
Or you can install it via npm for use in a TypeScript / ES6 project
npm install --save handtrackjs
# Step 2 : Stream webcam to browser
To stream your webcam into the browser, I use the browser API navigator.mediaDevices.getUserMedia. To find out more details about that, please refer to my previous blog:
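As a minimal sketch (not the exact code from my previous blog), streaming the webcam boils down to building a constraints object and handing it to getUserMedia. The helper names `buildConstraints` and `startWebcam` are illustrative, not part of any library:

```javascript
// Build the getUserMedia constraints object.
// facingMode "user" = front camera, "environment" = back camera on mobile.
function buildConstraints(facingMode) {
  return { audio: false, video: { facingMode: facingMode, width: 640, height: 480 } };
}

// Attach the camera stream to a <video> element (browser-only sketch).
async function startWebcam(videoElement, facingMode) {
  // navigator.mediaDevices.getUserMedia prompts the user for camera
  // permission and resolves to a MediaStream.
  const stream = await navigator.mediaDevices.getUserMedia(buildConstraints(facingMode));
  videoElement.srcObject = stream;
  return new Promise(resolve => {
    videoElement.onloadedmetadata = () => {
      videoElement.play();
      resolve(videoElement);
    };
  });
}
```

Switching cameras on mobile then amounts to stopping the current stream and calling `startWebcam` again with the other `facingMode` value.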
# Step 3 : Load HandTrack Model
In order to perform hand tracking, we first need to load the pre-trained HandTrack model by calling handTrack.load(modelParams). HandTrack comes with a few optional model parameters:
| parameter | default value | description |
| --- | --- | --- |
| flipHorizontal | true | flip the image horizontally, e.g. for mirrored webcam video |
| imageScaleFactor | 0.7 | reduce input image size for gains in speed |
| maxNumBoxes | 20 | maximum number of boxes to detect |
| iouThreshold | 0.5 | IoU threshold for non-max suppression |
| scoreThreshold | 0.99 | confidence threshold for predictions |
async function loadModel() {
  $(".loading").removeClass('d-none');
  // mirror the image only when the front ("user") camera is active
  var flipWebcam = (webcam.facingMode == 'user');
  return new Promise((resolve, reject) => {
    const modelParams = {
      flipHorizontal: flipWebcam,
      maxNumBoxes: 20,
      iouThreshold: 0.5,
      scoreThreshold: 0.8
    };
    handTrack.load(modelParams).then(mdl => {
      model = mdl;
      $(".loading").addClass('d-none');
      resolve();
    }).catch(err => {
      reject(err);
    });
  });
}
# Step 4 : Hand detection
Next, we feed the webcam stream through the HandTrack model to perform hand detection by calling model.detect(video). It takes an input image element (which can be an img, video, or canvas tag) and returns an array of bounding boxes with class name and confidence level.
function startDetection() {
  model.detect(webcamElement).then(predictions => {
    console.log("Predictions: ", predictions);
    showFire(predictions);
    // schedule the next detection pass on the next animation frame
    cameraFrame = requestAnimationFrame(startDetection);
  });
}
The returned predictions array looks like:
[{
bbox: [x, y, width, height],
class: "hand",
score: 0.8380282521247864
}, {
bbox: [x, y, width, height],
class: "hand",
score: 0.74644153267145157
}]
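Since the bbox is given as [x, y, width, height], the center of a detected hand can be derived with a small helper. `handCenter` is a hypothetical name for illustration; it returns [centerY, centerX] so that index 0 is the vertical coordinate:

```javascript
// Compute the center of a hand from its bounding box.
// bbox format: [x, y, width, height] (as returned by model.detect).
function handCenter(bbox) {
  const [x, y, width, height] = bbox;
  // returned as [centerY, centerX]: index 0 is vertical, index 1 is horizontal
  return [y + height / 2, x + width / 2];
}

handCenter([100, 40, 200, 160]); // → [120, 200]
```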
# Step 5 : Show magic fire
From the above function we get the bounding box of each hand’s position; now we can use it to show the fire GIF image in your hand.
HTML
Overlay the canvas layer on top of the webcam element.
<video id="webcam" autoplay playsinline width="640" height="480"></video>
<div id="canvas" style="width: 640px; height: 480px;"></div>
JavaScript
Set the size and position of the fireElement, and append it to the canvas layer.
function showFire(predictions){
  // rebuild the fire elements when the number of detected hands changes
  if (handCount != predictions.length) {
    $("#canvas").empty();
    fireElements = [];
  }
  handCount = predictions.length;
  for (let i = 0; i < predictions.length; i++) {
    if (fireElements.length > i) {
      fireElement = fireElements[i];
    } else {
      fireElement = $("<div class='fire_in_hand'></div>");
      fireElements.push(fireElement);
      fireElement.appendTo($("#canvas"));
    }
    // bbox is [x, y, width, height]; hand_center_point is [centerY, centerX]
    var bbox = predictions[i].bbox;
    var hand_center_point = [bbox[1] + bbox[3] / 2, bbox[0] + bbox[2] / 2];
    var fireSizeWidth = fireElement.css("width").replace("px", "");
    var fireSizeHeight = fireElement.css("height").replace("px", "");
    var firePositionTop = hand_center_point[0] - fireSizeHeight;
    var firePositionLeft = hand_center_point[1] - fireSizeWidth / 2;
    fireElement.css({top: firePositionTop, left: firePositionLeft, position: 'absolute'});
  }
}
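The positioning math can be isolated as a pure function: the flame is centered horizontally on the hand, and its bottom edge sits at the hand’s center point. `firePosition` is an illustrative name, not part of the demo code:

```javascript
// Compute the CSS top/left for the fire element.
// centerPoint is [centerY, centerX]; fireWidth/fireHeight are the GIF's pixel size.
function firePosition(centerPoint, fireWidth, fireHeight) {
  const [centerY, centerX] = centerPoint;
  return {
    top: centerY - fireHeight,      // bottom of the GIF rests on the hand center
    left: centerX - fireWidth / 2   // horizontally centered on the hand
  };
}

firePosition([400, 320], 300, 300); // → { top: 100, left: 170 }
```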
CSS
Set the background-image to the fire.gif image.
.fire_in_hand {
  width: 300px;
  height: 300px;
  background-image: url(../images/fire.gif);
  background-position: center center;
  background-repeat: no-repeat;
  background-size: cover;
}
That’s pretty much it for the code! Now you should be good to start showing the magic fire in your hands!
Final result
Switch on the webcam, and the browser will ask you for permission to access the camera. Click Allow.
Wait a few seconds for the model to load, then raise your hands in front of the webcam, and here we go: it’s Magic Time!
Thank you for reading. If you like this article, please share it on Facebook or Twitter. Let me know in the comments if you have any questions. Follow me on Medium, GitHub and LinkedIn. Support me on Ko-fi.