Sign Language Recognition using Teachable Machine, ml5.js and p5.js
Approximately 200,000 to 500,000 Americans of all ages use sign language. I wanted to find out whether Teachable Machine could be used to interpret sign language. A person who is already fluent in American Sign Language (ASL) could grab a friend and help them learn to sign using this project.
The goal of the project is to use a trained model to help interpret sign language, bridging the communication gap between those fluent in ASL and those just learning it. Because I was limited to image classification, I started training the model by capturing images of myself signing the alphabet.
Technology
Teachable Machine - Teachable Machine is a web-based tool made by Google that makes creating machine learning models easy for everyone.
ml5.js - ml5.js aims to make machine learning approachable for a broad audience of artists, creative coders, and students. The library provides access to machine learning algorithms and models in the browser, building on top of TensorFlow.js with no other external dependencies.
p5.js - p5.js is a JavaScript library for creative coding, with a focus on making coding accessible and inclusive for artists, designers, educators, beginners, and anyone else! p5.js is free and open-source because we believe software, and the tools to learn it, should be accessible to everyone.
Step 1
The first step was to train the model to recognize the alphabet by creating images of each signed letter of the alphabet, making sure to minimize background distractions. 
Step 2
Once each signed letter was recorded and classified accordingly, I trained the model and exported it to be used in my p5.js sketch.

Step 3 
Write the JavaScript code: 
https://editor.p5js.org/rdominguez7/full/96l2aRnuX
See full code here: https://editor.p5js.org/rdominguez7/sketches/96l2aRnuX
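The heart of the sketch is an ml5.js image classifier loaded from the Teachable Machine export and run continuously on the webcam feed. Below is a minimal sketch of that approach, assuming the pre-1.0 ml5.js callback API; the model URL is a placeholder for the link Teachable Machine provides when you export the project, and the drawing code is illustrative rather than copied from the sketch linked above.

let classifier;
let video;
let label = 'waiting...';

// Placeholder for the shareable link Teachable Machine gives you on export.
const modelURL = 'https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/';

function preload() {
  // Load the exported image model (model.json lives at the export URL).
  classifier = ml5.imageClassifier(modelURL + 'model.json');
}

function setup() {
  createCanvas(320, 260);
  video = createCapture(VIDEO); // webcam feed of the signed letters
  video.size(320, 240);
  video.hide();
  classifyVideo();
}

function classifyVideo() {
  // Ask the model to classify the current video frame.
  classifier.classify(video, gotResult);
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  label = results[0].label; // most confident label, e.g. the signed letter
  classifyVideo();          // keep classifying, frame after frame
}

function draw() {
  background(0);
  image(video, 0, 0);
  fill(255);
  textSize(24);
  textAlign(CENTER);
  text(label, width / 2, height - 8);
}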
Step 4 
The Results - Interpreting sign language

Conclusion
Testing the prototype proved this is a promising tool for interpreting sign language, although I did discover a limitation in using an image classifier to train the model: the letters 'J' and 'Z' both require motion to sign, so a still image cannot capture them.

To further develop this tool, I would use a Pose project instead of an Image project, coupled with ml5.js's Handpose machine learning model. Handpose is a machine-learning model that allows for palm detection and hand-skeleton finger tracking in the browser. It can detect a maximum of one hand at a time and provides 21 3D hand keypoints that describe important locations on the palm and fingers. For full support of American Sign Language interpretation, this one-hand limit would need to be expanded to two-hand detection. A minimal example of working with Handpose is sketched below.
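As a starting point for that future direction, here is a minimal sketch, assuming the older ml5.js Handpose API (the version that emits 'predict' events). It simply draws the 21 keypoints of one detected hand and is not taken from the project above.

let handpose;
let video;
let predictions = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  // Load the Handpose model and listen for predictions on the webcam feed.
  handpose = ml5.handpose(video, () => console.log('Handpose model ready'));
  handpose.on('predict', (results) => {
    predictions = results; // each result holds 21 [x, y, z] hand landmarks
  });
}

function draw() {
  image(video, 0, 0, width, height);

  // Mark each of the 21 keypoints of the (single) detected hand.
  for (const hand of predictions) {
    for (const [x, y] of hand.landmarks) {
      fill(0, 255, 0);
      noStroke();
      ellipse(x, y, 8, 8);
    }
  }
}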
