Signify is Alexa for the deaf and mute.
Signify is an innovative home assistant platform designed to enhance accessibility through gesture recognition. It allows users to perform various tasks such as controlling lights, checking weather updates, and playing quizzes without the need for direct screen interaction.
Home page gestures:

- Thumbs Up Gesture: Select the page to control lights using gestures.
- Thumbs Down Gesture: Select the page to play the quiz game using gestures.
- Thumbs Up Gesture: Select the page to check the weather in your area.
- Rock and Roll Sign: Go to the selected page.

Lights page gestures:

- Thumbs Up Gesture: Turn the light `ON` (`OFF` by default).
- Thumbs Down Gesture: Turn the light `OFF`.
- Closed Fist Gesture: Go back to the home page.

Quiz page gestures:

- Thumbs Up Gesture: Select the `true` option.
- Thumbs Down Gesture: Select the `false` option.
- Open Palm Gesture: Go to the next question once you have answered the current one.
- Closed Fist Gesture: Go back to the home page.

Weather page gestures:

- Closed Fist Gesture: Go back to the home page.
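As a rough illustration, these per-page controls amount to a small gesture-to-action lookup. The sketch below (lights and quiz pages only) uses MediaPipe's canned-gesture label strings such as `Thumb_Up`, which is an assumption for illustration rather than the repo's exact constants.

```python
# Illustrative gesture-to-action lookup (not the repo's actual data structure).
# Keys follow MediaPipe's canned-gesture category names, e.g. "Thumb_Up".
GESTURE_ACTIONS = {
    "lights": {
        "Thumb_Up": "turn light ON",
        "Thumb_Down": "turn light OFF",
        "Closed_Fist": "go back to the home page",
    },
    "quiz": {
        "Thumb_Up": "select the true option",
        "Thumb_Down": "select the false option",
        "Open_Palm": "go to the next question",
        "Closed_Fist": "go back to the home page",
    },
}

# Example: look up what a Thumbs Up means on the lights page.
print(GESTURE_ACTIONS["lights"].get("Thumb_Up", "no action"))  # turn light ON
```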
Utilizing MediaPipe for gesture recognition, Signify employs a two-part model approach:
- Hand Landmark Model Bundle:
  - Detects hand presence and geometry.
  - Utilizes a combination of palm detection and hand landmarks detection models.
  - Trained on diverse datasets including real-world images and synthetic models.
- Gesture Classification Model Bundle:
  - Identifies specific gestures from hand geometry.
  - Supports common gestures like Closed Fist, Open Palm, Thumbs Up, etc.
- Frames are captured every 1.3 seconds and sent to the gesture recognition model.
- The first model component assesses hand presence, while the second classifies the gesture.
- A Flask backend processes recognized gestures.
- Includes functionalities like light control based on the gesture received.
- React-based frontend displays real-time gesture updates via WebSocket.
- Implements logic to respond to different gestures for controlling various functions.
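A rough sketch of this backend wiring is shown below. It assumes Flask-SocketIO for the WebSocket channel and invents a `/gesture` route and a `gesture_update` event purely for illustration; the actual implementation lives in the `api` directory of the project.

```python
# Rough sketch of a gesture-handling Flask backend (not the repo's actual code).
# flask_socketio, the /gesture route, and the event/payload names are assumptions.
from flask import Flask, jsonify, request
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

state = {"light": "OFF"}  # the light starts OFF by default


@app.route("/gesture", methods=["POST"])
def handle_gesture():
    gesture = (request.get_json() or {}).get("gesture")
    if gesture == "Thumb_Up":
        state["light"] = "ON"
    elif gesture == "Thumb_Down":
        state["light"] = "OFF"
    # Push the recognized gesture and the resulting state to the React
    # frontend over WebSocket so it can update in real time.
    socketio.emit("gesture_update", {"gesture": gesture, **state})
    return jsonify(state)


if __name__ == "__main__":
    socketio.run(app, port=5000)
```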
Pre-trained models offer efficient processing with average latencies of 16.76ms (CPU) and 20.87ms (GPU) on Pixel 6 devices.
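To make the recognition loop concrete, here is a minimal sketch using the MediaPipe Tasks Python API (mediapipe >= 0.10). The model path, camera index, and window handling are assumptions; the notebook in this repository is the authoritative implementation.

```python
# Minimal sketch of the capture-and-recognize loop described above.
# Assumes a locally downloaded `gesture_recognizer.task` bundle; the model
# path, camera index, and display handling are placeholders.
import time

import cv2
import mediapipe as mp
from mediapipe.tasks import python as mp_python
from mediapipe.tasks.python import vision

options = vision.GestureRecognizerOptions(
    base_options=mp_python.BaseOptions(model_asset_path="gesture_recognizer.task"),
    num_hands=1,
)
recognizer = vision.GestureRecognizer.create_from_options(options)

cap = cv2.VideoCapture(0)  # default webcam
last_capture = 0.0
try:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        now = time.time()
        if now - last_capture >= 1.3:  # send a frame roughly every 1.3 seconds
            last_capture = now
            # MediaPipe expects RGB images; OpenCV delivers BGR frames.
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            result = recognizer.recognize(
                mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb)
            )
            if result.gestures:  # the hand-landmark stage found a hand
                top = result.gestures[0][0]  # best label from the classifier stage
                print(top.category_name, round(top.score, 2))
        cv2.imshow("Signify", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit the camera
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```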
Run the operations below using your terminal. The directory should be the root directory of the Signify project.
Gesture recognition notebook:

- Download Anaconda.
- Create a conda environment with Python 3.9, since MediaPipe works well with this version.
  ```
  conda create -n signify_environment python=3.9
  ```
- Activate the conda environment.
  ```
  conda activate signify_environment
  ```
- Install the conda packages.
  ```
  conda install --file conda_requirements.txt
  ```
- Install the Python packages using pip.
  ```
  pip install -r pip_requirements.txt
  ```
- Create a new kernel for `signify_environment`.
  ```
  conda install ipykernel
  python -m ipykernel install --user --name=signify --display-name "signify_environment"
  ```
- Run JupyterLab.
  ```
  jupyter lab
  ```
- Once you open the notebook, make sure the kernel shown in the top right corner says `signify_environment`.
- Run all the cells using `shift+enter` until the OpenCV code starts running and you see the camera turn on.
- Press `q` after selecting the camera window if you want to stop code execution and quit the camera.
Flask backend:

- Navigate to the `api` directory in the `Signify` project directory.
- Once you are in the `api` directory, create a Python 3 virtual environment to separate the dependencies you install for this project from the rest of your system.
  ```
  python3 -m venv venv
  ```
- Activate the virtual environment.
  ```
  source venv/bin/activate
  ```
- Install all Python dependencies using pip.
  ```
  pip install -r requirements.txt
  ```
  If there are any errors in this step, install the packages manually by referencing the code.
- Start the backend Flask server.
  ```
  python run.py
  ```
React frontend:

- Navigate to the `frontend` directory within the `Signify` project directory.
- Install the packages using npm.
  ```
  npm i
  ```
- Run the React app.
  ```
  npm start
  ```

