Real-time and video-based emotion recognition using deep learning (VGG/LeNet/GoogLeNet). Supports live webcam streams and pre-recorded videos.
Fork this repository and enter the root folder (which should be named `Emotion_Detection`).
```shell
conda create -n <venv_name> python=3.9.22
conda activate <venv_name>
```
Core Requirements:
- OpenCV 4.5+
- TensorFlow 2.6+
- Other packages from requirements.txt
```shell
pip install -r requirements.txt
```
💡 Conda Tip: For GPU support, install TensorFlow with Conda:
```shell
conda install -c conda-forge tensorflow-gpu
```
Prerequisites:
- You are in the project root directory
- Environment is set up (`requirements.txt` installed)
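Before running the pipeline, you can sanity-check that the core packages are importable. A minimal sketch (the package list here is an assumption based on the requirements above, not an exhaustive copy of `requirements.txt`):

```python
import importlib.util

def missing_packages(names):
    """Return the subset of package names that are not importable."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# cv2 and tensorflow correspond to the OpenCV and TensorFlow requirements above.
missing = missing_packages(["cv2", "tensorflow", "numpy"])
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("Environment looks ready.")
```

If anything is reported missing, re-run `pip install -r requirements.txt` inside the activated environment.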
```shell
python emo_rec/build_dataset.py
```
📁 Output:
- The training, validation, and testing datasets will be stored under `datasets/fer2013/hdf5`
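To confirm the build succeeded, you can check that the split files exist under the dataset directory. The file names below are hypothetical (the actual names depend on what `build_dataset.py` writes); adjust them to match your output:

```python
from pathlib import Path

# Hypothetical split file names; adjust to what build_dataset.py actually writes.
EXPECTED = ["train.hdf5", "val.hdf5", "test.hdf5"]

def missing_splits(root="datasets/fer2013/hdf5"):
    """Return the expected split files that are absent under the dataset root."""
    base = Path(root)
    return [name for name in EXPECTED if not (base / name).is_file()]

print(missing_splits() or "All splits present.")
```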
```shell
python emo_rec/train_emotion_detector.py -m <model_name>
```

Available models:
- `emotionvggnet` (default)
- `lenet`
- `minigooglenet`
- `minivggnet`
- `shallownet`
To train with the default model:

```shell
python emo_rec/train_emotion_detector.py
```
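The `-m` flag presumably maps each model name to its builder via `argparse`. A hypothetical sketch of that dispatch (the parser details are assumptions, not the repository's actual code):

```python
import argparse

# Model names from the list above; emotionvggnet is the documented default.
MODELS = ["emotionvggnet", "lenet", "minigooglenet", "minivggnet", "shallownet"]

def parse_args(argv=None):
    """Parse the -m/--model flag, defaulting to emotionvggnet."""
    parser = argparse.ArgumentParser(description="Train an emotion detector")
    parser.add_argument("-m", "--model", choices=MODELS,
                        default="emotionvggnet",
                        help="network architecture to train")
    return parser.parse_args(argv)

args = parse_args(["-m", "lenet"])
print(args.model)  # lenet
```

Restricting the flag with `choices` makes the script fail fast on a typo instead of crashing later during model construction.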
📁 Outputs:
- Trained models: `emo_rec/built_models/`
- Training logs: `emo_rec/training_logs/`
```shell
python emo_rec/test_emotion_detector.py -m <model_name>
```
❗ Requirements:
- Model must be trained first (Step 1)
- The `-m` flag is mandatory
Example:

```shell
python emo_rec/test_emotion_detector.py -m minivggnet
```

Run on a pre-recorded video:

```shell
python emo_rec/run_emotion_detector.py -m <model_name> -v <video_path>
```

Example:

```shell
python emo_rec/run_emotion_detector.py -m emotionvggnet -v emo_rec/video/example.mp4
```

Run on a live webcam stream:

```shell
python emo_rec/run_emotion_detector.py -m <model_name>
```
🖥️ Controls:
- Select the camera window
- Press Cmd + Q (Mac) or Ctrl + Q (Windows) to quit
```shell
python emo_rec/run_emotion_detector.py -m lenet
```
🔴 Important:
- Model must be trained first (Step 1)
- `-m <model_name>` is always required
- Uses the default camera device
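The run script presumably opens the video file when `-v` is given and falls back to the default camera otherwise; in OpenCV, `cv2.VideoCapture` accepts either a file path or the device index `0`. A sketch of that selection logic (pure logic only, with the assumed `cv2` call shown as a comment so the snippet stays dependency-free):

```python
def capture_source(video_path=None):
    """Return the argument for cv2.VideoCapture:
    a file path if given, or 0 for the default camera device."""
    return video_path if video_path else 0

# Assumed usage inside the run script:
# cap = cv2.VideoCapture(capture_source(args.video))
print(capture_source("emo_rec/video/example.mp4"))  # emo_rec/video/example.mp4
print(capture_source())  # 0
```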