Multilabel Facial Expression & Intensity Detection
Project Overview
This project recognizes facial expressions and estimates their intensity levels from facial images. The system can detect and classify multiple expressions (such as happy, sad, angry, and surprised) simultaneously, using a deep learning approach built on ML-CNN and ResNet50 models. The ML-CNN model achieved about 95% accuracy, while ResNet50 reached only about 67%. We used the fer2013new dataset; the file 'fer2013new.csv' is provided in the GitHub repository and contains about 10 emotion types along with intensity labels from 1 to 10.
Team Members
- Mehreen Tabassum Maliha
- Nahida Marzan
Features
Multilabel Facial Expression Recognition
- Emotion Recognition: Recognize and classify multiple facial expressions such as happiness, sadness, anger, surprise, disgust, and fear simultaneously.
- Multilabel Classification: Classify facial expressions into multiple labels at once, allowing the system to recognize complex emotions (a minimal model sketch follows this list).
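
The multilabel head can be implemented with per-class sigmoid outputs and a binary cross-entropy loss, so several emotions can be active for the same face. The sketch below is a minimal example and assumes TensorFlow/Keras, 48x48 grayscale inputs, and 10 emotion labels; none of these details are confirmed beyond what this README states.

```python
from tensorflow.keras import layers, models

NUM_EMOTIONS = 10  # assumed from the ~10 emotion types in fer2013new.csv

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),           # 48x48 grayscale faces (assumed)
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    # Sigmoid (not softmax): each emotion is an independent yes/no decision,
    # so multiple labels can be predicted at once.
    layers.Dense(NUM_EMOTIONS, activation="sigmoid"),
])

# Binary cross-entropy treats every label independently, which is what
# multilabel classification requires.
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["binary_accuracy"])
```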
Intensity Estimation
- Emotion Intensity: Estimate the intensity of facial expressions on a scale from 1 to 10, providing a more detailed understanding of the emotions (see the scaling sketch below).
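
One simple way to train on a 1-10 intensity scale is to rescale labels to [0, 1] for a sigmoid or regression output and map predictions back afterwards. The helpers below are hypothetical (this README does not show the actual intensity head) and only illustrate the scaling arithmetic.

```python
import numpy as np

def intensity_to_target(intensity):
    """Map an intensity label on the 1-10 scale to a training target in [0, 1]."""
    return (np.asarray(intensity, dtype="float32") - 1.0) / 9.0

def target_to_intensity(target):
    """Map a model output in [0, 1] back to the 1-10 intensity scale."""
    return 1.0 + 9.0 * np.clip(target, 0.0, 1.0)

print(intensity_to_target(10))   # -> 1.0
print(target_to_intensity(0.5))  # -> 5.5 (mid-scale intensity)
```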
Deep Learning Models
- ML-CNN Model: A custom multilabel CNN (ML-CNN) achieved about 95% accuracy in recognizing emotions and estimating their intensity.
- ResNet50 Model: The ResNet50 model reached only around 67% accuracy, indicating that further fine-tuning or additional data may be needed to improve its performance (a transfer-learning sketch follows this list).
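
A common way to use ResNet50 for this task is transfer learning: start from ImageNet weights, freeze the backbone, train a small multilabel head, and then optionally unfreeze the backbone for fine-tuning, which the note above suggests may help. The sketch below assumes TensorFlow/Keras and 224x224 RGB inputs; it is not the exact training setup used in this project.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

NUM_EMOTIONS = 10  # assumed label count

# ImageNet-pretrained backbone, frozen for the first training phase.
base = ResNet50(include_top=False, weights="imagenet",
                input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_EMOTIONS, activation="sigmoid"),  # multilabel head
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["binary_accuracy"])

# For fine-tuning, unfreeze the backbone and recompile with a lower learning rate:
# base.trainable = True
# model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
#               loss="binary_crossentropy", metrics=["binary_accuracy"])
```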
Dataset
- fer2013new Dataset: The dataset used, 'fer2013new.csv', includes approximately 10 emotion types along with intensity labels ranging from 1 to 10.
- Data Preprocessing: The images in the dataset are preprocessed to make them suitable for model training, including normalization and resizing (see the sketch below).
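
A minimal preprocessing sketch is shown below. It assumes the CSV stores each 48x48 grayscale face as a space-separated pixel string in a 'pixels' column; the column name and format are assumptions, not confirmed by this README.

```python
import numpy as np
import pandas as pd
import tensorflow as tf

df = pd.read_csv("fer2013new.csv")

def row_to_image(pixel_string, target_size=(48, 48)):
    """Parse one pixel string, normalize to [0, 1], and resize to target_size."""
    img = np.array(pixel_string.split(), dtype="float32").reshape(48, 48, 1)
    img /= 255.0                              # normalization
    img = tf.image.resize(img, target_size)   # resizing (e.g. 224x224 for ResNet50)
    return img.numpy()

# Stack all faces into a single array of shape (num_samples, H, W, 1).
images = np.stack([row_to_image(p) for p in df["pixels"]])
```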
Technologies Used
- Python