| dc.contributor.author | Ibañez, Manolo Sancho A. | |
| dc.date.accessioned | 2025-08-15T01:25:00Z | |
| dc.date.available | 2025-08-15T01:25:00Z | |
| dc.date.issued | 2025-06 | |
| dc.identifier.uri | http://dspace.cas.upm.edu.ph:8080/xmlui/handle/123456789/3131 | |
| dc.description.abstract | Facial Expression Recognition (FER) is a vital component of human-computer interaction, enabling machines to interpret human emotions. Deep learning models, particularly Convolutional Neural Networks (CNNs), have demonstrated high accuracy in FER tasks; however, they are often computationally intensive and memory-demanding, which limits their deployment on low-end devices. This study explores the application of model compression techniques, namely quantization and network pruning, to deep CNN architectures including VGG16, ResNet50, and DenseNet121 using the FER-2013 dataset. The goal is to reduce model size and inference time while maintaining classification accuracy. Experimental results indicate that 8-bit quantization of VGG16 achieved the best trade-off, reducing model size more than fourfold with negligible impact on accuracy. Pruning showed limited effectiveness on the transfer learning models because it yielded minimal size reduction, but it proved useful on simpler architectures. These findings provide practical guidance for deploying efficient FER systems on edge devices. | en_US |
| dc.subject | Facial Expression Recognition | en_US |
| dc.subject | Deep Learning Models | en_US |
| dc.subject | Convolutional Neural Networks | en_US |
| dc.subject | Model Compression Techniques | en_US |
| dc.subject | Quantization | en_US |
| dc.subject | Pruning | en_US |
| dc.subject | Human-Computer Interaction | en_US |
| dc.subject | FER-2013 Dataset | en_US |
| dc.subject | VGG16 | en_US |
| dc.subject | ResNet50 | en_US |
| dc.subject | DenseNet121 | en_US |
| dc.title | Implementing Model Compression Techniques on Deep Learning Models for Facial Expression Recognition | en_US |
| dc.type | Thesis | en_US |
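The abstract reports that post-training 8-bit quantization of VGG16 gave roughly a fourfold size reduction with negligible accuracy loss. The record does not state which framework or conversion pipeline the thesis used, so the following is only a minimal illustrative sketch, assuming a TensorFlow/Keras VGG16 classifier and TensorFlow Lite post-training quantization; the model head, input size, and calibration data below are placeholders, not the author's actual setup.

```python
# Hypothetical sketch: post-training 8-bit quantization of a Keras VGG16-based
# FER classifier with TensorFlow Lite. Framework and details are assumptions;
# the thesis record does not specify the actual pipeline.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 7  # FER-2013 has seven emotion categories

# Stand-in for the trained model: a VGG16 backbone with a small softmax head.
base = tf.keras.applications.VGG16(
    include_top=False, weights=None, input_shape=(48, 48, 3), pooling="avg"
)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
model = tf.keras.Model(base.input, outputs)

# Representative dataset: a small sample of preprocessed inputs so the
# converter can calibrate activation ranges for 8-bit quantization.
def representative_data_gen():
    for _ in range(100):
        yield [np.random.rand(1, 48, 48, 3).astype(np.float32)]  # placeholder data

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
tflite_int8_model = converter.convert()

with open("vgg16_fer_int8.tflite", "wb") as f:
    f.write(tflite_int8_model)
```

Since 8-bit weights occupy a quarter of the space of 32-bit floats, a conversion along these lines is consistent with the roughly fourfold size reduction reported in the abstract; the exact ratio and accuracy impact depend on the model and calibration data.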