Automatic Multi-modal Emotion Recognition (ER) for AI Interaction

Li Shiyi 3035844405

Supervisor: Wu Chuan

This project aims to build a lightweight, end-to-end multimodal model for emotion recognition using only audio and visual input, and to evaluate how much performance it can achieve under real-world constraints.

Project Plan

Interim Report

Final Report

One-Minute Introduction Video