This chapter addresses facial expression quantification for detecting low, medium, and high levels of expression. It develops an automatic emotion classification technique for recognizing six facial emotions: anger, disgust, fear, happiness, sadness, and surprise. The authors evaluated two types of facial features for this purpose: facial deformation features and marker-based features. The results show that sectored volumetric difference function (SVDF/VDF) shape-transformation features quantify facial expressions better than marker-based features. Future work will focus on better methods for fusing audiovisual information so as to model the dynamics of facial expressions and speech; segment-level acoustic information can then be used to trace emotions at the frame level.
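The abstract does not spell out how the sectored volumetric difference features are computed, but the general idea of sector-wise deformation measurement can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the authors' method: it assumes corresponding 3D points on a neutral and an expressive face, partitions them into angular sectors around the face centroid, and aggregates the per-point displacement in each sector into one feature value. The function name, sector count, and aggregation choice are all assumptions for illustration.

```python
import numpy as np

def sectored_deformation_feature(neutral, expressive, n_sectors=8):
    """Hypothetical sketch of a sector-wise deformation feature.

    neutral, expressive: (N, 3) arrays of corresponding 3D face points.
    Returns one mean-displacement value per angular sector around the
    face centroid; the vector of sector values serves as an
    expression-quantification feature.
    """
    center = neutral.mean(axis=0)
    # Per-point displacement magnitude between neutral and expressive faces.
    disp = np.linalg.norm(expressive - neutral, axis=1)
    # Assign each point to an angular sector in the frontal (x, y) plane.
    rel = neutral[:, :2] - center[:2]
    angles = np.arctan2(rel[:, 1], rel[:, 0])          # range [-pi, pi)
    sectors = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    # Aggregate displacement per sector (mean deformation per sector).
    feature = np.zeros(n_sectors)
    for s in range(n_sectors):
        mask = sectors == s
        if mask.any():
            feature[s] = disp[mask].mean()
    return feature
```

A feature vector like this could then feed any standard classifier; larger sector values indicate stronger local deformation, which is the kind of signal an intensity level (low, medium, high) could be derived from.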