JOURNAL OF MULTIMEDIA (JMM)
ISSN: 1796-2048
Volume: 3    Issue: 2    Date: June 2008

Extraction of Subject-Specific Facial Expression Categories and Generation of Facial Expression
Feature Space using Self-Mapping
Masaki Ishii, Kazuhito Sato, Hirokazu Madokoro, and Makoto Nishida
Page(s): 60-67
Full Text: PDF (1,329 KB)


Abstract
This paper proposes a method for generating a subject-specific Facial Expression Map (FEMap)
that combines Self-Organizing Maps (SOM), an unsupervised learning technique, with Counter
Propagation Networks (CPN), a supervised learning technique. The proposed method consists of
two steps. In the first step, the topological changes of a face pattern during the process of
expressing an emotion are learned hierarchically using an SOM with a narrow mapping space, and
the number of subject-specific facial expression categories, together with a representative
image for each category, is extracted. A psychological label based on the neutral expression
and the six basic emotions (anger, sadness, disgust, happiness, surprise, and fear) is assigned
to each extracted category. In the second step, the extracted categories and their
representative images are learned using a CPN with a large mapping space, and a category map
that expresses the topological characteristics of facial expression is generated. This paper
defines this category map as the FEMap. Experimental results for six subjects show that the
proposed method can generate a subject-specific FEMap based on the topological characteristics
of the facial expressions appearing in face images.
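The two-step pipeline described above can be sketched roughly as follows. This is a minimal
NumPy illustration under stated assumptions, not the authors' implementation: the grid sizes
(a 1 x 7 SOM, a 10 x 10 CPN), the learning schedules, and the random "face pattern" vectors are
illustrative placeholders, and the category labels are plain indices rather than the emotion
labels assigned in the paper.

    # Minimal sketch: step 1 extracts categories with a narrow SOM, step 2 builds a
    # category map (FEMap) with a larger CPN. All sizes and data are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def train_map(data, targets, grid, iters, lr0, sigma0):
        """Shared SOM/CPN update: the Kohonen-layer weights self-organize on the
        inputs; if targets are given, a Grossberg layer learns them as well."""
        h, w = grid
        coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
        k_w = rng.random((h * w, data.shape[1]))                 # Kohonen weights
        g_w = None if targets is None else np.zeros((h * w, targets.shape[1]))
        for t in range(iters):
            idx = rng.integers(len(data))
            x = data[idx]
            lr = lr0 * np.exp(-t / iters)                        # decaying learning rate
            sigma = sigma0 * np.exp(-t / iters)                  # shrinking neighbourhood
            bmu = np.argmin(((k_w - x) ** 2).sum(axis=1))        # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)       # grid distance to BMU
            nb = np.exp(-d2 / (2 * sigma ** 2))                  # neighbourhood function
            k_w += lr * nb[:, None] * (x - k_w)
            if g_w is not None:
                g_w += lr * nb[:, None] * (targets[idx] - g_w)   # label (Grossberg) update
        return k_w, g_w

    # Toy "face pattern" vectors standing in for one subject's image sequence.
    faces = rng.random((200, 64))

    # Step 1: narrow SOM; each unit that wins samples becomes one candidate
    # category, and its weight vector stands in for the representative image.
    som_w, _ = train_map(faces, None, grid=(1, 7), iters=2000, lr0=0.5, sigma0=1.5)
    categories = np.argmin(((faces[:, None, :] - som_w) ** 2).sum(axis=2), axis=1)
    won = np.unique(categories)                 # extracted subject-specific categories
    reps, labels = som_w[won], np.arange(len(won))

    # Step 2: large CPN trained on the representatives and their category labels;
    # the arg-max of the Grossberg weights at each unit gives the category map (FEMap).
    one_hot = np.eye(labels.max() + 1)[labels]
    k_w, g_w = train_map(reps, one_hot, grid=(10, 10), iters=5000, lr0=0.3, sigma0=3.0)
    femap = g_w.argmax(axis=1).reshape(10, 10)
    print(femap)

In this sketch the CPN's Grossberg layer attaches a category label to every unit of the
Kohonen layer, so reading out the per-unit arg-max yields a map in which topologically similar
face patterns fall into neighbouring, identically labelled regions, which is the sense in which
the category map expresses the topological characteristics of facial expression.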

Index Terms
facial image processing, facial expression recognition, self-organizing maps, counter propagation
networks