JOURNAL OF MULTIMEDIA (JMM)
ISSN : 1796-2048
Volume : 1    Issue : 1    Date : April 2006

Automated Gesturing for Virtual Characters: Speech-driven and Text-driven Approaches
Goranka Zoric, Karlo Smid and Igor S. Pandzic
Page(s): 62-68


Abstract
We present two methods for automatic facial gesturing of graphically embodied animated agents. In the first, the conversational agent is driven by speech in an automatic Lip Sync process: the lip movements are determined by analyzing the input speech signal. The second method provides a virtual speaker capable of reading plain English text and rendering it as speech accompanied by appropriate facial gestures. The proposed statistical model for generating the virtual speaker's facial gestures can also be applied as an addition to the lip synchronization process in order to obtain speech-driven facial gesturing. In that case, the statistical model is triggered by the prosody of the input speech instead of by lexical analysis of the input text.

Index Terms
facial gesturing, lip sync, visual TTS, embodied conversational agent