JOURNAL OF MULTIMEDIA (JMM)
ISSN : 1796-2048
Volume : 4    Issue : 5    Date : October 2009

Special Issue: Multimodal Multimedia Retrieval
Guest Editors: Mario Döller, Wo L. Chang, Jaime Delgado, and Lionel Brunie

Guest Editorial
Mario Döller, Wo L. Chang, Jaime Delgado, and Lionel Brunie
Page(s): 263-265


Abstract
Today, the popularity of multimedia demands efficient and intelligent strategies to cope with the large amounts of multimedia
data and the real-time constraints of applications. Recent efforts in the area of Multimedia Retrieval Systems (MMRS) have
led to a growing research community and a number of projects at the international, national, and industrial levels.

Beyond single-media retrieval systems (e.g., systems in which only images are considered), the latest
technologies target multimodal and/or semantically rich retrieval engines. This development is becoming the
mainstream trend as queries such as “Show me the movie and related material for the given score, available by melody and
text snippets” (perhaps by humming) or “Give me all media (text, image, video, audio) containing information about the city of
Paris” come into vogue. In order to support such challenging requests, research needs to work on a) new (ontology-based)
semantic models for combining individual media models, b) new retrieval engines able to cross media boundaries during
search, and c) new interfaces that can handle various media data as input and present complex multimedia information. For
instance, similarity metrics need to be developed or adapted to span media boundaries, with the aim of discovering
useful relationships among multimodal multimedia documents and of finding a better way through the vast amount of
media information.

For this purpose, theories and techniques concerning multimodal information retrieval systems need to be investigated and
evaluated, focusing on new approaches to indexing, representing, organizing, integrating, clustering, querying, and extracting
features from multimodal data.

This special issue aims to provide a deeper look at current research in the area of Multimodal Multimedia
Retrieval, including both theory- and application-oriented papers as well as new approaches to the extraction and use of
semantic concepts in order to narrow the semantic gap in multimedia retrieval.

Using the MPEG Query Format for Cross-Modal Identification
Matthias Gruhne, Peter Dunker, Ruben Tous
This article demonstrates the new multimedia query language (MPEG Query Format) in a distributed cross-modal retrieval
environment.

Bridging the Semantic Gap for Texture-based Image Retrieval and Navigation
Najlae Idrissi, José Martinez, Driss Aboutajdine
The authors propose a new approach for interpreting textures in natural terms in order to bridge the semantic
gap in image retrieval.

Semantic Restructuring of Natural Language Image Captions to Enhance Image Retrieval
Kraisak Kesorn, Stefan Poslad
Image captions provide useful information and hints for image retrieval. The article introduces a framework that combines
Natural Language Processing approaches with ontologies and latent semantic indexing (LSI) in order to extract concepts from image captions.

Semantic Concept Mining Based on Hierarchical Event Detection for Soccer Video Indexing
Maheshkumar H. Kolekar, Kannappan Palaniappan, Somnath Sengupta, Gunasekaran Seetharaman
The detection of semantic concepts in the sports video domain is a challenging task. This article introduces a novel
hierarchical framework that supports event sequence detection, semantic concept allocation (e.g., goal scored) and
summarization.

A Multimodal Data Mining Framework for Revealing Common Sources of Spam Images
Chengcui Zhang, Wei-Bang Chen, Xin Chen, Richa Tiwari, Lin Yang, Gary Warner
Spamming is an overwhelming problem in today’s communication flow. To address this, the proposed framework
provides means for detecting and clustering spam images in order to track spam gangs.

Multimodal Preference Aggregation for Multimedia Information Retrieval
Eric Bruno, Stephane Marchand-Maillet
The authors present a novel information representation for multimodal data, combined with a machine-learning-based
retrieval algorithm, and demonstrate its improved efficiency compared to the SVM algorithm.

The editors wish to thank all reviewers for their excellent work during the review process:

Beek, Peter van; Sharp Labs, USA
Boll, Susanne; University of Oldenburg, Germany  
Böszörmenyi, Laszlo; Klagenfurt University, Austria  
Carreras, Anna; DMAG-UPC/UPF, Spain
Choi, Miran; ETRI, Korea   
Cordara, Giovanni; Telecom Italia Lab, Italy    
Gandhi, Bhavan; Motorola Labs, USA    
Granitzer, Michael; Know-Center, Austria
Gruhne, Matthias; Fraunhofer (IDMT), Germany  
Linaza, María Teresa; VICOMTech, Spain  
Mass, Yosi; IBM, Israel
Melby, Alan K.; Brigham Young University, USA
Oria, Vincent; NJIT, USA    
Pereira, Fernando; IST, Portugal  
Kim, Sang Kyun; Samsung, South Korea
Park, SooJun; ETRI, South Korea
Tous, Ruben; DMAG-UPC/UPF, Spain
Tsinaraki, Chrisa; Technical University of Crete, Greece
Vetro, Anthony; Mitsubishi Electric Research Laboratories, USA  
Wolf, Ingo; T-Systems, Germany  
Yoon, Kyoungro; Konkuk University, Korea    
Zaharieva, Maia; TU Wien, Austria
Zhao, Jun; University of Oxford, UK    

Index Terms
Special Issue, Multimodal Multimedia Retrieval