Abstract
Most content-based image retrieval (CBIR) systems are restricted to representing signal aspects of images, such as color and texture, without explicitly considering their semantic content. In these approaches a sun, for example, is represented as an orange or yellow circle rather than by the term “sun”. Signal-oriented solutions are fully automatic and therefore easily applicable to large amounts of data, but they do not bridge the gap between the extracted low-level features and semantic descriptions. This penalizes qualitative and quantitative performance in terms of recall and precision, and therefore user satisfaction. Another class of methods, tested within the framework of the Fermi-GC project, models the content of images through a rigorous process of human-assisted indexing. This approach, based on an elaborate representation model (the conceptual graph formalism), provides satisfactory results during the retrieval phase, but is not easily applicable to large collections of images because of the human intervention required for indexing. The contribution of this paper is twofold. First, to make user interaction more efficient, we highlight a bond between these two classes of image retrieval systems and integrate signal and semantic features within a unified conceptual framework. Second, in contrast to state-of-the-art relevance feedback systems dealing with this integration, we propose a representation formalism that supports it and allows us to specify a rich query language combining both semantic and signal characterizations. We validate our approach through quantitative evaluations (recall-precision curves).
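Since the quantitative evaluation relies on recall-precision curves, the following minimal sketch shows how such a curve is typically computed from a ranked retrieval result using standard 11-point interpolation. It is illustrative only: the function names and toy data are assumptions, and the paper's exact evaluation protocol and test collection are not reproduced here.

```python
# Hypothetical sketch of an 11-point interpolated recall-precision curve
# for a ranked list of retrieved images. Illustrative names and data only.

def precision_recall_points(ranked_ids, relevant_ids):
    """Return (recall, precision) pairs measured after each retrieved item."""
    relevant = set(relevant_ids)
    hits = 0
    points = []
    for rank, image_id in enumerate(ranked_ids, start=1):
        if image_id in relevant:
            hits += 1
        points.append((hits / len(relevant), hits / rank))
    return points

def interpolated_curve(points, levels=11):
    """Interpolated precision at evenly spaced recall levels (0.0, 0.1, ..., 1.0)."""
    curve = []
    for i in range(levels):
        level = i / (levels - 1)
        precisions = [p for r, p in points if r >= level]
        curve.append((level, max(precisions) if precisions else 0.0))
    return curve

if __name__ == "__main__":
    # Toy ranked list of image identifiers and the set judged relevant.
    ranked = ["img3", "img7", "img1", "img9", "img4"]
    relevant = ["img3", "img1", "img4"]
    for recall, precision in interpolated_curve(precision_recall_points(ranked, relevant)):
        print(f"recall={recall:.1f}  precision={precision:.2f}")
```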
Copyright information
© 2004 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Belkhatir, M., Mulhem, P., Chiaramella, Y. (2004). Integrating Perceptual Signal Features within a Multi-facetted Conceptual Model for Automatic Image Retrieval. In: McDonald, S., Tait, J. (eds) Advances in Information Retrieval. ECIR 2004. Lecture Notes in Computer Science, vol 2997. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24752-4_20
DOI: https://doi.org/10.1007/978-3-540-24752-4_20
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-21382-6
Online ISBN: 978-3-540-24752-4