A User-Oriented Image Retrieval System Based On Interactive Genetic Algorithm
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 60, NO. 10, OCTOBER 2011
I. INTRODUCTION

IN RECENT years, rapid advances in science and technology have produced a large amount of image data in diverse areas such as entertainment, art galleries, fashion design, education, medicine, and industry. We often need to store and retrieve image data efficiently to perform assigned tasks and to make decisions; developing proper tools for retrieving images from large image collections is therefore challenging. Two different types of approaches, text-based and content-based, are usually adopted in image retrieval. In a text-based system, the images are manually annotated with text descriptors, which a database management system then uses to perform image retrieval. However, using keywords for image retrieval has two limitations: manual image annotation requires a vast amount of labor, and describing image content is a highly subjective task. That is, the perspective of the textual descriptions given by an annotator could be different
Manuscript received October 27, 2010; revised January 17, 2011; accepted February 8, 2011. Date of publication April 21, 2011; date of current version September 14, 2011. This paper was supported by the National Science Council, Taiwan, under Grants NSC 98-2221-E-390-027 and NSC 99-2221-E-390-035. The Associate Editor coordinating the review process for this paper was Dr. Emil Petriu. The authors are with the Department of Electrical Engineering, National University of Kaohsiung, Kaohsiung 81148, Taiwan (e-mail: cclai@nuk.edu.tw). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TIM.2011.2135010
from the perspective of a user. In other words, there are inconsistencies between user textual queries and image annotations or descriptions. To alleviate this inconsistency, image retrieval can be carried out according to the image contents; such a strategy is the so-called content-based image retrieval (CBIR). The primary goal of a CBIR system is to construct meaningful descriptions of physical attributes from images to facilitate efficient and effective retrieval [1], [2]. CBIR has become an active and fast-advancing research area in image retrieval over the last decade.

By and large, research activities in CBIR have progressed in four major directions: global image properties, region-level features, relevance feedback, and semantics. Early algorithms exploit the low-level features of the image, such as the color, texture, and shape of an object, to help retrieve images. They are easy to implement and perform well for images that are either simple or contain few semantic contents. However, the semantics of an image are difficult to reveal through visual features alone, and these algorithms have many limitations when dealing with broad-content image databases. Therefore, in order to improve the retrieval accuracy of CBIR systems, region-based image retrieval methods via image segmentation were introduced. These methods attempt to overcome the drawbacks of global features by representing images at the object level, which is intended to be close to the perception of the human visual system. However, the performance of these methods relies mainly on the quality of the segmentation.

The difference between the user's information need and the image representation is called the semantic gap in CBIR systems. The limited retrieval accuracy of image-centric retrieval systems is essentially due to this inherent semantic gap. In order to reduce the gap, interactive relevance feedback was introduced into CBIR.
The basic idea behind relevance feedback is to incorporate human perceptual subjectivity into the query process and to give users the opportunity to evaluate the retrieval results; the similarity measures are automatically refined on the basis of these evaluations. Although relevance feedback can significantly improve retrieval performance, its applicability still suffers from a few drawbacks [3]. Semantic-based image retrieval methods try to discover the real semantic meaning of an image and use it to retrieve relevant images; however, understanding and discovering the semantics of a piece of information are high-level cognitive tasks and thus hard to automate. A wide variety of CBIR algorithms has been proposed, but most of them focus on the similarity computation phase to efficiently find a specific image or a group of images similar to the given query. In order to achieve a better
LAI AND CHEN: USER-ORIENTED IMAGE RETRIEVAL SYSTEM BASED ON INTERACTIVE GENETIC ALGORITHM
3319
approximation of the user's information need for subsequent searches in the image database, user interaction is necessary in a CBIR system. In this paper, we propose a user-oriented CBIR system that uses an interactive genetic algorithm (IGA) [5] to infer which images in the database would be of most interest to the user. Three visual features of an image, namely, color, texture, and edge, are utilized in our approach. The IGA provides an interactive mechanism to better capture the user's intention. Very few CBIR systems consider human knowledge; [6] is the representative one. Its authors considered the red, green, and blue (RGB) color model and wavelet coefficients to extract image features, and their query procedure is based on association (e.g., the user browses an image collection to choose the most suitable images). The main differences of this paper from that work are as follows: 1) low-level image features: color features from the hue, saturation, value (HSV) color space, as well as texture and edge descriptors, are adopted in our approach; and 2) search technique: our system adopts the query-by-example strategy (i.e., the user provides an image query). In addition, we hybridize the user's subjective evaluation with the intrinsic characteristics of the images in image matching, rather than considering human judgment alone [6].

The remainder of this paper is organized as follows. Related works on CBIR are briefly reviewed in Section II. Section III describes the considered image features and the concept of the IGA. The proposed approach is presented in Section IV. Section V gives the experimental results and provides comparative performance. Finally, Section VI presents the conclusions of this paper.

II. RELATED WORKS

Several works survey the most important CBIR systems [7], [8], and other papers review and compare the current techniques in this area [9], [10].
Since the early studies on CBIR, various color descriptors have been adopted. Yoo et al. [11] proposed a signature-based color-spatial image retrieval system in which color and its spatial distribution within the image are used as features. In [12], a CBIR scheme based on the global and local color distributions in an image is presented. Vadivel et al. [13] introduced an integrated approach for capturing the spatial variation of both color and intensity levels and showed its usefulness in image retrieval applications.

Texture is also an essential visual feature for defining high-level semantics for image retrieval purposes. In [14], a novel, effective, and efficient characterization of wavelet subbands by bit-plane extraction for texture image retrieval was presented. In order to overcome limitations of some texture-based image retrieval methods, such as computationally expensive approaches or poor retrieval accuracy, Kokare et al. [15] concentrated on the problem of finding good texture features for CBIR; they designed 2-D rotated complex wavelet filters to efficiently handle texture images and formulated a new texture-retrieval algorithm using the proposed filters. Pi and Li [16] combined fractal parameters and collage error to propose a set of new statistical fractal signatures. These signatures effectively extract
the statistical properties intrinsic to texture images to enhance the retrieval rate. Liapis and Tziritas [17] explored image retrieval mechanisms based on a combination of texture and color features: texture features are extracted using discrete wavelet frame analysis, and two- or one-dimensional histograms of the CIE Lab chromaticity coordinates are used as color features. Chun et al. [18] proposed a CBIR method based on an efficient combination of multiresolution color and texture features. As color features, color autocorrelograms of the hue and saturation component images in HSV color space are used; as texture features, the block difference of inverse probabilities and the block variation of local correlation coefficient moments of the value component image are adopted. The color and texture features are extracted in the multiresolution wavelet domain and then combined.

In order to model the high-level concepts in an image and the user's subjectivity well, recent approaches introduce human-computer interaction into CBIR. Takagi et al. [4] evaluated the performance of an IGA-based image retrieval system that uses wavelet coefficients to represent the physical features of images. Cho [19] applied IGA to the problems of fashion design and emotion-based image retrieval; he used the wavelet transform to extract image features and IGA to search for the image that the user has in mind. When the user assigns appropriate fitness to what he or she wants, the system provides images selected on the basis of the user's evaluation. In [20], a new IGA framework incorporating relevance feedback for image retrieval was proposed. Arevalillo-Herráez et al. [21] introduced a new hybrid approach to relevance-feedback CBIR; their technique combines an IGA with an extended nearest-neighbor approach to reduce the gap between the high-level semantic contents of images and the information provided by their low-level descriptors.
When the target is unknown, face image retrieval poses particular challenges. Shi et al. [22] proposed an IGA-based approach that incorporates an adjustment function and a support vector machine; their method prevents the optimal solution from being lost, accelerates the convergence of the IGA, and raises retrieval performance.

III. IMAGE FEATURES AND IGA

One of the key issues in querying image databases by similarity is the choice of appropriate image descriptors and corresponding similarity measures. In this section, we first briefly review the low-level visual features considered in our approach and then review the basic concept of the IGA.

A. Color Descriptor

A color image can be represented using the three primaries of a color space. Since the RGB space does not correspond to the human way of perceiving colors and does not separate the luminance component from the chrominance ones, we use the HSV color space in our approach. HSV is an intuitive color space in the sense that each component contributes directly to visual perception, and it is common in image retrieval systems [11], [23]. Hue is used to distinguish colors, whereas saturation
gives a measure of the percentage of white light added to a pure color. Value refers to the perceived light intensity. The important advantages of the HSV color space are its good compatibility with human intuition, the separability of chromatic and achromatic components, and the possibility of preferring one component to another [24].

The color distribution of pixels in an image contains sufficient information. The mean of the pixel colors states the principal color of the image, and the standard deviation of the pixel colors depicts their variation; the degree of variation of pixel colors in an image is called the color complexity of the image. We can use these two features to represent the global properties of an image. The mean (μ) and the standard deviation (σ) of a color image with N pixels are defined as follows:

μ = (1/N) Σ_{i=1}^{N} P_i    (1)

σ = [ (1/(N−1)) Σ_{i=1}^{N} (P_i − μ)² ]^{1/2}    (2)

where μ = [μ_H, μ_S, μ_V]^T and σ = [σ_H, σ_S, σ_V]^T, each component of μ and σ indicates the corresponding HSV information, and P_i denotes the ith pixel of the image.

In addition to the global properties of an image, the local color properties also play an important role in improving retrieval performance. Hence, a feature called the binary bitmap can be used to capture the local color information of an image. The basic concept of the binary bitmap comes from block truncation coding [25], which is a relatively simple image coding technique that has been successfully employed in many image processing applications. Three steps generate the image binary bitmap. First, the image is divided into several nonoverlapping blocks. Let B_j = {b_1, b_2, ..., b_k} be the jth block of the image, where 1 ≤ j ≤ m, k represents the total number of pixels in a block, and m is the total number of blocks in the image. Second, the mean value of each block is computed. Let μ_{B_j} be the mean value of block B_j, defined as

μ_{B_j} = (1/k) Σ_{i=1}^{k} b_i    (3)

where μ_{B_j} = [H_{B_j}, S_{B_j}, V_{B_j}]^T. In the final step, μ_{B_j} is compared with the image mean value (μ) to determine the characteristic of block B_j and to generate the image binary bitmap. Suppose that I = [IH, IS, IV] is the binary bitmap of the given image, where IH = [IH_1, IH_2, ..., IH_m], IS = [IS_1, IS_2, ..., IS_m], and IV = [IV_1, IV_2, ..., IV_m]. The entries are

IH_j = 1 if H_{B_j} ≥ μ_H, and 0 otherwise    (4)
IS_j = 1 if S_{B_j} ≥ μ_S, and 0 otherwise    (5)
IV_j = 1 if V_{B_j} ≥ μ_V, and 0 otherwise.   (6)

B. Texture Descriptor

Texture is an important attribute that refers to the innate surface properties of an object and their relationship to the surrounding environment. If we choose appropriate texture descriptors, the performance of the CBIR system should improve. We use the gray-level co-occurrence matrix (GLCM), which is a simple and effective method for representing texture [26]. The GLCM represents the probability p(i, j; d, θ) that two pixels of the image, located at distance d and angle θ from each other, have gray levels i and j. The GLCM is defined as

p(i, j; d, θ) = #{((x₁, y₁), (x₂, y₂)) | g(x₁, y₁) = i, g(x₂, y₂) = j, |(x₁, y₁) − (x₂, y₂)| = d, ∠((x₁, y₁), (x₂, y₂)) = θ}    (7)

where # denotes the number of occurrences inside the window, with i and j being the intensity levels of the first and second pixels at positions (x₁, y₁) and (x₂, y₂), respectively. In order to simplify and reduce the computational effort, we computed the GLCM along one direction (θ = 0°) with distance d = 1 and calculated the entropy, which is the measure used most frequently in the literature. The entropy E captures the textural information of an image and is defined as

E = −Σ_{i,j} C_{i,j} log C_{i,j}    (8)

where C_{i,j} is the normalized GLCM. Entropy gives a measure of the complexity of the image; complex textures tend to have high entropy.

C. Edge Descriptor

Edges in images constitute an important feature for representing image content, and human eyes are sensitive to edge features in image perception. One way of representing this edge feature is a histogram: an edge histogram in the image space represents the frequency and the directionality of the brightness changes in the image. We adopt the edge histogram descriptor (EHD) [27], which describes the edge distribution with a histogram based on the local edge distribution in an image. The extraction process of the EHD consists of the following stages.
1) The image is divided into 4 × 4 subimages.
2) Each subimage is further partitioned into nonoverlapping image blocks of a small size.
3) The edges in each image block are categorized into five types: vertical, horizontal, 45° diagonal, 135° diagonal, and nondirectional edges.
4) The histogram for each subimage then represents the relative frequency of occurrence of the five edge types in that subimage.
5) After all image blocks in a subimage are examined, the five bin values are normalized by the total number of blocks in the subimage. Finally, the normalized bin values are quantized for the binary representation. These normalized and quantized bins constitute the EHD.
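As an illustration, the color and texture descriptors above can be sketched in NumPy as follows. This is a minimal sketch: the function names, the block size, and the gray-level quantization are illustrative choices not fixed by the paper, and the EHD stage is omitted for brevity.

```python
import numpy as np

def color_stats(hsv):
    """Per-channel mean and sample standard deviation over all pixels,
    as in (1) and (2); hsv is an (H, W, 3) float array."""
    pixels = hsv.reshape(-1, 3)
    mu = pixels.mean(axis=0)
    sigma = pixels.std(axis=0, ddof=1)   # 1/(N-1) normalization as in (2)
    return mu, sigma

def binary_bitmap(hsv, block=4):
    """Binary bitmap of (4)-(6): each nonoverlapping block contributes a
    1 per channel if its block mean is >= the global channel mean."""
    h, w, _ = hsv.shape
    h, w = h - h % block, w - w % block          # crop to a multiple of block size
    mu = hsv[:h, :w].reshape(-1, 3).mean(axis=0)
    blocks = hsv[:h, :w].reshape(h // block, block, w // block, block, 3)
    block_means = blocks.mean(axis=(1, 3))       # shape (h/block, w/block, 3)
    return (block_means >= mu).astype(np.uint8)  # one bit per block per channel

def glcm_entropy(gray, levels=8):
    """Entropy of the GLCM for direction 0 degrees and distance 1, as in
    (7)-(8); gray is a 2-D integer array with values in [0, levels)."""
    glcm = np.zeros((levels, levels), dtype=np.float64)
    # count horizontally adjacent pixel pairs (d = 1, theta = 0)
    np.add.at(glcm, (gray[:, :-1].ravel(), gray[:, 1:].ravel()), 1)
    glcm /= glcm.sum()                           # normalize to probabilities
    p = glcm[glcm > 0]
    return float(-(p * np.log2(p)).sum())
```

An analogous function binning edge directions per subimage would yield the EHD.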
D. IGA

Genetic algorithms (GAs) [28], within the field of evolutionary computation, are robust, computational, and stochastic search procedures modeled on the mechanics of natural genetic systems. GAs are well known for efficiently exploring unexplored regions of the search space while exploiting the knowledge gained in the vicinity of known high-quality solutions. In general, a GA maintains a fixed-size population of potential solutions over the search space, encoded as binary or floating-point strings called chromosomes. The initial population can be created randomly or based on problem-specific knowledge. In each iteration, called a generation, a new population is created from the preceding one through the following three steps: 1) evaluation: each chromosome of the old population is evaluated using a fitness function and given a value denoting its merit; 2) selection: chromosomes with better fitness are selected to generate the next population; and 3) mating: genetic operators such as crossover and mutation are applied to the selected chromosomes to produce new ones for the next generation. These three steps are iterated for many generations until a satisfactory solution is found or a termination criterion is met. GAs have the following advantages over traditional search methods: 1) they work directly with a coding of the parameter set; 2) the search is carried out from a population of potential solutions; 3) payoff information is used instead of derivatives or auxiliary knowledge; and 4) probabilistic transition rules are used instead of deterministic ones. Recently, as the computational abilities of computers have been enormously enhanced, GAs have been widely applied in many areas of engineering, such as signal processing, system identification, and information mining [29]-[32].
In [33], GAs are applied to exercise difficulty-level adaptation in schools and universities with very satisfactory results. Beligiannis et al. [34] applied GAs to the problem of intelligent medical diagnosis of male incompetence. Wu et al. [35] proposed a genetic-based solution for a coordinate transformation test of Global Positioning System positioning. Pan [36] designed robust D-stable IIR filters by using GAs with an embedded stability criterion. GAs have also been successfully applied in CBIR research [37]-[39]; for detailed descriptions of these approaches, interested readers may refer to them directly.

The IGA is a branch of evolutionary computation. The main difference between the IGA and the GA is the construction of the fitness function: the fitness is determined by the user's evaluation rather than by a predefined mathematical formula. A user can interactively determine which members of the population will reproduce, and the IGA automatically generates the next generation of content based on the user's input. Through repeated rounds of content generation and fitness assignment, the IGA enables content to evolve that suits the user's preferences. For this reason, the IGA can be used to solve problems for which a computational fitness function is difficult or impossible to formulate, for example, evolving images, music, various artistic designs, and forms to fit a user's aesthetic preferences.
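A single IGA generation as just described can be sketched as follows, with the fitness supplied by a user-rating callback rather than a formula. The bit-string encoding, tournament size, and function names are illustrative assumptions; the use of tournament selection and one-point crossover without mutation anticipates the operator choices made later in this paper.

```python
import random

def iga_generation(population, user_rate, k=2):
    """One IGA generation: fitness comes from a user-supplied rating
    function (e.g., a slider value) rather than a predefined formula.
    Chromosomes are lists; selection is k-tournament and mating is
    one-point crossover, with no mutation."""
    scores = {i: user_rate(c) for i, c in enumerate(population)}

    def tournament():
        # best of k randomly sampled individuals -- no global sort needed
        best = max(random.sample(range(len(population)), k), key=scores.get)
        return population[best]

    next_gen = []
    while len(next_gen) < len(population):
        p1, p2 = tournament(), tournament()
        cut = random.randint(1, len(p1) - 1)     # one-point crossover
        next_gen.append(p1[:cut] + p2[cut:])
        next_gen.append(p2[:cut] + p1[cut:])
    return next_gen[:len(population)]
```

In an interactive system, `user_rate` would return the rating the user assigned to the image encoded by the chromosome.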
IV. PROPOSED SYSTEM

In general, an image retrieval system provides a user interface for communicating with the user: it collects the required information, including the query image, from the user and displays the retrieval results. However, because the images are matched on low-level visual features, the target or similar images may lie far from the query in the feature space and thus not appear among the limited number of retrieved images in the first display. Therefore, some retrieval systems include relevance feedback from the user, through which human and computer interact to increase retrieval performance. Following this concept, we design a user-oriented image retrieval system based on the IGA, as shown in Fig. 1. Our system operates in four phases.
1) Querying: The user provides a sample image as the query for the system.
2) Similarity computation: The system computes the similarity between the query image and the database images according to the aforementioned low-level visual features.
3) Retrieval: The system retrieves and presents a sequence of images ranked in decreasing order of similarity, so that the user finds relevant images among the top-ranked images first.
4) Incremental search: After some relevant images are obtained, the system provides an interactive mechanism via the IGA that lets the user evaluate the retrieved images as more or less relevant to the query; the system then updates the relevance information so as to include as many user-desired images as possible in the next retrieval result. The search process is repeated until the user is satisfied with the result or the results cannot be further improved.
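The similarity computation and the fusion with the user's rating used in phases 2) and 4) can be sketched as follows; this follows the weighted combination of (9), but the feature-dictionary layout and the distance-to-similarity mapping are illustrative assumptions, not the paper's exact formulation.

```python
def similarity(q, c):
    """Feature-level similarity between two images: absolute differences
    of the HSV means and standard deviations, the bitmap Hamming
    distance, and the entropy difference are accumulated as a distance,
    then mapped so that larger values mean more similar images."""
    d = sum(abs(a - b) for a, b in zip(q["mean"], c["mean"]))
    d += sum(abs(a - b) for a, b in zip(q["std"], c["std"]))
    d += sum(a != b for a, b in zip(q["bitmap"], c["bitmap"]))  # Hamming
    d += abs(q["entropy"] - c["entropy"])
    return 1.0 / (1.0 + d)   # map distance into (0, 1]

def fitness(query_feats, cand_feats, user_score, w1=0.5, w2=0.5):
    """Fitness of a candidate image as in (9): a weighted combination of
    the feature-based similarity and the user's rating (the impact
    factor, a slider value in [0, 1])."""
    return w1 * similarity(query_feats, cand_feats) + w2 * user_score
```

With w1 = w2 = 0.5, an image with a perfect feature match and a maximal user rating attains fitness 1.0.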
When we apply the IGA to develop a content-based color image retrieval system, we must consider the following components: 1) a genetic representation of solutions to the problem; 2) a way to create the initial population of solutions; 3) an evaluation function that rates all candidate solutions according to their fitness; and 4) genetic operators that alter the genetic composition of children during reproduction.
Solution representation: In order to apply a GA to a given problem, one has to decide on an appropriate
genotype that the problem needs, i.e., the chromosome representation. In the proposed approach, a chromosome represents the three considered types of image features (i.e., color, texture, and edge) of an image.

Initial population: The IGA requires a population of potential solutions to be initialized at the beginning of the GA process. The initialization process usually varies with the application; here, we adopt the first query results of a sample image as the initial candidate images.

Fitness function: The fitness function is employed to evaluate the quality of the chromosomes in the population. The use of the IGA allows the fusion of human and computer efforts for problem solving [5]. Since the objective of our system is to retrieve the images that best satisfy the user's need, the evaluation should simultaneously incorporate the user's subjective evaluation and the intrinsic characteristics of the images. Hence, in our approach, the quality of a chromosome C with relation to the query q is defined as

F(q, C) = w₁ · sim(q, C) + w₂ · δ    (9)

where sim(q, C) represents the similarity measure between the images, δ indicates the impact factor of the human's judgment, and the coefficients w₁ and w₂, with Σ_i w_i = 1, determine their relative importance in calculating the fitness; in this paper, they are both set to 0.5. The similarity measure between images is defined as

sim(q, C) = Σ_{t∈{H,S,V}} |μ_t^q − μ_t^C| + Σ_{t∈{H,S,V}} |σ_t^q − σ_t^C| + H(BM^q, BM^C) + |E^q − E^C| + D(EHD^q, EHD^C)    (10)

where μ_t^I and σ_t^I represent the normalized mean value and standard deviation of image I in color channel t, respectively; BM^I denotes the image bitmap feature of image I; E^I and EHD^I represent the entropy and the EHD of image I, respectively; and D(·, ·) is the distance between two EHDs. For two images, the Hamming distance used to evaluate the image bitmap similarity is defined by

H(BM^q, BM^C) = Σ_{j=1}^{m} (IH_j^q ⊕ IH_j^C) + Σ_{j=1}^{m} (IS_j^q ⊕ IS_j^C) + Σ_{j=1}^{m} (IV_j^q ⊕ IV_j^C)    (11)

where ⊕ denotes the exclusive-or operation. The user's preference enters the fitness through the user's evaluation: the impact factor δ indicates the human's judgment or preference and takes values from 0.0 to 1.0 in steps of 0.1.

Genetic operators: The selection operator determines which chromosomes are chosen for mating and how many offspring each selected chromosome produces. Here, we adopt the tournament selection method [40] because of its low time complexity: it does not require a global fitness comparison of all individuals in the population and can therefore accelerate the evolution process. The crossover operator randomly pairs chromosomes and swaps parts of their genetic information to produce new chromosomes. We use one-point crossover [40] in the proposed approach: parts of two chromosomes selected on the basis of fitness are swapped to generate trait-preserving offspring. The mutation operator creates a new chromosome in order to increase the variability of the population; however, in order to speed up the evaluation process, we do not consider the mutation operator.

V. EXPERIMENTAL RESULTS

To show the effectiveness of the proposed system, several experiments are reported. Selecting a suitable image database is a critical and important step in designing an image retrieval system; at present, there is no standard image database for this purpose, nor is there agreement on the type and the number of images in the database. Since most image retrieval systems are intended for general databases, it is reasonable to include various semantic groups of images in the database. In our experiments, we used the database of the SIMPLIcity project [41], which covers a wide range of semantic categories from natural scenes to artificial objects. The database is partitioned into ten categories, namely, African people and villages, beaches, buildings, buses, dinosaurs, elephants, flowers, horses, mountains and glaciers, and food, and each category contains 100 images (Fig. 2). The partitioning of the database into semantic categories was determined by its creators and reflects the human perception of image similarity.

A. Practicability of System Demonstration
At first, we give an example to illustrate the practicability of the proposed system. A user submits an image containing a bus as the query image, and the similarity measurement module of the system then compares the query features with those of the images in the database and finds the images most similar to the query. These images are ranked by similarity. Under each image, a slide bar is attached so that the user can tell the system which images are relevant or irrelevant; the amount of slider movement represents the degree of relevance. After the user evaluates these images, the
Fig. 3. Retrieval process. (a) Query image. (b) Retrieved results obtained by the visual-descriptor similarity criterion. Retrieved results after (c) the first, (d) the second, (e) the third, and (f) the fourth generation of IGA.
Fig. 5. Retrieval average recall of the proposed approach.
system adjusts the similarity measure according to the user's point of view and provides refined search results. The user can repeat this process until he or she is satisfied with the retrieval results. Fig. 3 shows the first display of returned images and the retrieved results after applying the IGA process. The results are ranked by similarity to the query image, from left to right and then from top to bottom. From the results, we find that when retrieval considers only the low-level features, some irrelevant images are retrieved. By incorporating the user's subjective expectation, the retrieval results improve effectively within very few generations.

B. Retrieval Precision/Recall Evaluation

To evaluate the effectiveness of the proposed approach, we examined how many images relevant to the query were retrieved. The retrieval effectiveness can be defined in terms of precision and recall rates. Experiments were run five times, and average results are reported. In every experiment, the retrieval precision is evaluated using ten images randomly selected from each specific category of the
database are used as query images. For each query image, the relevant images are considered to be those, and only those, that belong to the same category as the query image. Based on this concept, the retrieval precision and recall are defined as

precision = N_A(q) / N_R(q)    (12)

recall = N_A(q) / N_t    (13)

where N_A(q) denotes the number of retrieved images relevant to the query, N_R(q) indicates the number of images retrieved by the system in response to the query, and N_t represents the total number of relevant images available in the database. Here, we used the top 20 retrieved images to compute the precision and recall. Once the precision and recall for the ten query images are obtained, we discard the best value and the worst one and then average the remaining values to obtain the average precision and average recall, shown in Figs. 4 and 5, respectively. In Fig. 4, we observe that, with the proposed approach, the average precision for the randomly selected images in each specific category tends toward higher values and reaches 100% within a few generations of IGA. The same phenomenon appears in Fig. 5.
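The evaluation of (12) and (13) for one query can be sketched as follows; the function and argument names are illustrative.

```python
def precision_recall(retrieved, relevant, n_relevant_total):
    """Precision and recall as in (12)-(13): N_A(q) is the number of
    retrieved images that are relevant, N_R(q) the number of images
    retrieved, and N_t the total number of relevant images in the
    database (here, the category size)."""
    na = sum(1 for img in retrieved if img in relevant)
    precision = na / len(retrieved)          # N_A(q) / N_R(q)
    recall = na / n_relevant_total           # N_A(q) / N_t
    return precision, recall
```

With the paper's setup (top 20 retrieved, 100 images per category), a perfect result gives precision 1.0 and recall 0.2, which matches the 100%/20% ceiling reported below.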
Fig. 6. Retrieval average precision of the proposed approach with the mutation operator.
C. Influence of the Mutation Operator

In order to verify the influence of the mutation operator, we incorporated it into our approach and evaluated its performance. The conditions and procedures of this experiment are the same as those of the previous one, and we used precision as the performance measure. Fig. 6 shows the retrieval results. Clearly, using the mutation operator requires more generations to achieve good results while providing no significant improvement in retrieval performance. This is because the mutation operator is likely to disrupt good chromosomes (i.e., the user's preferred images) rather than improve them.

D. Comparison With Other Methods

In order to show the superiority of our approach, we compare it with the methods in [12] and [42]. Since the precision finally reaches 100% and the recall reaches 20% with the IGA of the proposed approach, we use only the retrieval results obtained from the low-level visual features in the comparison with the aforementioned methods. The authors in [12] considered the RGB color space and adopted the color distributions, as well as the image bitmap, as visual features. In [42], the YUV color space is used, and the discrete wavelet transform is applied to extract four types of features (i.e., approximations, horizontal details, vertical details, and diagonal details) at each wavelet level. The experimental results are shown in Tables I and II. These comparisons reveal that the proposed approach outperforms the other two methods. The exception is the flowers category, in which the performance of our approach is slightly inferior to that obtained in [42]; this is because their feature sets, derived from the discrete wavelet transform, benefit from its superiority in multiresolution analysis and spatial-frequency localization and thus have more discriminating power for images of this category.

VI. CONCLUSION

This paper has presented a user-oriented framework for an interactive CBIR system.
In contrast to conventional approaches based solely on visual features, our method provides an interactive mechanism to bridge the gap between the visual features and human perception. The color distributions, the mean value, the standard deviation, and the image bitmap are used as the color information of an image; in addition, the GLCM-based entropy and the edge histogram are used as texture and edge descriptors to help characterize the images. In particular, the IGA can serve as a semiautomated exploration tool that, with the user's guidance, navigates a complex universe of images. Experimental results have shown that the proposed approach significantly improves retrieval performance. Further work considering more low-level image descriptors or high-level semantics in the proposed approach is in progress.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers for their comments, which were very helpful in improving the quality and presentation of this paper.

REFERENCES
[1] M. Antonelli, S. G. Dellepiane, and M. Goccia, "Design and implementation of Web-based systems for image segmentation and CBIR," IEEE Trans. Instrum. Meas., vol. 55, no. 6, pp. 1869–1877, Dec. 2006.
[2] N. Jhanwar, S. Chaudhuri, G. Seetharaman, and B. Zavidovique, "Content based image retrieval using motif cooccurrence matrix," Image Vis. Comput., vol. 22, no. 14, pp. 1211–1220, Dec. 2004.
[3] J. Han, K. N. Ngan, M. Li, and H.-J. Zhang, "A memory learning framework for effective image retrieval," IEEE Trans. Image Process., vol. 14, no. 4, pp. 511–524, Apr. 2005.
[4] H. Takagi, S.-B. Cho, and T. Noda, "Evaluation of an IGA-based image retrieval system using wavelet coefficients," in Proc. IEEE Int. Fuzzy Syst. Conf., 1999, vol. 3, pp. 1775–1780.
[5] H. Takagi, "Interactive evolutionary computation: Fusion of the capacities of EC optimization and human evaluation," Proc. IEEE, vol. 89, no. 9, pp. 1275–1296, Sep. 2001.
[6] S.-B. Cho and J.-Y. Lee, "A human-oriented image retrieval system using interactive genetic algorithm," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 32, no. 3, pp. 452–458, May 2002.
[7] Y. Liu, D. Zhang, G. Lu, and W.-Y. Ma, "A survey of content-based image retrieval with high-level semantics," Pattern Recognit., vol. 40, no. 1, pp. 262–282, Jan. 2007.
[8] A. W. M. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain, "Content-based image retrieval at the end of the early years," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 12, pp. 1349–1380, Dec. 2000.
[9] S. Antani, R. Kasturi, and R. Jain, "A survey of the use of pattern recognition methods for abstraction, indexing and retrieval of images and video," Pattern Recognit., vol. 35, no. 4, pp. 945–965, Apr. 2002.
[10] X. S. Zhou and T. S. Huang, "Relevance feedback in content-based image retrieval: Some recent advances," Inf. Sci., vol. 148, no. 1–4, pp. 129–137, Dec. 2002.
[11] H.-W. Yoo, H.-S. Park, and D.-S. Jang, "Expert system for color image retrieval," Expert Syst. Appl., vol. 28, no. 2, pp. 347–357, Feb. 2005.
[12] T.-C. Lu and C.-C. Chang, "Color image retrieval technique based on color features and image bitmap," Inf. Process. Manage., vol. 43, no. 2, pp. 461–472, Mar. 2007.
[13] A. Vadivel, S. Sural, and A. K. Majumdar, "An integrated color and intensity co-occurrence matrix," Pattern Recognit. Lett., vol. 28, no. 8, pp. 974–983, Jun. 2007.
[14] M. H. Pi, C. S. Tong, S. K. Choy, and H. Zhang, "A fast and effective model for wavelet subband histograms and its application in texture image retrieval," IEEE Trans. Image Process., vol. 15, no. 10, pp. 3078–3088, Oct. 2006.
[15] M. Kokare, P. K. Biswas, and B. N. Chatterji, "Texture image retrieval using new rotated complex wavelet filters," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 35, no. 6, pp. 1168–1178, Dec. 2005.
[16] M. Pi and H. Li, "Fractal indexing with the joint statistical properties and its application in texture image retrieval," IET Image Process., vol. 2, no. 4, pp. 218–230, Aug. 2008.
[17] S. Liapis and G. Tziritas, "Color and texture image retrieval using chromaticity histograms and wavelet frames," IEEE Trans. Multimedia, vol. 6, no. 5, pp. 676–686, Oct. 2004.
[18] Y. D. Chun, N. C. Kim, and I. H. Jang, "Content-based image retrieval using multiresolution color and texture features," IEEE Trans. Multimedia, vol. 10, no. 6, pp. 1073–1084, Oct. 2008.
[19] S.-B. Cho, "Towards creative evolutionary systems with interactive genetic algorithm," Appl. Intell., vol. 16, no. 2, pp. 129–138, Mar. 2002.
[20] S.-F. Wang, X.-F. Wang, and J. Xue, "An improved interactive genetic algorithm incorporating relevant feedback," in Proc. 4th Int. Conf. Mach. Learn. Cybern., Guangzhou, China, 2005, pp. 2996–3001.
[21] M. Arevalillo-Herráez, F. H. Ferri, and S. Moreno-Picot, "Distance-based relevance feedback using a hybrid interactive genetic algorithm for image retrieval," Appl. Soft Comput., vol. 11, no. 2, pp. 1782–1791, Mar. 2011, DOI: 10.1016/j.asoc.2010.05.022.
[22] S. Shi, J.-Z. Li, and L. Lin, "Face image retrieval method based on improved IGA and SVM," in Proc. ICIC, vol. 4681, LNCS, D.-S. Huang, L. Heutte, and M. Loog, Eds., 2007, pp. 767–774.
[23] H. Nezamabadi-pour and E. Kabir, "Image retrieval using histograms of uni-color and bi-color and directional changes in intensity gradient," Pattern Recognit. Lett., vol. 25, no. 14, pp. 1547–1557, Oct. 2004.
[24] K. N. Plataniotis and A. N. Venetsanopoulos, Color Image Processing and Applications. Heidelberg, Germany: Springer-Verlag, 2000.
[25] E. J. Delp and O. R. Mitchell, "Image coding using block truncation coding," IEEE Trans. Commun., vol. COM-27, no. 9, pp. 1335–1342, Sep. 1979.
[26] R. M. Haralick and L. G. Shapiro, Computer and Robot Vision: Volume I. Reading, MA: Addison-Wesley, 1992.
[27] T. Sikora, "The MPEG-7 visual standard for content description - An overview," IEEE Trans. Circuits Syst. Video Technol., vol. 11, no. 6, pp. 696–702, Jun. 2001.
[28] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley, 1989.
[29] G. Beligiannis, L. Skarlas, and S. Likothanassis, "A generic applied evolutionary hybrid technique for adaptive system modeling and information mining," IEEE Signal Process. Mag., Special Issue on Signal Processing for Mining Information, vol. 21, no. 3, pp. 28–38, May 2004.
[30] G. N. Beligiannis, L. V. Skarlas, S. D. Likothanassis, and K. G. Perdikouri, "Nonlinear model structure identification of complex biomedical data using a genetic-programming-based technique," IEEE Trans. Instrum. Meas., vol. 54, no. 6, pp. 2184–2190, Nov. 2005.
[31] C.-Y. Chang and D.-R. Chen, "Active noise cancellation without secondary path identification by using an adaptive genetic algorithm," IEEE Trans. Instrum. Meas., vol. 59, no. 9, pp. 2315–2327, Sep. 2010.
[32] S. Osowski, R. Siroic, T. Markiewicz, and K. Siwek, "Application of support vector machine and genetic algorithm for improved blood cell recognition," IEEE Trans. Instrum. Meas., vol. 58, no. 7, pp. 2159–2168, Jul. 2009.
[33] C. Koutsojannis, G. Beligiannis, I. Hatzilygeroudis, C. Papavlasopoulos, and J. Prentzas, "Using a hybrid AI approach for exercise difficulty level adaptation," Int. J. Continuing Eng. Educ. Life-Long Learn., vol. 17, no. 4/5, pp. 256–272, 2007.
[34] G. Beligiannis, I. Hatzilygeroudis, C. Koutsojannis, and J. Prentzas, "A GA driven intelligent system for medical diagnosis," in Proc. KES, vol. 4251. Heidelberg, Germany: Springer-Verlag, 2006, pp. 968–975.
[35] C.-H. Wu, H.-J. Chou, and W.-H. Su, "A genetic approach for coordinate transformation test of GPS positioning," IEEE Geosci. Remote Sens. Lett., vol. 4, no. 2, pp. 297–301, Apr. 2007.
[36] S.-T. Pan, "Design of robust D-stable IIR filters using genetic algorithms with embedded stability criterion," IEEE Trans. Signal Process., vol. 57, no. 8, pp. 3008–3016, Aug. 2009.
[37] G. Paravati, A. Sanna, B. Pralio, and F. Lamberti, "A genetic algorithm for target tracking in FLIR video sequences using intensity variation function," IEEE Trans. Instrum. Meas., vol. 58, no. 10, pp. 3457–3467, Oct. 2009.
[38] S. F. da Silva, M. A. Batista, and C. A. Z. Barcelos, "Adaptive image retrieval through the use of a genetic algorithm," in Proc. 19th IEEE Int. Conf. Tools With Artif. Intell., 2007, pp. 557–564.
[39] Z. Stejić, Y. Takama, and K. Hirota, "Genetic algorithm-based relevance feedback for image retrieval using local similarity patterns," Inf. Process. Manage., vol. 39, no. 1, pp. 1–23, Jan. 2003.
[40] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms. Hoboken, NJ: Wiley, 2001.
[41] J. Z. Wang, J. Li, and G. Wiederhold, "SIMPLIcity: Semantics-sensitive integrated matching for picture libraries," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 9, pp. 947–963, Sep. 2001.
[42] T.-W. Chiang and T.-W. Tsai, "Content-based image retrieval via the multiresolution wavelet features of interest," J. Inf. Technol. Appl., vol. 1, no. 3, pp. 205–214, Dec. 2006.
Chih-Chin Lai (M'09) received the B.S. degree in computer science and information engineering from National Chiao Tung University, Hsinchu, Taiwan, in 1993 and the Ph.D. degree in computer science and information engineering from National Central University, Chungli, Taiwan, in 1999. He is currently an Associate Professor with the Department of Electrical Engineering, National University of Kaohsiung, Kaohsiung, Taiwan. His research interests include visual information computing, evolutionary computation, and information retrieval. Dr. Lai is a member of the IEEE Computational Intelligence, Instrumentation and Measurement, Systems, Man, and Cybernetics, and Signal Processing Societies.
Ying-Chuan Chen received the B.S. degree in computer science and information engineering from Shu-Te University, Kaohsiung, Taiwan, in 2007 and the M.S. degree from the National University of Tainan, Tainan, Taiwan, in 2009. His research interests include artificial intelligence and image processing.