On multiobjective selection for multimodal optimization

Computational Optimization and Applications

Abstract

Multiobjective selection operators are a popular and straightforward tool for preserving diversity in evolutionary optimization algorithms. One application area where diversity is essential is multimodal optimization with its goal of finding a diverse set of either globally or locally optimal solutions of a single-objective problem. We therefore investigate multiobjective selection methods that identify good quality and diverse solutions from a larger set of candidates. Simultaneously, unary quality indicators from multiobjective optimization also turn out to be useful for multimodal optimization. We focus on experimentally detecting the best selection operators and indicators in two different contexts, namely a one-time subset selection and an iterative application in optimization. Experimental results indicate that certain design decisions generally have advantageous tendencies regarding run time and quality. One such positive example is using a concept of nearest better neighbors instead of the common nearest-neighbor distances.


References

  1. Bandaru, S., Deb, K.: A parameterless-niching-assisted bi-objective approach to multimodal optimization. In: IEEE Congress on Evolutionary Computation (CEC), pp. 95–102 (2013)

  2. Basak, A., Das, S., Tan, K.C.: Multimodal optimization using a biobjective differential evolution algorithm enhanced with mean distance-based selection. IEEE Trans. Evol. Comput. 17(5), 666–685 (2013)

  3. Beasley, D., Bull, D.R., Martin, R.R.: A sequential niche technique for multimodal function optimization. Evol. Comput. 1(2), 101–125 (1993)

  4. Bringmann, K., Friedrich, T.: Don’t be greedy when calculating hypervolume contributions. In: Proceedings of the tenth ACM SIGEVO workshop on Foundations of genetic algorithms, FOGA ’09, pp. 103–112. ACM (2009)

  5. Brockhoff, D., Friedrich, T., Hebbinghaus, N., Klein, C., Neumann, F., Zitzler, E.: Do additional objectives make a problem harder? In: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, GECCO ’07, pp. 765–772. ACM (2007)

  6. Bui, L.T., Abbass, H.A., Branke, J.: Multiobjective optimization for dynamic environments. IEEE Congr. Evol. Comput. 3, 2349–2356 (2005)

  7. Coello Coello, C.A., Cruz Cortés, N.: Solving multiobjective optimization problems using an artificial immune system. Genet. Program. Evolvable Mach. 6(2), 163–190 (2005)

  8. Danna, E., Woodruff, D.L.: How to select a small set of diverse solutions to mixed integer programming problems. Oper. Res. Lett. 37(4), 255–260 (2009)

  9. Das, S., Maity, S., Qu, B.Y., Suganthan, P.N.: Real-parameter evolutionary multimodal optimization—a survey of the state-of-the-art. Swarm Evol. Comput. 1(2), 71–88 (2011)

  10. De Jong, K.A.: An analysis of the behavior of a class of genetic adaptive systems. Ph.D. Thesis, University of Michigan (1975)

  11. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6(2), 182–197 (2002)

  12. Deb, K., Saha, A.: Multimodal optimization using a bi-objective evolutionary algorithm. Evol. Comput. 20(1), 27–62 (2012)

  13. Eiben, A.E., Jelasity, M.: A critical note on experimental research methodology in EC. IEEE Congr. Evol. Comput. 1, 582–587 (2002)

  14. Emmerich, M.T.M., Deutz, A.H., Kruisselbrink, J.W.: On quality indicators for black-box level set approximation. In: EVOLVE—A Bridge between Probability, Set Oriented Numerics and Evolutionary Computation. Studies in Computational Intelligence, vol. 447, pp. 157–185. Springer (2013)

  15. Epitropakis, M.G., Plagianakos, V.P., Vrahatis, M.N.: Finding multiple global optima exploiting differential evolution’s niching capability. In: IEEE Symposium on Differential Evolution (SDE) (2011)

  16. Glover, F.: Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 13(5), 533–549 (1986)

  17. Handl, J., Lovell, S.C., Knowles, J.: Multiobjectivization by decomposition of scalar cost functions. In: Rudolph, G., Jansen, T., Lucas, S., Poloni, C., Beume, N. (eds.) Parallel Problem Solving from Nature PPSN X. Lecture Notes in Computer Science, vol. 5199, pp. 31–40. Springer (2008)

  18. de Jong, E.D., Watson, R.A., Pollack, J.B.: Reducing bloat and promoting diversity using multi-objective methods. In: Spector, L. (ed.) Proceedings of the Genetic and Evolutionary Computation Conference, pp. 11–18. Morgan Kaufmann (2001)

  19. Kukkonen, S., Deb, K.: Improved pruning of non-dominated solutions based on crowding distance for bi-objective optimization problems. In: IEEE Congress on Evolutionary Computation, pp. 1179–1186 (2006)

  20. Lehman, J., Stanley, K.O.: Abandoning objectives: evolution through the search for novelty alone. Evol. Comput. 19(2), 189–223 (2011)

  21. Lehman, J., Stanley, K.O., Miikkulainen, R.: Effective diversity maintenance in deceptive domains. In: Proceeding of the Fifteenth Annual Conference on Genetic and Evolutionary Computation Conference, GECCO ’13, pp. 215–222. ACM (2013)

  22. Li, J.P., Balazs, M.E., Parks, G.T., Clarkson, P.J.: A species conserving genetic algorithm for multimodal function optimization. Evol. Comput. 10(3), 207–234 (2002)

  23. Li, X., Engelbrecht, A., Epitropakis, M.G.: Benchmark functions for CEC’2013 special session and competition on niching methods for multimodal function optimization. Tech. rep., RMIT University, Evolutionary Computation and Machine Learning Group, Australia (2013)

  24. Mahfoud, S.W.: Niching methods for genetic algorithms. Ph.D. Thesis, University of Illinois at Urbana-Champaign (1995)

  25. Meinl, T., Ostermann, C., Berthold, M.R.: Maximum-score diversity selection for early drug discovery. J. Chem. Inf. Model. 51(2), 237–247 (2011)

  26. Miettinen, K.: Introduction to multiobjective optimization: noninteractive approaches. In: Branke, J., Deb, K., Miettinen, K., Sowiski, R. (eds.) Multiobjective Optimization. Lecture Notes in Computer Science, vol. 5252, pp. 1–26. Springer (2008)

  27. Mouret, J.B.: Novelty-based multiobjectivization. In: Doncieux, S., Bredche, N., Mouret, J.B. (eds.) New Horizons in Evolutionary Robotics. Studies in Computational Intelligence, vol. 341, pp. 139–154. Springer (2011)

  28. Pétrowski, A.: A clearing procedure as a niching method for genetic algorithms. In: Fukuda, T., Furuhashi, T., Fogel, D.B. (eds.) Proceedings of 1996 IEEE International Conference on Evolutionary Computation (ICEC ’96), pp. 798–803. IEEE Press (1996)

  29. Preuss, M., Lasarczyk, C.: On the importance of information speed in structured populations. In: Parallel Problem Solving from Nature—PPSN VIII, Lecture Notes in Computer Science, vol. 3242, pp. 91–100. Springer (2004)

  30. Preuss, M., Rudolph, G., Tumakaka, F.: Solving multimodal problems via multiobjective techniques with application to phase equilibrium detection. In: IEEE Congress on Evolutionary Computation, pp. 2703–2710 (2007)

  31. Preuss, M., Schönemann, L., Emmerich, M.: Counteracting genetic drift and disruptive recombination in \((\mu +, \lambda )\)-EA on multimodal fitness landscapes. In: Proceedings of the 2005 Conference on Genetic and Evolutionary Computation, GECCO ’05, pp. 865–872. ACM (2005)

  32. Preuss, M., Wessing, S.: Measuring multimodal optimization solution sets with a view to multiobjective techniques. In: EVOLVE – A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation IV. Advances in Intelligent Systems and Computing, vol. 227, pp. 123–137. Springer (2013)

  33. Qu, B.Y., Suganthan, P.N., Liang, J.J.: Differential evolution with neighborhood mutation for multimodal optimization. IEEE Trans. Evol. Comput. 16(5), 601–614 (2012)

  34. Rönkkönen, J., Li, X., Kyrki, V., Lampinen, J.: A generator for multimodal test functions with multiple global optima. In: Simulated Evolution and Learning. Lecture Notes in Computer Science, vol. 5361, pp. 239–248. Springer (2008)

  35. Saha, A., Deb, K.: A bi-criterion approach to multimodal optimization: Self-adaptive approach. In: Simulated Evolution and Learning. Lecture Notes in Computer Science, vol. 6457, pp. 95–104. Springer (2010)

  36. Schütze, O., Esquivel, X., Lara, A., Coello Coello, C.A.: Using the averaged Hausdorff distance as a performance measure in evolutionary multiobjective optimization. IEEE Trans. Evol. Comput. 16(4), 504–522 (2012)

  37. Segredo, E., Segura, C., León, C.: Analysing the robustness of multiobjectivisation parameters with large scale optimisation problems. In: IEEE Congress on Evolutionary Computation (CEC), pp. 1–8 (2012)

  38. Segura, C., Coello Coello, C.A., Miranda, G., León, C.: Using multi-objective evolutionary algorithms for single-objective optimization. 4OR 11(3), 201–228 (2013)

  39. Segura, C., Coello Coello, C.A., Segredo, E., Miranda, G., León, C.: Improving the diversity preservation of multi-objective approaches used for single-objective optimization. In: IEEE Congress on Evolutionary Computation (CEC), pp. 3198–3205 (2013)

  40. Shir, O.M.: Niching in evolution strategies. In: Beyer, H.G. (ed.) GECCO ’05: Proceedings of the 2005 Conference on Genetic and Evolutionary Computation, pp. 865–872. ACM Press, New York (2005)

  41. Solow, A.R., Polasky, S.: Measuring biological diversity. Environ. Ecol. Stat. 1(2), 95–103 (1994)

  42. Stoean, C., Preuss, M., Stoean, R., Dumitrescu, D.: Multimodal optimization by means of a topological species conservation algorithm. IEEE Trans. Evol. Comput. 14(6), 842–864 (2010)

  43. Thomsen, R.: Multimodal optimization using crowding-based differential evolution. IEEE Congr. Evol. Comput. 2, 1382–1389 (2004)

  44. Toffolo, A., Benini, E.: Genetic diversity as an objective in multi-objective evolutionary algorithms. Evol. Comput. 11(2), 151–167 (2003)

  45. Törn, A., Ali, M.M., Viitanen, S.: Stochastic global optimization: problem classes and solution techniques. J. Glob. Optim. 14(4), 437–447 (1999)

  46. Tran, T.D., Brockhoff, D., Derbel, B.: Multiobjectivization with NSGA-II on the noiseless BBOB testbed. In: Proceeding of the Fifteenth Annual Conference Companion on Genetic and Evolutionary Computation Conference Companion, GECCO ’13 Companion, pp. 1217–1224. ACM (2013)

  47. Ulrich, T., Bader, J., Thiele, L.: Defining and optimizing indicator-based diversity measures in multiobjective search. In: Parallel Problem Solving from Nature, PPSN XI. Lecture Notes in Computer Science, vol. 6238, pp. 707–717. Springer (2010)

  48. Ulrich, T., Thiele, L.: Maximizing population diversity in single-objective optimization. In: Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, GECCO ’11, pp. 641–648. ACM (2011)

  49. Ursem, R.K.: Multinational evolutionary algorithms. In: Angeline, P.J. (ed.) Proceedings of the Congress of Evolutionary Computation (CEC 99), vol. 3, pp. 1633–1640. IEEE Press (1999)

  50. Wessing, S.: Repair methods for box constraints revisited. In: Esparcia-Alcázar, A.I. (ed.) Applications of Evolutionary Computation. Lecture Notes in Computer Science, vol. 7835, pp. 469–478. Springer (2013)

  51. Wessing, S., Preuss, M., Rudolph, G.: Niching by multiobjectivization with neighbor information: trade-offs and benefits. In: IEEE Congress on Evolutionary Computation (CEC), pp. 103–110 (2013)

  52. Zitzler, E., Knowles, J., Thiele, L.: Quality assessment of Pareto set approximations. In: Multiobjective Optimization. Lecture Notes in Computer Science, vol. 5252, pp. 373–404. Springer (2008)

Author information

Correspondence to Simon Wessing.

Appendix: Quality indicators

Throughout this section, \({\mathcal {P}} = \{{\varvec{x}}_1, \ldots , {\varvec{x}}_\mu \}\), \(\mu < \infty \), denotes the approximation set to be assessed.

1.1 Indicators without problem knowledge

Solow–Polasky diversity (SPD) Solow and Polasky [41] developed an indicator to measure a population’s biological diversity and showed that it has superior theoretical properties compared to SD (defined below) and other indicators. Ulrich et al. [47] discovered its applicability to multiobjective optimization. They also verified the inferiority of SD experimentally by directly optimizing the indicator values. To compute this indicator for \({\mathcal {P}}\), it is necessary to build a \(\mu \times \mu \) correlation matrix \({\mathbf {C}}\) with entries \(c_{ij} = \exp (-\theta d({\varvec{x}}_i, {\varvec{x}}_j))\). The indicator value is then the scalar \({\text {SPD}}({\mathcal {P}}) := {\varvec{e}}^\top {\mathbf {C}}^{-1}{\varvec{e}}\), where \({\varvec{e}} = (1, \ldots , 1)^\top \). As the matrix inversion requires time \(O(\mu ^3)\), the indicator is only applicable to relatively small sets. It also requires a user-defined parameter \(\theta \), which depends on the size of the search space. We chose \(\theta = 1/n\) throughout this paper.
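
To make the computation concrete, here is a minimal numpy/scipy sketch under these definitions; the helper name spd, the row-wise solution matrix, and the choice of Euclidean distance for d are our assumptions, and a linear solve stands in for the explicit matrix inversion.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    def spd(P, theta):
        """Solow-Polasky diversity of the rows of P (hypothetical helper)."""
        D = squareform(pdist(P))           # pairwise Euclidean distances d(x_i, x_j)
        C = np.exp(-theta * D)             # correlation matrix with c_ij = exp(-theta * d)
        e = np.ones(len(P))
        # e^T C^{-1} e, computed via a linear solve instead of inverting C
        return float(e @ np.linalg.solve(C, e))

With the parameter choice above, spd(P, 1.0 / P.shape[1]) corresponds to \(\theta = 1/n\).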

Sum of distances (SD) The sum of distances \({\text {SD}}({\mathcal {P}}) := \sqrt{\sum _{i=1}^\mu \sum _{j=i+1}^\mu d({\varvec{x}}_i, {\varvec{x}}_j)}\) is criticized by [25, 41, 47] as inappropriate as a diversity measure, because it only rewards the spread of a population, not its diversity. This indicator should therefore not be used. If it is used nonetheless, we suggest taking the square root of the sum to obtain indicator values of reasonable magnitude.

Sum of distances to nearest neighbor (SDNN) As [47] showed that SD has some severe deficiencies, we also consider the sum of distances to the nearest neighbor \({\text {SDNN}}({\mathcal {P}}) := \sum _{i=1}^{\mu } d_{{\mathrm {nn}}}({\varvec{x}}_i, {\mathcal {P}})\). In contrast to SD, SDNN penalizes the clustering of solutions, because only the nearest neighbor is considered. Emmerich et al. [14] mention the arithmetic mean gap \(\frac{1}{\mu }{\text {SDNN}}({\mathcal {P}})\) and two other similar variants. We avoid the averaging here to reward larger sets. However, it is still possible to construct situations where adding a new point to the set decreases the indicator value.
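
Both distance-based indicators can be sketched in a few lines of numpy/scipy (the helper names sd and sdnn are ours; Euclidean distance is again assumed):

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    def sd(P):
        """Square root of the sum of all pairwise distances."""
        return float(np.sqrt(pdist(P).sum()))

    def sdnn(P):
        """Sum of distances to the nearest neighbor within P."""
        D = squareform(pdist(P))
        np.fill_diagonal(D, np.inf)        # ignore self-distances
        return float(D.min(axis=1).sum())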

Average objective value (AOV) The sample mean of objective values is \({\text {AOV}}({\mathcal {P}}) := \frac{1}{\mu } \sum _{i=1}^\mu f({\varvec{x}}_i)\).

1.2 Indicators requiring knowledge of the optima

Peak ratio (PR) Ursem [49] defined the peak ratio \({\text {PR}}({\mathcal {P}}) := \ell /m\) as the number of found optima \(\ell = |\{{\varvec{z}} \in {\mathcal {Q}} \mid d_{{\mathrm {nn}}}({\varvec{z}}, {\mathcal {P}}) \le \epsilon \}|\) divided by the total number of optima. The indicator requires a user-defined constant \(\epsilon \) to decide whether an optimum has been approximated sufficiently well.
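
A sketch of PR under the definition above, assuming the reference optima are stored row-wise in a matrix Q and using a k-d tree for the nearest-neighbor queries (peak_ratio is a hypothetical name):

    import numpy as np
    from scipy.spatial import cKDTree

    def peak_ratio(P, Q, eps):
        """Fraction of optima in Q with a candidate in P within distance eps."""
        d_nn, _ = cKDTree(P).query(Q)      # distance from each optimum to its nearest candidate
        return np.count_nonzero(d_nn <= eps) / len(Q)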

Peak distance (PD) This indicator calculates the average distance \({\text {PD}}({\mathcal {P}}) := \frac{1}{m}\sum _{i=1}^{m} d_{{\mathrm {nn}}}({\varvec{z}}_i, {\mathcal {P}})\) from each member of the reference set \({\mathcal {Q}}\) to the nearest individual in \({\mathcal {P}}\). A first version of this indicator (without the averaging) was presented by [42] as “distance accuracy”. With the \(1/m\) factor, peak distance is analogous to the inverted generational distance indicator [7], which is computed in the objective space of multiobjective problems.

Peak inaccuracy (PI) Thomsen [43] proposed the basic variant of the indicator \({\text {PI}}({\mathcal {P}}) := \frac{1}{m}\sum _{i=1}^{m} |f({\varvec{z}}_i) - f({\text {nn}}({\varvec{z}}_i, {\mathcal {P}}))|\) under the name “peak accuracy”. To be consistent with PR and PD, we also add the \(1/m\) factor here. We furthermore relabel it peak inaccuracy, because speaking of accuracy is misleading for an indicator that must be minimized.
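
PD and PI share the same nearest-neighbor query from the reference set to the population, so a single sketch covers both; f is assumed to be a Python callable for the objective function, and the helper names are ours:

    import numpy as np
    from scipy.spatial import cKDTree

    def peak_distance(P, Q):
        """Mean distance from each optimum in Q to its nearest candidate in P."""
        d_nn, _ = cKDTree(P).query(Q)
        return float(d_nn.mean())

    def peak_inaccuracy(P, Q, f):
        """Mean |f(z_i) - f(nn(z_i, P))| over all optima z_i."""
        _, idx = cKDTree(P).query(Q)       # index of the nearest candidate per optimum
        return float(np.mean([abs(f(z) - f(P[j])) for z, j in zip(Q, idx)]))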

Averaged Hausdorff distance (AHD) This indicator can be seen as an extension of PD due to its relation to the inverted generational distance. It was defined by [36] as

$$\begin{aligned} \Delta _p({\mathcal {P}},{\mathcal {Q}})&= \max \left\{ \left( \textstyle \frac{1}{m}\sum _{i=1}^{m} d_{{\mathrm {nn}}}({\varvec{z}}_i, {\mathcal {P}})^p\right) ^{1/p}, \left( \textstyle \frac{1}{\mu }\sum _{i=1}^{\mu } d_{{\mathrm {nn}}}({\varvec{x}}_i, {\mathcal {Q}})^p\right) ^{1/p} \right\} \!. \end{aligned}$$

The definition contains a parameter \(p\) that controls the influence of outliers on the indicator value (the higher \(p\), the larger the influence). For \(1 \le p < \infty \), AHD has the property of being a semi-metric [36]. We chose \(p = 1\) throughout this paper, analogously to [14]. In practice, the indicator rewards the approximation of the optima (as PD does), but also penalizes any unnecessary points in remote locations.
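
For \(p = 1\), the two inner terms reduce to mean nearest-neighbor distances in either direction, so the sketch is a near-direct transcription of the formula (ahd is our name):

    import numpy as np
    from scipy.spatial import cKDTree

    def ahd(P, Q, p=1):
        """Averaged Hausdorff distance between population P and reference set Q."""
        d_qp, _ = cKDTree(P).query(Q)      # optima -> nearest candidate (IGD direction)
        d_pq, _ = cKDTree(Q).query(P)      # candidates -> nearest optimum (GD direction)
        igd = np.mean(d_qp ** p) ** (1.0 / p)
        gd = np.mean(d_pq ** p) ** (1.0 / p)
        return float(max(igd, gd))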

1.3 Indicators requiring knowledge of the basins

For the implementation of indicators in this section, we assume the existence of a function

$$\begin{aligned} b({\varvec{x}}, {\varvec{z}}) = {\left\{ \begin{array}{ll} 1 &{} {\text {if }}{\varvec{x}} \in {\text {basin}}({\varvec{z}}),\\ 0 &{} {\text {else}}. \end{array}\right. } \end{aligned}$$

Basin ratio (BR) The number of covered basins is calculated as

$$\begin{aligned} \textstyle \ell = \sum _{i=1}^m \min \{1, \sum _{j=1}^\mu b({\varvec{x}}_j, {\varvec{z}}_i)\}\,. \end{aligned}$$

The basin ratio is then \({\text {BR}}({\mathcal {P}}) := \ell /m\), analogous to PR. This indicator can only assume \(m+1\) distinct values, and in low dimensions it should be quite easy to obtain a perfect score by simple random sampling of the search space. The indicator is especially sensible when not all of the existing optima are relevant: its use can then be justified by the common assumption in global optimization that the actual optima can be found relatively easily with a hill climber, once a start point is available in each respective basin [45].
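
Given the basin-membership function b from above as a user-supplied Python callable, BR reduces to a short counting loop (basin_ratio is a hypothetical name):

    def basin_ratio(P, Q, b):
        """Fraction of basins (one per optimum in Q) containing at least one candidate from P."""
        covered = sum(1 for z in Q if any(b(x, z) for x in P))
        return covered / len(Q)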

Basin inaccuracy (BI) This combination of BR and PI was proposed by [32]. It is defined as

$$\begin{aligned} {\text {BI}}({\mathcal {P}}) := \frac{1}{m}\sum _{i=1}^{m} {\left\{ \begin{array}{ll} \min \left\{ |f({\varvec{z}}_i) - f({\varvec{x}})| \mid {\varvec{x}} \in {\mathcal {P}} \wedge b({\varvec{x}}, {\varvec{z}}_i) = 1 \right\} &{} {\text {if }} \exists {\varvec{x}} \in {\mathcal {P}}: {\varvec{x}} \in {\text {basin}}({\varvec{z}}_i)\,, \\ f_{\max } &{} {\text {else}}, \end{array}\right. } \end{aligned}$$

where \(f_{\max }\) denotes the difference between the global optimum and the worst possible objective value. For each optimum, the indicator calculates the minimal difference in objective values between the optimum and all solutions located in its basin. If no solution is present in the basin, a penalty value is assumed instead. Finally, all values are averaged. The rationale behind this indicator is to enforce a good basin coverage while simultaneously measuring the deviation of the objective values \(f({\varvec{x}})\) from \(f({\varvec{z}}_i)\).
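
BI follows the same pattern, with the penalty f_max applied to uncovered basins; f, b, and f_max require problem knowledge and are assumed to be supplied by the user (basin_inaccuracy is our name):

    def basin_inaccuracy(P, Q, f, b, f_max):
        """Mean objective-value gap per basin, with penalty f_max for empty basins."""
        total = 0.0
        for z in Q:
            in_basin = [x for x in P if b(x, z)]        # candidates in the basin of z
            if in_basin:
                total += min(abs(f(z) - f(x)) for x in in_basin)
            else:
                total += f_max                          # basin not covered
        return total / len(Q)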


Cite this article

Wessing, S., Preuss, M. On multiobjective selection for multimodal optimization. Comput Optim Appl 63, 875–902 (2016). https://doi.org/10.1007/s10589-015-9785-x
