Abstract
This article addresses the problem of derivative-free (single- or multi-objective) optimization subject to multiple inequality constraints. Both the objective and constraint functions are assumed to be smooth, non-linear and expensive to evaluate. As a consequence, the number of evaluations that can be used to carry out the optimization is very limited, as in complex industrial design optimization problems. The method we propose to overcome this difficulty has its roots in both the Bayesian and the multi-objective optimization literatures. More specifically, an extended domination rule is used to handle objectives and constraints in a unified way, and a corresponding expected hyper-volume improvement sampling criterion is proposed. This new criterion is naturally adapted to the search of a feasible point when none is available, and reduces to existing Bayesian sampling criteria—the classical Expected Improvement (EI) criterion and some of its constrained/multi-objective extensions—as soon as at least one feasible point is available. The calculation and optimization of the criterion are performed using Sequential Monte Carlo techniques. In particular, an algorithm similar to the subset simulation method, which is well known in the field of structural reliability, is used to estimate the criterion. The method, which we call BMOO (for Bayesian Multi-Objective Optimization), is compared to state-of-the-art algorithms for single- and multi-objective constrained optimization.
Notes
Mockus [52, Section 2.5] heuristically introduces a modification of (3) to compensate for the fact that subsequent evaluation results are not taken into account in the myopic strategy, and thus to enforce a more global exploration of the search domain. In this work, we consider a purely myopic strategy, as in Jones et al. [42].
This is the most common modeling assumption in the Bayesian optimization literature, when several objective functions, and possibly also several constraint functions, have to be dealt with. See the VIPER algorithm of Williams et al. [76] for an example of an algorithm based on correlated Gaussian processes.
Note that this modified EHVI criterion remains well defined even when \(H_n = \emptyset \), owing to the introduction of an upper bound \(y^{\mathrm{upp}}\) in the definition of \({\mathbb {B}}\). Its single-objective counterpart introduced earlier [see Eq. (15)], however, was only well defined under the assumption that at least one feasible point is known. Introducing an upper bound \(y^{\mathrm{upp}}\) is of course also possible in the single-objective case.
The same remark holds for the variant (see, e.g., Gelbart et al. [29]) which consists in using the probability of feasibility as a sampling criterion when no feasible point is available. This is indeed equivalent to using the loss function \(\varepsilon _n(\underline{X}, f) = - {\mathbbm {1}}_{\exists i \le n, X_i \in C}\) in the search for feasible points.
Equation (30) does not hold exactly for \(A = G_{n + 1}\) since, conditionally on \(X_1\), \(\xi (X_1)\), ..., \(X_{n}\), \(\xi (X_{n})\), the set \(G_{n+1}\) is a random set that is not independent of \({\mathcal {Y}}_n\). Indeed, \(G_{n+1}\) depends on \(\xi (X_{n+1})\), and \(X_{n + 1}\) is chosen by maximization of the approximate expected improvement, which in turn is computed using \({\mathcal {Y}}_n\).
Optimization toolbox v7.1, MATLAB R2014b.
This volume was obtained using extensive runs of Matlab's gamultiobj algorithm; it might be slightly underestimated.
An implementation of the EMMI criterion is available in the STK. An implementation of the WCPI sampling criterion for bi-objective problems is distributed alongside Forrester et al.'s book [27].
References
Andrieu, C., Roberts, G.O.: The pseudo-marginal approach for efficient Monte Carlo computations. Ann. Stat. 37(2), 697–725 (2009)
Andrieu, C., Thoms, J.: A tutorial on adaptive MCMC. Stat. Comput. 18(4), 343–373 (2008)
Archetti, F., Betrò, B.: A probabilistic algorithm for global optimization. CALCOLO 16(3), 335–343 (1979)
Au, S.-K., Beck, J.L.: Estimation of small failure probabilities in high dimensions by subset simulation. Probab. Eng. Mech. 16(4), 263–277 (2001)
Bader, J., Zitzler, E.: HypE: an algorithm for fast hypervolume-based many-objective optimization. Evolut. Comput. 19(1), 45–76 (2011)
Bautista, D.C.: A Sequential Design for Approximating the Pareto Front Using the Expected Pareto Improvement Function. PhD thesis, The Ohio State University (2009)
Bect, J., Ginsbourger, D., Li, L., Picheny, V., Vazquez, E.: Sequential design of computer experiments for the estimation of a probability of failure. Stat. Comput. 22(3), 773–793 (2012)
Bect, J., Vazquez, E., et al.: STK: a Small (Matlab/Octave) Toolbox for Kriging. Release 2.4 (to appear) (2016). URL http://kriging.sourceforge.net
Benassi, R.: Nouvel Algorithme d’optimisation Bayésien Utilisant une Approche Monte-Carlo séquentielle. PhD thesis, Supélec (2013)
Benassi, R., Bect, J., Vazquez, E.: Bayesian optimization using sequential Monte Carlo. In: Learning and Intelligent Optimization. 6th International Conference, LION 6, Paris, France, 16–20 January 2012, Revised Selected Papers, volume 7219 of Lecture Notes in Computer Science, pp. 339–342. Springer (2012)
Beume, N.: S-metric calculation by considering dominated hypervolume as Klee's measure problem. Evolut. Comput. 17(4), 477–492 (2009)
Binois, M., Picheny, V.: GPareto: Gaussian Processes for Pareto Front Estimation and Optimization, 2015. URL http://CRAN.R-project.org/package=GPareto. R package version 1.0.1
Box, G.E.P., Cox, D.R.: An analysis of transformations. J. Roy. Stat. Soc. Series B (Methodological) 26(2), 211–252 (1964)
Cérou, F., Del Moral, P., Furon, T., Guyader, A.: Sequential Monte Carlo for rare event estimation. Stat. Comput. 22(3), 795–808 (2012)
Chafekar, D., Xuan, J., Rasheed, K.: Constrained multi-objective optimization using steady state genetic algorithms. In: Genetic and Evolutionary Computation-GECCO 2003, pp. 813–824. Springer (2003)
Chevalier, C., Bect, J., Ginsbourger, D., Vazquez, E., Picheny, V., Richet, Y.: Fast parallel kriging-based stepwise uncertainty reduction with application to the identification of an excursion set. Technometrics 56(4), 455–465 (2014)
Conn, A.R., Gould, N.I.M., Toint, P.: A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds. SIAM J. Numer. Anal. 28(2), 545–572 (1991)
Couckuyt, I., Deschrijver, D., Dhaene, T.: Fast calculation of multiobjective probability of improvement and expected improvement criteria for Pareto optimization. J. Glob. Optim. 60(3), 575–594 (2014)
Damianou, A., Lawrence, N.: Deep Gaussian processes. In: Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, pp. 207–215 (2013)
Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evolut. Comput. 6(2), 182–197 (2002)
Del Moral, P., Doucet, A., Jasra, A.: Sequential Monte Carlo samplers. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 68(3), 411–436 (2006)
Douc, R., Cappé, O.: Comparison of resampling schemes for particle filtering. In: Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis (ISPA 2005), pp. 64–69. IEEE (2005)
Emmerich, M.: Single- and Multi-Objective Evolutionary Design Optimization Assisted by Gaussian Random Field Metamodels. PhD thesis, Technical University Dortmund (2005)
Emmerich, M., Klinkenberg, J.W.: The Computation of the Expected Improvement in Dominated Hypervolume of Pareto Front Approximations, Technical report. Leiden University (2008)
Emmerich, M., Giannakoglou, K.C., Naujoks, B.: Single- and multi-objective evolutionary optimization assisted by Gaussian random field metamodels. IEEE Trans. Evolut. Comput. 10(4), 421–439 (2006)
Fonseca, C.M., Fleming, P.J.: Multiobjective optimization and multiple constraint handling with evolutionary algorithms. I. A unified formulation. IEEE Trans. Syst., Man Cybern. Part A Syst. Hum. 28(1), 26–37 (1998)
Forrester, A.I.J., Sobester, A., Keane, A.J.: Engineering Design via Surrogate Modelling: a Practical Guide. Wiley, Chichester (2008)
Gelbart, M.A.: Constrained Bayesian Optimization and Applications. PhD thesis, Harvard University, Graduate School of Arts and Sciences (2015)
Gelbart, M.A., Snoek, J., Adams, R.P.: Bayesian optimization with unknown constraints. arXiv preprint arXiv:1403.5607 (2014)
Ginsbourger, D., Le Riche, R.: Towards Gaussian process-based optimization with finite time horizon. In: Invited talk at the 6th Autumn Symposium of the “Statistical Modelling” Research Training Group, 21 November (2009)
Gramacy, R.B., Lee, H.: Optimization under unknown constraints. In: Bayesian Statistics 9. Proceedings of the Ninth Valencia International Meeting, pp. 229–256. Oxford University Press (2011)
Gramacy, R.B., Gray, G.A., Le Digabel, S., Lee, H.K.H., Ranjan, P., Wells, G., Wild, S.M.: Modeling an augmented Lagrangian for blackbox constrained optimization. Technometrics (2015). arXiv preprint arXiv:1403.4890
Hernández-Lobato, D., Hernández-Lobato, J.M., Shah, A., Adams, R.P.: Predictive entropy search for multi-objective Bayesian optimization. arXiv preprint arXiv:1511.05467 (2015a)
Hernández-Lobato, J.M., Gelbart, M.A., Hoffman, M.W., Adams, R.P., Ghahramani, Z.: Predictive entropy search for Bayesian optimization with unknown constraints. In: Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 2015. JMLR: W&CP volume 37 (2015b)
Hernández-Lobato, J.M., Gelbart, M.A., Adams, R.P., Hoffman, M.W., Ghahramani, Z.: A general framework for constrained Bayesian optimization using information-based search. arXiv preprint arXiv:1511.09422 (2015)
Horn, D., Wagner, T., Biermann, D., Weihs, C., Bischl, B.: Model-based multi-objective optimization: taxonomy, multi-point proposal, toolbox and benchmark. In: Evolutionary Multi-Criterion Optimization, pp. 64–78. Springer (2015)
Hupkens, I., Emmerich, M., Deutz, A.: Faster computation of expected hypervolume improvement. arXiv preprint arXiv:1408.7114 (2014)
Jeong, S., Obayashi, S.: Efficient global optimization (EGO) for multi-objective problem and data mining. In: The 2005 IEEE Congress on Evolutionary Computation, vol. 3, pp. 2138–2145. IEEE (2005)
Jeong, S., Minemura, Y., Obayashi, S.: Optimization of combustion chamber for diesel engine using kriging model. J. Fluid Sci. Technol. 1(2), 138–146 (2006)
Jin, Y.: Surrogate-assisted evolutionary computation: recent advances and future challenges. Swarm Evolut. Comput. 1(2), 61–70 (2011)
Johnson, S.G.: The NLopt nonlinear-optimization package (version 2.3) (2012). URL http://ab-initio.mit.edu/nlopt
Jones, D.R., Schonlau, M., Welch, W.J.: Efficient global optimization of expensive black-box functions. J. Glob. Optim. 13(4), 455–492 (1998)
Keane, A.J.: Statistical improvement criteria for use in multiobjective design optimization. AIAA J. 44(4), 879–891 (2006)
Knowles, J.: ParEGO: a hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems. IEEE Trans. Evolut. Comput. 10(1), 50–66 (2006)
Knowles, J., Hughes, E.J.: Multiobjective optimization on a budget of 250 evaluations. In: Evolutionary Multi-Criterion Optimization, pp. 176–190. Springer (2005)
Kushner, H.J.: A new method of locating the maximum point of an arbitrary multipeak curve in the presence of noise. J. Fluids Eng. 86(1), 97–106 (1964)
Li, L.: Sequential Design of Experiments to Estimate a Probability of Failure. PhD thesis, Supélec (2012)
Li, L., Bect, J., Vazquez, E.: Bayesian subset simulation: a kriging-based subset simulation algorithm for the estimation of small probabilities of failure. In: Proceedings of PSAM 11 & ESREL 2012, 25–29 June 2012, Helsinki. IAPSAM (2012)
Liu, J.S.: Monte Carlo Strategies in Scientific Computing. Springer, New York (2001)
Loeppky, J.L., Sacks, J., Welch, W.J.: Choosing the sample size of a computer experiment: a practical guide. Technometrics 51(4), 366–376 (2009)
Mockus, J.: On Bayesian methods of optimization. In: Towards Global Optimization, pp. 166–181. North-Holland (1975)
Mockus, J.: Bayesian Approach to Global Optimization: Theory and Applications, vol. 37. Kluwer, Dordrecht (1989)
Mockus, J., Tiesis, V., Žilinskas, A.: The application of Bayesian methods for seeking the extremum. In: Dixon, L.C.W., Szegö, G.P. (eds.) Towards Global Optimization, vol. 2, pp. 117–129. North-Holland, New York (1978)
Oyama, A., Shimoyama, K., Fujii, K.: New constraint-handling method for multi-objective and multi-constraint evolutionary optimization. Trans. Jpn. Soc. Aeronaut. Space Sci. 50(167), 56–62 (2007)
Parr, J.M., Keane, A.J., Forrester, A.I.J., Holden, C.M.E.: Infill sampling criteria for surrogate-based optimization with constraint handling. Eng. Optim. 44(10), 1147–1166 (2012)
Picheny, V.: A stepwise uncertainty reduction approach to constrained global optimization. In: Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS), 2014, Reykjavik, Iceland, vol. 33, pp. 787–795. JMLR: W&CP (2014a)
Picheny, V.: Multiobjective optimization using Gaussian process emulators via stepwise uncertainty reduction. Stat. Comput. (2014b). doi:10.1007/s11222-014-9477-x
Ponweiser, W., Wagner, T., Biermann, D., Vincze, M.: Multiobjective optimization on a limited budget of evaluations using model-assisted \({\cal S}\)-metric selection. In: Parallel Problem Solving from Nature (PPSN X), vol. 5199 of Lecture Notes in Computer Science, pp. 784–794. Springer (2008)
Powell, M.J.D.: A direct search optimization method that models the objective and constraint functions by linear interpolation. In: Advances in Optimization and Numerical Analysis, pp. 51–67. Springer (1994)
Ray, T., Tai, K., Seow, K.C.: Multiobjective design optimization by an evolutionary algorithm. Eng. Optim. 33(4), 399–424 (2001)
Regis, R.G.: Constrained optimization by radial basis function interpolation for high-dimensional expensive black-box problems with infeasible initial points. Eng. Optim. 46(2), 218–243 (2014)
Robert, C., Casella, G.: Monte Carlo Statistical Methods, 2nd edn. Springer, New York (2004)
Roberts, G.O., Rosenthal, J.S.: Examples of adaptive MCMC. J. Comput. Graph. Stat. 18(2), 349–367 (2009)
Santner, T.J., Williams, B.J., Notz, W.: The Design and Analysis of Computer Experiments. Springer, New York (2003)
Sasena, M.J.: Flexibility and Efficiency Enhancements for Constrained Global Design Optimization with Kriging Approximations. PhD thesis, University of Michigan (2002)
Sasena, M.J., Papalambros, P., Goovaerts, P.: Exploration of metamodeling sampling criteria for constrained global optimization. Eng. Optim. 34(3), 263–278 (2002)
Schonlau, M., Welch, W.J., Jones, D.R.: Global versus local search in constrained optimization of computer models. In: New Developments and Applications in Experimental Design: Selected Proceedings of a 1997 Joint AMS-IMS-SIAM Summer Conference, vol. 34 of IMS Lecture Notes-Monographs Series, pp. 11–25. Institute of Mathematical Statistics (1998)
Shimoyama, K., Sato, K., Jeong, S., Obayashi, S.: Updating kriging surrogate models based on the hypervolume indicator in multi-objective optimization. J. Mech. Des. 135(9), 094503 (2013)
Snelson, E., Rasmussen, C.E., Ghahramani, Z.: Warped Gaussian processes. Adv. Neural Inf. Process. Syst. 16, 337–344 (2004)
Stein, M.L.: Interpolation of Spatial Data: Some Theory for Kriging. Springer, New York (1999)
Svenson, J.D., Santner, T.J.: Multiobjective optimization of expensive black-box functions via expected maximin improvement. Technical report, The Ohio State University, Columbus, OH (2010)
Toal, D.J.J., Keane, A.J.: Non-stationary kriging for design optimization. Eng. Optim. 44(6), 741–765 (2012)
Vazquez, E., Bect, J.: A new integral loss function for Bayesian optimization. arXiv preprint arXiv:1408.4622 (2014)
Villemonteix, J., Vazquez, E., Walter, E.: An informational approach to the global optimization of expensive-to-evaluate functions. J. Glob. Optim. 44(4), 509–534 (2009)
Wagner, T., Emmerich, M., Deutz, A., Ponweiser, W.: On expected-improvement criteria for model-based multi-objective optimization. In: Parallel Problem Solving from Nature, PPSN XI. 11th International Conference, Kraków, Poland, 11–15 September 2010, Proceedings, Part I, vol. 6238 of Lecture Notes in Computer Science, pp. 718–727. Springer (2010)
Williams, B.J., Santner, T.J., Notz, W.I., Lehman, J.S.: Sequential design of computer experiments for constrained optimization. In: Kneib, T., Tutz, G. (eds.) Statistical Modelling and Regression Structures, pp. 449–472. Physica-Verlag, HD (2010)
Williams, C.K.I., Rasmussen, C.: Gaussian Processes for Machine Learning. The MIT Press, Cambridge (2006)
Zhang, Q., Liu, W., Tsang, E., Virginas, B.: Expensive multiobjective optimization by MOEA/D with Gaussian process model. IEEE Trans. Evolut. Comput. 14(3), 456–474 (2010)
Zitzler, E., Laumanns, M., Thiele, L.: SPEA2: improving the strength Pareto evolutionary algorithm for multiobjective optimization. In: Giannakoglou, K.C., et al. (eds.) Evolutionary Methods for Design, Optimisation and Control with Application to Industrial Problems (EUROGEN 2001), pp. 95–100. International Center for Numerical Methods in Engineering (CIMNE) (2002)
Acknowledgments
This research work has been carried out within the Technological Research Institute SystemX, using public funds from the French Programme Investissements d’Avenir.
Appendices
Appendix 1: On the bounded hyper-rectangles \({\mathbb {B}}_{\mathrm{o}}\) and \({{\mathbb {B}}}_{\mathrm{c}}\)
We have assumed in Sect. 3 that \({\mathbb {B}}_{\mathrm{o}}\) and \({{\mathbb {B}}}_{\mathrm{c}}\) are bounded hyper-rectangles; that is, sets of the form
$$\begin{aligned} {\mathbb {B}}_{\mathrm{o}} = \prod _{i=1}^{p} \left[ y^{\mathrm{low}}_{\mathrm{o},i},\, y^{\mathrm{upp}}_{\mathrm{o},i} \right] \quad \text {and} \quad {{\mathbb {B}}}_{\mathrm{c}} = \prod _{j=1}^{q} \left[ y^{\mathrm{low}}_{\mathrm{c},j},\, y^{\mathrm{upp}}_{\mathrm{c},j} \right] , \end{aligned}$$
for some \(y^{\mathrm{low}}_{\mathrm{o}}\), \(y^{\mathrm{upp}}_{\mathrm{o}} \in {\mathbb {Y}}_{\mathrm{o}}\) and \(y^{\mathrm{low}}_\mathrm{c}\), \(y^{\mathrm{upp}}_\mathrm{c}\in {\mathbb {Y}}_{\mathrm{c}}\), with the additional assumption that \(y^{\mathrm{low}}_{\mathrm{c}, j}< 0 < y^{\mathrm{upp}}_{\mathrm{c}, j}\) for all \(j \le q\). Recall that only upper bounds were required in the unconstrained case discussed in Sect. 2.2. To shed some light on the role of these lower and upper bounds, let us now compute the improvement \(I_1(X_1) = \left| H_1 \right| \) brought by a single evaluation.
If \(X_1\) is not feasible, then
$$\begin{aligned} \left| H_1 \right| = \left| {\mathbb {B}}_{\mathrm{o}} \right| \, \prod _{j=1}^{q} \left( y^{\mathrm{upp}}_{\mathrm{c},j} - \gamma _j\, y^{\mathrm{low}}_{\mathrm{c},j} - (1 - \gamma _j)\, \xi _{\mathrm{c},j}(X_1) \right) , \end{aligned}$$
(34)
where \(\gamma _j = {\mathbbm {1}}_{\xi _{\mathrm{c},j}(X_1) \le 0}\). It is clear from the right-hand side of (34) that both \({\mathbb {B}}_{\mathrm{o}}\) and \({{\mathbb {B}}}_{\mathrm{c}}\) have to be bounded if we want \(\left| H_1 \right| < +\infty \) for any \(\gamma = \left( \gamma _1, \ldots , \gamma _q \right) \in \{ 0, 1 \}^q\). Note, however, that only the volume of \({\mathbb {B}}_{\mathrm{o}}\) actually matters in this expression, not the actual values of \(y^{\mathrm{low}}_\mathrm{o}\) and \(y^{\mathrm{upp}}_\mathrm{o}\). Equation (34) also reveals that the improvement is a discontinuous function of the observations: indeed, the jth term in the product jumps from \(y^{\mathrm{upp}}_{\mathrm{c},j}\) to \(y^{\mathrm{upp}}_{\mathrm{c},j} - y^{\mathrm{low}}_{\mathrm{c},j} > y^{\mathrm{upp}}_{\mathrm{c},j}\) when \(\xi _{\mathrm{c},j}(X_1)\) goes from \(0^+\) to 0. The increment \(- y^{\mathrm{low}}_{\mathrm{c},j}\) can be thought of as a reward associated with finding a point that is feasible with respect to the jth constraint.
The value of \(\left| H_1 \right| \) when \(X_1\) is feasible is
$$\begin{aligned} \left| H_1 \right| = \left| {\mathbb {B}}_{\mathrm{o}} \right| \left( \left| {{\mathbb {B}}}_{\mathrm{c}} \right| - \left| {{\mathbb {B}}}_{\mathrm{c}}^{-} \right| \right) + \left| {{\mathbb {B}}}_{\mathrm{c}}^{-} \right| \, \prod _{i=1}^{p} \left( y^{\mathrm{upp}}_{\mathrm{o},i} - \xi _{\mathrm{o},i}(X_1) \right) , \end{aligned}$$
(35)
where \( \left| {{\mathbb {B}}}_{\mathrm{c}}^{-}\right| = \prod _{j=1}^q \left| y^{\mathrm{low}}_{\mathrm{c},j} \right| \) is the volume of the feasible subset \({{\mathbb {B}}}_{\mathrm{c}}^{-}= {{\mathbb {B}}}_{\mathrm{c}}\cap ] -\infty ; 0 ]^q\) of \({{\mathbb {B}}}_{\mathrm{c}}\). The first term on the right-hand side of (35) is the improvement associated with the domination of the entire unfeasible subset of \({\mathbb {B}}= {\mathbb {B}}_{\mathrm{o}}\times {{\mathbb {B}}}_{\mathrm{c}}\); the second term measures the improvement in the space of objective values.
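To make the role of the bounds concrete, here is a short Python sketch that evaluates \(\left| H_1 \right| \) directly from (34) and (35). It is an illustration only: the function name and array conventions are ours, and the clipping of the objective term is an assumption covering the case where an objective value falls outside \({\mathbb {B}}_{\mathrm{o}}\).

```python
import numpy as np

def single_point_improvement(xi_o, xi_c, y_low_o, y_upp_o, y_low_c, y_upp_c):
    """Volume |H_1| dominated by a single evaluation, following (34)-(35).

    xi_o: objective values, shape (p,); xi_c: constraint values, shape (q,).
    The bounds define B_o and B_c as products of intervals, with the
    standing assumption y_low_c < 0 < y_upp_c componentwise.
    """
    xi_o, xi_c = np.asarray(xi_o, dtype=float), np.asarray(xi_c, dtype=float)
    y_low_o, y_upp_o = np.asarray(y_low_o, dtype=float), np.asarray(y_upp_o, dtype=float)
    y_low_c, y_upp_c = np.asarray(y_low_c, dtype=float), np.asarray(y_upp_c, dtype=float)

    vol_Bo = np.prod(y_upp_o - y_low_o)
    gamma = xi_c <= 0  # gamma_j = 1 iff the j-th constraint is satisfied

    if not gamma.all():  # X_1 infeasible: Eq. (34)
        sides = np.where(gamma, y_upp_c - y_low_c, y_upp_c - xi_c)
        return vol_Bo * np.prod(sides)

    # X_1 feasible: Eq. (35)
    vol_Bc = np.prod(y_upp_c - y_low_c)
    vol_Bc_minus = np.prod(-y_low_c)  # volume of the feasible subset of B_c
    dominated_infeasible = vol_Bo * (vol_Bc - vol_Bc_minus)
    # clip keeps the volume non-negative if an objective falls outside B_o
    dominated_feasible = vol_Bc_minus * np.prod(np.clip(y_upp_o - xi_o, 0.0, None))
    return dominated_infeasible + dominated_feasible
```

As a quick check with \(p = q = 1\), `single_point_improvement([0.5], [0.2], [0.0], [1.0], [-1.0], [1.0])` returns \(1 - 0.2 = 0.8\), in accordance with (34).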
Appendix 2: An adaptive procedure to set \({\mathbb {B}}_{\mathrm{o}}\) and \({{\mathbb {B}}}_{\mathrm{c}}\)
This section describes the adaptive numerical procedure used, in our numerical experiments, to define the hyper-rectangles \({\mathbb {B}}_{\mathrm{o}}\) and \({{\mathbb {B}}}_{\mathrm{c}}\). As explained in Sect. 3.3, these hyper-rectangles are defined using estimates of the ranges of the objective and constraint functions, respectively. To this end, we use the available evaluation results, together with posterior quantiles provided by our Gaussian process models on the set of candidate points \({\mathcal {X}}_n\) (defined in Sect. 4.2).
More precisely, assume that n evaluation results \(\xi (X_i)\), \(1 \le i \le n\), are available. Then, we define the corners of \({\mathbb {B}}_{\mathrm{o}}\) by
for \(1 \le i \le p\), and the corners of \({{\mathbb {B}}}_{\mathrm{c}}\) by
for \(1 \le j \le q\), where \(\lambda _\mathrm{o}\) and \(\lambda _\mathrm{c}\) are positive numbers.
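As a rough illustration of such a procedure, the following Python sketch estimates the range of each output from the past evaluation results together with Gaussian process quantiles at the candidate points, and inflates it using the positive factors \(\lambda _\mathrm{o}\) and \(\lambda _\mathrm{c}\). The quantile level and the exact form of the inflation are assumptions on our part, not the formulas of the paper.

```python
import numpy as np
from scipy.stats import norm

def adaptive_box(evals, gp_mean, gp_std, lam, level=0.975):
    """Bounds for one block of outputs (objectives or constraints).

    evals:   (n, m) array of past evaluation results for the m outputs.
    gp_mean, gp_std: (N, m) posterior means / standard deviations of the
                     Gaussian process models at the candidate points.
    lam:     positive inflation factor (lambda_o or lambda_c in the text).
    Returns (y_low, y_upp), each of shape (m,).
    """
    z = norm.ppf(level)  # quantile multiplier -- the level is our assumption
    lo = np.minimum(evals.min(axis=0), (gp_mean - z * gp_std).min(axis=0))
    hi = np.maximum(evals.max(axis=0), (gp_mean + z * gp_std).max(axis=0))
    margin = lam * (hi - lo)  # inflate the estimated range by a factor lam
    return lo - margin, hi + margin
```

For \({{\mathbb {B}}}_{\mathrm{c}}\), the resulting bounds must in addition satisfy \(y^{\mathrm{low}}_{\mathrm{c},j}< 0 < y^{\mathrm{upp}}_{\mathrm{c},j}\) (see Appendix 1), which can be enforced by clipping the output of such a procedure.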
Appendix 3: Mono-objective benchmark result tables
In Sect. 5.3, only the best results for the “Local” and “Regis” groups of algorithms were shown; here we present the full results on the single-objective benchmark problems (see Table 1). Tables 11 and 12 show the performances of the local optimization algorithms (COBYLA, Active-Set, Interior-Point and SQP) for finding feasible solutions and for reaching the targets specified in Table 1. Similarly, Tables 13 and 14 show the corresponding performances of the COBRA-Local, COBRA-Global and Extended-ConstrLMSRBF algorithms of Regis [61].
Appendix 4: Modified g3mod, g10 and PVD4 test problems
We detail here the modified formulations of the g3mod, g10 and PVD4 problems that were used in Sect. 5.3 to overcome the modeling difficulties encountered with BMOO. Our modifications are shown in boldface; their rationale is to smooth out local jumps in the objective and constraint functions.
- modified-g3mod problem:
$$\begin{aligned} \left\{ \begin{array}{lcl} f(x) &{}=&{} -\text {plog}((\sqrt{d})^d{\prod }_{i=1}^d x_i)^{\mathbf{0.1}}\\ c(x) &{}=&{} ({\sum }_{i=1}^d x_i^2) - 1 \end{array} \right. \end{aligned}$$
- modified-g10 problem:
$$\begin{aligned} \left\{ \begin{array}{lcl} f(x) &{}=&{} x_1 + x_2 + x_3\\ c_1(x) &{}=&{} 0.0025(x_4+x_6) - 1\\ c_2(x) &{}=&{} 0.0025(x_5+x_7-x_4) - 1\\ c_3(x) &{}=&{} 0.01(x_8-x_5) - 1\\ c_4(x) &{}=&{} \text {plog}(100x_1 - x_1x_6 + 833.33252x_4 - 83333.333)^{\mathbf{7}}\\ c_5(x) &{}=&{} \text {plog}(x_2x_4 - x_2x_7 -1250x_4 + 1250x_5)^{\mathbf{7}}\\ c_6(x) &{}=&{} \text {plog}(x_3x_5 - x_3x_8 -2500x_5 + 1250000)^{\mathbf{7}} \end{array} \right. \end{aligned}$$
- modified-PVD4 problem:
$$\begin{aligned} \left\{ \begin{array}{lcl} f(x) &{}=&{} 0.6224x_1x_3x_4 + 1.7781x_2x_3^2 + 3.1661x_1^2x_4 + 19.84x_1^2x_3\\ c_1(x) &{}=&{} -x_1 + 0.0193x_3\\ c_2(x) &{}=&{} -x_2 + 0.00954x_3\\ c_3(x) &{}=&{} \text {plog}(-\pi x_3^2x_4 - 4/3\pi x_3^3 + 1296000)^{\mathbf{7}} \end{array} \right. \end{aligned}$$
Note that the problems defined above make use of the plog function of Regis [61], which is recalled below.
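For reference, the plog transformation, as defined by Regis [61], is
$$\begin{aligned} \text {plog}(x) = {\left\{ \begin{array}{ll} \ln (1 + x) &{} \text {if } x \ge 0,\\ -\ln (1 - x) &{} \text {if } x < 0. \end{array}\right. } \end{aligned}$$
It is strictly increasing and sign-preserving, so composing a constraint with plog leaves its feasible set unchanged while compressing large values. A one-line Python version, for illustration only:

```python
import math

def plog(x: float) -> float:
    # sign-preserving logarithmic rescaling (Regis [61])
    return math.log1p(x) if x >= 0 else -math.log1p(-x)
```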