Available online at www.sciencedirect.com
Procedia Computer Science 00 (2011) 000–000
www.elsevier.com/locate/procedia
International Conference on Computational Science, ICCS 2011
Computational optimization, modelling and simulation:
Recent advances and overview
Xin-She Yang a,*, Slawomir Koziel b, and Leifur Leifsson b
a Mathematics and Scientific Computing, National Physical Laboratory, Teddington, Middlesex TW11 0LW, UK
b Engineering Optimization and Modeling Center, School of Science and Engineering, Reykjavik University, 101 Reykjavik, Iceland
* Corresponding author, email: xy227@cam.ac.uk
Abstract
Computational optimization is becoming increasingly important in engineering design and industrial applications. Products and
services are often concerned with maximizing profit and reducing cost, but also aim to be more energy-efficient,
environmentally friendly and safe; at the same time, they are limited by resources, time and money. This second workshop
on Computational Optimization, Modelling and Simulation (COMS 2011) at ICCS 2011 further summarizes the latest
developments in optimization and modelling and their applications in science, engineering and industry.
Keywords: algorithm; black-box modelling; computational optimization; derivative-free method; optimization algorithm; modelling; nonlinear
optimization; surrogate-based optimization; simulation
1. Introduction
Computational optimization and modeling is an important paradigm in itself, with a wide range of applications. In
almost all applications in engineering and industry, we are trying to optimize something – whether to
minimize cost and energy consumption, or to maximize profit, output, performance and efficiency. In reality,
resources, time and money are always limited; consequently, optimization is all the more important [1,2,3]. The optimal
use of available resources of any sort requires a paradigm shift in scientific thinking, because most real-world
applications have complicated factors and parameters affecting system behavior; as a result, it is not
always possible to find the optimal solution. We often have to settle for suboptimal, or merely feasible, solutions
that are good enough and practically achievable on a reasonable time scale.
Many real-world problems are highly nonlinear under complex constraints, and they are often NP-hard; that is,
the time needed to find optimal solutions grows exponentially with problem size, and in general no efficient
polynomial-time algorithm exists for such problems. On the other hand, contemporary engineering design is
heavily based on computer simulations, which introduces additional difficulties for optimization. The growing demand for
accuracy and the ever-increasing complexity of structures and systems result in the simulation process becoming more and
more time consuming. In many engineering fields, the evaluation of a single design can take from several
hours to several days or even weeks. Moreover, simulation-based objective functions are inherently noisy, which makes
the optimization process even more difficult. Still, simulation-driven design is becoming a must in a growing number
of areas, which creates a need for robust and efficient optimization methodologies that can yield satisfactory designs
even in the presence of analytically intractable objectives and limited computational resources.
This second workshop on Computational Optimization, Modelling and Simulation (COMS 2011) at ICCS 2011
provides an opportunity to review and discuss the latest developments in optimization and computer
modelling, with a focus on applications in science, engineering and industry. In the rest of this short summary paper,
we briefly review the latest trends and optimization techniques, and then introduce the topics and papers in
this workshop.
2. Simulation and Optimization
Three important components of an optimization process are model representation, simulation and optimization.
Any optimization problem has to be represented correctly in a mathematical and/or computer model so that we can
be sure we are solving the right problem. For any combination of the parameters to be optimized, we have to
be able to evaluate the solution correctly, and this requires a correct and efficient simulator. The simulator can
simply be a function call, a more complicated numerical toolbox, or a black-box external solver such as a finite
element or finite volume software package, as illustrated in the sketch below. As the number of parameter
combinations can be extremely large, an efficient optimization algorithm should be used to generate better
solutions from a given set of current parameter combinations.
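As a concrete illustration, the following minimal Python sketch shows how an external black-box solver might be wrapped as an objective function that an optimizer can call. The solver name fem_solver and its command-line interface are hypothetical placeholders, not a specific package.

import subprocess

def evaluate_design(params):
    # Run a hypothetical external solver; "fem_solver" and its flags are
    # placeholders standing in for whatever simulation package is used.
    args = ["fem_solver", "--input"] + [str(p) for p in params]
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    # Assume the solver prints a single scalar figure of merit on stdout.
    return float(result.stdout.strip())

From the optimizer's point of view, evaluate_design is just a function call, even though each call may trigger hours of simulation.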
Model representation is usually an easy step, while an efficient optimizer is crucially important to ensure that the
optimal solution can be found. In reality, global optimality is not always achievable; in this case, we have to ensure
that a set of good-quality solutions can be found and that near-optimal and locally optimal solutions can be
obtained quickly. Fortunately, there is a wide range of optimization algorithms in the literature, and most are
efficient for certain types of problems. Conventional algorithms, such as gradient-based algorithms and some
derivative-free algorithms, are well established, while modern algorithms tend to be heuristic and metaheuristic,
ranging from evolution strategies to genetic algorithms, and from particle swarm optimization to cuckoo search.
Gradient-based algorithms such as hill-climbing require that the problem functions (constraints and objective) be
relatively smooth. If there are discontinuities, we have to use gradient-free or derivative-free methods such as the
Nelder-Mead downhill simplex method and pattern search methods. However, all these methods are local search
methods. As most optimization problems are nonlinear and multimodal, global search methods should be used.
Global search algorithms are mostly heuristic and metaheuristic with stochastic components. The choice of an
algorithm largely depends on the type of problem at hand, the time and resource constraints, the required
solution quality, the availability and ease of implementation, and the expertise of the decision makers.
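To make the local-versus-global distinction concrete, here is a short Python sketch using scipy.optimize and the Rastrigin function as a stand-in multimodal objective. A derivative-free local method such as Nelder-Mead converges to whichever local minimum is near its starting point, so even a crude global strategy such as random restarts changes the outcome markedly.

import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    # A standard multimodal test function with many local minima.
    return np.sum(x**2) + 10.0 * np.sum(1.0 - np.cos(2.0 * np.pi * x))

rng = np.random.default_rng(0)
best = None
for _ in range(20):
    x0 = rng.uniform(-5.0, 5.0, size=2)          # random starting point
    res = minimize(rastrigin, x0, method="Nelder-Mead")
    if best is None or res.fun < best.fun:       # keep the best restart
        best = res
print(best.x, best.fun)

A single run from a poor starting point typically returns a local minimum; only the restarts, a rudimentary form of global search, give a reasonable chance of reaching the global optimum at the origin.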
Simulation tools are also important, as most black-box problems are limited by the efficiency of the simulator,
which in turn controls the overall quality and efficiency of the optimization. Simulation tends to be the most
computationally expensive part; even with the most efficient optimization algorithm, a large number of
simulation runs may be practically impossible. In such cases, techniques exploiting surrogate models become the most
suitable approaches. This has led to the important development of surrogate-based techniques such as space
mapping [4,5,7].
3. Metaheuristics and Surrogate-Based Optimization
For some optimization problems, especially linear problems and/or problems with a few design variables,
many classical algorithms, such as the Nelder-Mead downhill simplex method and trust-region methods, work very well
[3,6]. For slightly more complicated problems with nonlinear constraints, algorithms such as interior-point and
active-set methods are sufficient. But for large-scale, nonlinear global optimization problems, there is no
generally agreed-upon algorithm; in fact, no efficient algorithm exists for this type of global optimization. However, a
very promising trend is the use of metaheuristic algorithms.
Metaheuristic algorithms are mostly nature-inspired or biologically inspired, mimicking certain
successful characteristics of natural systems. There is a wide range of metaheuristic algorithms, including ant
colony optimization, bee algorithms, cuckoo search, artificial immune algorithms, genetic algorithms, differential
evolution, harmony search, particle swarm optimization, the firefly algorithm, simulated annealing, tabu search, and
monkey search. Most of these are population-based algorithms using multiple agents; the only
exception is simulated annealing, which uses a trajectory-based approach. However, parallel simulated annealing
is also population-based.
Two important characteristics of metaheuristics are intensification and diversification, or exploitation and
exploration. Intensification focuses on local search, which exploits information from the local landscape, while
diversification often uses randomization techniques to explore the search space more aggressively and
extensively. Different algorithms construct these two components in slightly different ways, but they
all tend to strike some tradeoff between local intensive search and global exploratory search [3]. Recent advances
have focused on the development of new algorithms and the improvement of existing algorithms by
adding new features. The testing and validation of various algorithms is another important part of new developments.
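As an illustration of these two components, the following is a minimal Python sketch of simulated annealing, the trajectory-based method mentioned above. The move size, cooling schedule and iteration budget are arbitrary choices made for illustration only.

import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000):
    x, fx = list(x0), objective(x0)
    best_x, best_f = list(x), fx
    t = t0
    for _ in range(iters):
        # Diversification: propose a random perturbation of the current point.
        cand = [xi + random.gauss(0.0, step) for xi in x]
        fc = objective(cand)
        # Always accept improvements; accept uphill moves with probability
        # exp(-delta/t), which shrinks as t cools, so the search gradually
        # shifts from exploration to intensification around good regions.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = list(x), fx
        t *= cooling
    return best_x, best_f

The acceptance of uphill moves at high temperature is what distinguishes this from pure hill-climbing and lets the trajectory escape local minima.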
Computer simulations are ubiquitous in numerous fields of engineering and science today. As the complexity of
the physical systems considered by designers and scientists is constantly growing, and various interactions between
devices and their surroundings have to be accounted for, analytical description is no longer possible and
numerical simulation becomes the only option. In particular, computer simulations (electromagnetic, finite element,
computational fluid dynamics, etc.) of the system or structure under design can be employed to implement a black-
box-like objective function. In many cases, optimizing such objectives in a straightforward way, i.e., by
applying optimization routines directly to these functions, is impractical. One reason is that simulation-based
objective functions are often analytically intractable (discontinuous, non-differentiable, and inherently noisy). Also,
sensitivity information is usually unavailable, or too expensive to compute. Another, and in many cases even more
important, reason is the high computational cost of measurements and simulations. Simulation times of several hours,
days or even weeks per objective function evaluation are not uncommon in contemporary engineering, despite the
increase in available computing power. These otherwise unmanageable functions can be handled
using surrogate models: optimization of the original objective is replaced by iterative re-optimization and
updating of an analytically tractable and computationally cheap surrogate.
Surrogate-based optimization (SBO) [10] has been suggested as an effective approach to design with time-
consuming computer models. The basic concept of SBO is that direct optimization of the computationally
expensive model is replaced by an iterative process involving the creation, optimization and updating of a fast
and analytically tractable surrogate model. The surrogate should be a reasonably accurate representation of the high-
fidelity model, at least locally. The design obtained by optimizing the surrogate model is verified by evaluating
the high-fidelity model, and the high-fidelity data obtained in this verification step is then used to update the
surrogate. SBO proceeds in this predictor-corrector fashion iteratively until some termination criterion is met.
Because most of the operations are performed on the surrogate model, SBO reduces the computational cost of the
optimization process compared to optimizing the high-fidelity model directly, without resorting to any
surrogate. A variety of techniques for creating computationally cheap surrogate models is available, including
approximation techniques such as polynomial regression or kriging, as well as methods exploiting physically based
low-fidelity models. Several SBO algorithms have been developed, such as space mapping [11] and the surrogate
management framework [12], to name just a few.
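The following Python sketch captures the predictor-corrector loop just described for a one-dimensional problem, using a quadratic least-squares fit as a deliberately simple surrogate and a grid search as the surrogate optimizer. These modelling choices are illustrative assumptions, not any specific published SBO algorithm.

import numpy as np

def sbo(expensive_f, lo, hi, n_init=5, n_iter=10):
    # Initial high-fidelity samples across the design interval.
    xs = list(np.linspace(lo, hi, n_init))
    ys = [expensive_f(x) for x in xs]
    for _ in range(n_iter):
        # Prediction: build the cheap surrogate from all data so far
        # and optimize it (it is cheap, so a dense grid search suffices).
        surrogate = np.poly1d(np.polyfit(xs, ys, deg=2))
        grid = np.linspace(lo, hi, 1001)
        x_new = float(grid[np.argmin(surrogate(grid))])
        # Correction: verify the surrogate optimum with the expensive
        # model and add the result to the data for the next update.
        xs.append(x_new)
        ys.append(expensive_f(x_new))
    i = int(np.argmin(ys))
    return xs[i], ys[i]

In a practical SBO framework the quadratic fit would be replaced by kriging, space mapping or a physics-based low-fidelity model, and the loop would terminate on a design-improvement criterion rather than a fixed iteration count.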
4. Recent Advances
Applications of optimization in engineering and industry are diverse, and this is reflected in the papers submitted
to this workshop. The response to our call for papers was overwhelming; however, due to limited space
and time slots for presentations, many high-quality papers could not be included in the workshop.
The accepted papers of this workshop (COMS 2011 at ICCS 2011) span a wide range of applications and
reflect state-of-the-art developments in computational optimization, modeling and simulation. Betts introduces a
robust approximation for optimizing target inventory levels in a capacity-constrained production model. Fowler et
al. provide a detailed study of an asynchronous parallel hybrid approach for MINLPs, showing how it improves
the capabilities of genetic algorithms and providing a speed-up analysis on a standard mixed-integer test problem
with equality and inequality constraints. Koziel et al. develop a simulation-driven approach for designing
antennas using coarse-discretization electromagnetic models. AbuBekr et al. discuss sequential
optimization of paths in directed graphs under different cost functions. Nakao et al. discuss real-coded
estimation of distribution algorithms using probabilistic models with multiple learning rates. Thaher and Takaoka
study an efficient algorithm for computing k-overlapping maximum convex-sum problems. Liu and Xu present a
study of a path associativity congestion control problem and a throughput model for multi-path TCP. Zadeh presents
an application of catalog segmentation using mixed-integer programming and a genetic algorithm. Another
interesting application, by Jaishankar and Pralhad, is 3D off-line path planning for an aerial vehicle using a distance
transform technique.
In addition to optimization studies, modeling and simulation are integrated with optimization in various
applications. Leifsson et al. present an inverse design of transonic airfoils using variable-resolution modeling and
pressure distribution alignment. He et al. describe a domain decomposition method for modeling a PEM fuel cell. In
addition, Mohamad et al. present results on a nearest-neighbor approach for histogram-based feature extraction,
while Lin et al. apply an approximate fuzzy GERT to evaluate the reliability of a two-unit standby redundant system.
Furthermore, Jen and Wang improve semiconductor process control by integrating fault detection
and run-to-run control, while Jang et al. present an application of a mobile personalized translation listening system
for different age groups.
Despite these active advances, some questions remain unanswered, and this is especially true in the area of
metaheuristics. For example, the convergence analysis of most metaheuristic algorithms lags behind practice, and only
limited results exist for a few metaheuristics. In addition, the No-Free-Lunch (NFL) theorems were proved for
finite domains and single-objective optimization [8], and the question remains open for multiobjective optimization in
continuous domains. A recent significant development is that for continuous optimization, or optimization over
infinite domains, NFL does not hold [9]. This means there is some free lunch in continuous optimization, which
merits more extensive study. In the area of surrogate-based optimization, several issues are the subject of ongoing
research, including the development of more efficient surrogate modeling methodologies, improving the
robustness of surrogate-based optimization algorithms, and the proper selection of low-fidelity models for
variable-fidelity and variable-resolution optimization techniques.
From the above studies, we can see that the applications of computational optimization and modeling are diverse
and wide-ranging. There is no doubt that more applications will appear in the near future. This workshop
provides a timely platform for further discussion and development in optimization and modeling.
References
1. K. Deb, Optimization for Engineering Design: Algorithms and Examples, Prentice-Hall, New Delhi (1995).
2. E. G. Talbi, Metaheuristics: From Design to Implementation, John Wiley & Sons (2009).
3. X. S. Yang, Engineering Optimization: An Introduction with Metaheuristic Applications, John Wiley & Sons (2010).
4. S. Koziel, J. W. Bandler and K. Madsen, Quality assessment of coarse models and surrogates for space mapping optimization, Optimization and Engineering, 9(4), 375-391 (2008).
5. L. Leifsson and S. Koziel, Multi-fidelity design optimization of transonic airfoils using physics-based surrogate modelling and shape-preserving response prediction, J. Comp. Science, 1(2), 98-106 (2010).
6. X. S. Yang, Introduction to Computational Mathematics, World Scientific Publishing, Singapore (2008).
7. S. Koziel and X. S. Yang, Computational Optimization and Applications in Engineering and Industry, Springer (2011).
8. D. H. Wolpert and W. G. Macready, No free lunch theorems for optimization, IEEE Trans. Evol. Computation, 1, 67-82 (1997).
9. A. Auger and O. Teytaud, Continuous lunches are free plus the design of optimal optimization algorithms, Algorithmica, 57(1), 121-146 (2010).
10. A. I. J. Forrester and A. J. Keane, Recent advances in surrogate-based optimization, Prog. in Aerospace Sciences, 45(1-3), 50-79 (2009).
11. J. W. Bandler, Q. S. Cheng, S. A. Dakroury, A. S. Mohamed, M. H. Bakr, K. Madsen, and J. Sondergaard, Space mapping: the state of the art, IEEE Trans. Microwave Theory Tech., 52(1), 337-361 (2004).
12. A. J. Booker, J. E. Dennis Jr., P. D. Frank, D. B. Serafini, V. Torczon, and M. W. Trosset, A rigorous framework for optimization of expensive functions by surrogates, Structural Optimization, 17(1), 1-13 (1999).
