Hypervolume (HV) and inverted generational distance (IGD) have been frequently used as performance indicators to evaluate the quality of solution sets obtained by evolutionary multiobjective optimization (EMO) algorithms. They have also been used in indicator-based EMO algorithms. In some studies on many-objective problems, only the IGD indicator was used due to the large computational load of HV calculation. However, the IGD indicator is not Pareto compliant. This means that a better solution set in terms of the Pareto dominance relation can be evaluated as being worse. Recently, the IGD plus (IGD+) indicator has been proposed as a weakly Pareto compliant version of IGD. In this paper, we compare these three indicators from the viewpoint of optimal distributions of solutions. More specifically, we visually demonstrate similarities and differences among the three indicators by numerically calculating near-optimal distributions of solutions that optimize each indicator for some test problems. Our numerical analysis shows that IGD+ is more similar to HV than IGD is, even though the formulations of IGD and IGD+ are almost the same.
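As a concrete illustration of the two distance-based indicators compared in this paper, the sketch below computes plain IGD and IGD+ for a reference point set Z and a solution set A under the usual minimization assumption. This is not code from the paper; the function names and the array layout (rows are points, columns are objectives) are our own assumptions.

```python
import numpy as np

def igd(Z, A):
    """Plain IGD: average Euclidean distance from each reference point
    in Z to its nearest solution in A (all objectives minimized)."""
    d = np.sqrt(((Z[:, None, :] - A[None, :, :]) ** 2).sum(axis=2))
    return d.min(axis=1).mean()

def igd_plus(Z, A):
    """IGD+: only the objectives where a solution is worse than the
    reference point contribute, i.e. d+(z, a) = ||max(a - z, 0)||,
    which is what makes the indicator weakly Pareto compliant."""
    diff = np.maximum(A[None, :, :] - Z[:, None, :], 0.0)
    d = np.sqrt((diff ** 2).sum(axis=2))
    return d.min(axis=1).mean()

# Small two-objective example: two reference points, two solutions.
Z = np.array([[0.0, 1.0], [1.0, 0.0]])
A = np.array([[0.5, 0.5], [1.0, 1.0]])
print(igd(Z, A), igd_plus(Z, A))  # 0.707... vs 0.5
```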
We often notice that different experimental results of the same algorithm are reported in the literature. Usually this is because computational experiments were performed under different settings. However, even when almost the same settings are used, similar results are not always obtained. Especially when we use our own implementation, it is very difficult to obtain experimental results similar to the reported ones in the literature. Of course, due to the stochastic nature of an evolutionary algorithm, its experimental results can be different in each run. However, its average results over many runs should be similar. Unfortunately, this is not always the case. We often encounter the situation where our own experimental results are clearly different from the reported ones in the literature. In this paper, we report our investigation results on the following question: why are totally different results of MOEA/D often reported in the literature? More specifically, we show that totally different results can be obtained when random real numbers are generated in slightly different manners, such as the normalization of integers in [0, 2^53 − 1] by dividing them by 2^53 or by (2^53 − 1).
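The random number generation difference discussed above can be made concrete with a small sketch. This is not the MOEA/D code examined in the paper; it only shows the two normalizations of a 53-bit random integer, one of which never returns 1.0 while the other can.

```python
import random

BITS = 53  # a double-precision float carries 53 bits of significand

def rand_half_open(rng: random.Random) -> float:
    # integer in [0, 2^53 - 1] divided by 2^53 -> value in [0.0, 1.0)
    return rng.getrandbits(BITS) / float(1 << BITS)

def rand_closed(rng: random.Random) -> float:
    # the same integer divided by (2^53 - 1) -> value in [0.0, 1.0],
    # so 1.0 itself occurs with small but nonzero probability
    return rng.getrandbits(BITS) / float((1 << BITS) - 1)
```

Whether 1.0 can ever be drawn changes boundary cases such as comparisons against a probability threshold or the mapping of a random value to an index; the paper reports that such a seemingly negligible implementation detail can lead to totally different MOEA/D results.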
A variety of fuzzy genetics-based machine learning algorithms have been proposed in the frameworks of the Michigan and Pittsburgh approaches. Since each individual is a single rule, Michigan-style algorithms need much less computation time than Pittsburgh-style algorithms, where each individual is a rule set. For the same reason, Michigan-style algorithms cannot directly optimize rule sets; rule set optimization is performed only indirectly by optimizing each rule. In this paper, we propose the use of the (1+1)-ES generation update in Michigan-style algorithms in order to perform rule set optimization directly without losing their high computational efficiency. We also propose a multi-pattern-based rule generation method that generates a fuzzy rule from multiple patterns in a heuristic manner. We demonstrate the high efficiency and high generalization ability of our newly proposed Michigan-style algorithm through computational experiments on 19 data sets with 4–310 attributes and 2–15 classes.
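At the rule-set level, the (1+1)-ES generation update proposed here can be read as an accept-only-if-not-worse step applied to the whole rule set. The sketch below is our own simplified reading of that idea, not the paper's implementation; michigan_step and fitness are hypothetical placeholders for the Michigan-style rule replacement operator and the rule-set evaluation (e.g., training accuracy).

```python
import copy

def generation_update(rule_set, michigan_step, fitness):
    """(1+1)-ES style update: the whole rule set is the single parent.

    michigan_step : hypothetical operator returning a modified copy of
                    the rule set (Michigan-style rule replacement).
    fitness       : hypothetical rule-set evaluation, larger is better.
    """
    offspring = michigan_step(copy.deepcopy(rule_set))
    # Keep the offspring rule set only if it is at least as good as the
    # parent, so the rule set itself (not only each rule) is optimized.
    return offspring if fitness(offspring) >= fitness(rule_set) else rule_set
```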