DOI: 10.1145/2725494
FOGA '15: Proceedings of the 2015 ACM Conference on Foundations of Genetic Algorithms XIII
ACM 2015 Proceeding
Publisher:
  • Association for Computing Machinery, New York, NY, United States
Conference:
FOGA '15: Foundations of Genetic Algorithms XIII, Aberystwyth, United Kingdom, January 17-22, 2015
ISBN:
978-1-4503-3434-1
Published:
17 January 2015
Sponsors:
  • SIGEVO: ACM Special Interest Group on Genetic and Evolutionary Computation

Abstract

FOGA, the ACM SIGEVO Workshop on Foundations of Genetic Algorithms, started in 1990 and has, in the past 25 years, established itself as the premier event in the theory of all kinds of randomized search heuristics. Its latest installment, the 13th of its kind, is no exception.

FOGA 2015 is special not only because of this quarter-century anniversary but also because it is the first FOGA to take place in the United Kingdom. Four organizers from all parts of Great Britain joined forces to bring the event to Aberystwyth in Wales. We had 27 participants from seven countries across four continents. They brought with them 16 presentations of accepted papers, carefully selected from 26 submissions. An hour was allocated for each presentation to allow ample time to discuss ideas and inspect details. Following the FOGA tradition, all papers underwent another round of reviewing and rewriting after being presented and passionately discussed at the workshop. This ensures that what you find in these postproceedings is the best and most polished current research in the field.

The presented papers cover many topics of current research in the theory of evolutionary algorithms and other randomized search heuristics. This includes discussions of their limits and potential, either from the perspective of black-box complexity (Golnaz Badkobeh, Per Kristian Lehre, Dirk Sudholt: Black-box complexity of parallel search with distributed populations; Thomas Jansen: On the black-box complexity of example functions: the real jump function) or from the perspective of adversarial optimization (Alan Lockett: Insights from adversarial fitness functions).

A very important aspect of current research is the investigation of the performance of specific evolutionary algorithms on specific problems or problem classes. Such work includes further investigations of the very well-known and simple (1+1) evolutionary algorithm (Timo Kötzing, Andrei Lissovoi, Carsten Witt: (1+1) EA on generalized dynamic OneMax; Johannes Lengler, Nick Spooner: Fixed budget performance of the (1+1) EA on linear functions), studies of the performance of evolutionary algorithms when confronted with noisy problems (Duc-Cuong Dang, Per Kristian Lehre: Efficient optimization of noisy fitness functions with population-based evolutionary algorithms; Adam Prügel-Bennett, Jonathan Rowe, Jonathan Shapiro: Run-time analysis of population-based evolutionary algorithms in noisy environments; Sandra Astete-Morales, Marie-Liesse Cauwet, Olivier Teytaud: Evolution strategies with additive noise: a convergence rate lower bound), studies of parallel evolutionary algorithms (Eric Scott, Kenneth De Jong: Understanding simple asynchronous evolutionary algorithms; Marie-Liesse Cauwet, Shih-Yuan Chiu, Kuo-Min Lin, David Saint-Pierre, Fabien Teytaud, Olivier Teytaud, Shi-Jim Yen: Parallel evolutionary algorithms performing pairwise comparisons), and studies concerned with improving the performance of evolutionary algorithms in applications (Mathys C. Du Plessis, Andries Engelbrecht, Andre Calitz: Self-adapting the Brownian radius in a differential evolution algorithm for dynamic environments; Oswin Krause, Christian Igel: A more efficient rank-one covariance matrix update for evolution strategies; Renato Tinos, Darrell Whitley, Francisco Chicano: Partition crossover for pseudo-Boolean optimization).

FOGA also remains the best place to present fundamental observations about the way evolutionary algorithms work (Luigi Malagò, Giovanni Pistone: Information geometry of the Gaussian distribution in view of stochastic optimization; Keki Burjorjee: Hypomixability elimination in evolutionary systems) as well as studies of other complex systems such as co-adapting agents (Richard Mealing, Jonathan Shapiro: Convergence of strategies in simple co-adapting games). We are confident that every reader with an interest in the theory of randomized search heuristics will find something interesting, challenging, and inspiring here.

The National Library of Wales in Aberystwyth provided a splendid setting not only for the talks presenting the accepted submissions but also for our invited talk, given by Professor Leslie Ann Goldberg from the University of Oxford, who presented an inspiring overview of evolutionary dynamics on graphs in the form of the Moran process. On Sunday, when the National Library is closed, the Department of Computer Science of Aberystwyth University kindly provided a seminar room, and we are thankful for that support.

Table of Contents
SESSION: Invited Talk
Evolutionary Dynamics on Graphs: Invited Talk
SESSION: Regular Papers
Black-box Complexity of Parallel Search with Distributed Populations

Many metaheuristics such as island models and cellular evolutionary algorithms use a network of distributed populations that communicate search points along a spatial communication topology. The idea is to slow down the spread of information, reducing ...
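
For readers unfamiliar with the setting, the following minimal Python sketch illustrates the general island-model scheme the abstract refers to: a few subpopulations evolve independently on a ring topology and only occasionally exchange their current search point with a neighbour. The fitness function (OneMax), the (1+1)-style islands, and all parameter values are illustrative choices of ours, not the model analysed in the paper.

import random

def onemax(x):
    # Toy fitness: number of one-bits.
    return sum(x)

def mutate(x):
    # Standard bit-flip mutation: flip each bit independently with rate 1/n.
    n = len(x)
    return [b ^ (random.random() < 1.0 / n) for b in x]

def island_model(n=50, islands=4, migration_interval=10, generations=300):
    # One individual per island; islands are connected in a ring.
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(islands)]
    for gen in range(1, generations + 1):
        # Independent (1+1)-style step on every island.
        for i in range(islands):
            child = mutate(pop[i])
            if onemax(child) >= onemax(pop[i]):
                pop[i] = child
        # Sparse communication along the ring: every migration_interval
        # generations each island sends a copy of its individual to its
        # neighbour, which keeps it only if it is no worse.
        if gen % migration_interval == 0:
            migrants = [ind[:] for ind in pop]
            for i in range(islands):
                j = (i + 1) % islands
                if onemax(migrants[i]) >= onemax(pop[j]):
                    pop[j] = migrants[i]
    return max(pop, key=onemax)

best = island_model()
print(onemax(best), "ones out of", len(best))

Larger migration intervals slow the spread of information between islands, which is exactly the design lever the abstract describes.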

On the Black-Box Complexity of Example Functions: The Real Jump Function

Black-box complexity measures the difficulty of classes of functions with respect to optimisation by black-box algorithms. Comparing the black-box complexity with the worst case performance of a best known randomised search heuristic can help to assess ...

Insights From Adversarial Fitness Functions

The performance of optimization is usually studied in specific settings where the fitness functions are highly constrained with static, stochastic or dynamic properties. This work examines what happens when the fitness function is a player engaged with ...

(1+1) EA on Generalized Dynamic OneMax

Evolutionary algorithms (EAs) perform well in settings involving uncertainty, including settings with stochastic or dynamic fitness functions. In this paper, we analyze the (1+1) EA on dynamically changing OneMax, as introduced by Droste (2003). We re-...
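
For orientation, the (1+1) EA referred to here is the textbook elitist algorithm with standard bit mutation. Below is a minimal Python sketch on a OneMax variant whose target string drifts over time; the drift model, the generalisation actually analysed by Kötzing, Lissovoi, and Witt, and all parameter values are purely illustrative.

import random

def matches(x, target):
    # Dynamic OneMax-style fitness: number of bits agreeing with the target
    # (classic OneMax is the special case target = all ones).
    return sum(int(a == b) for a, b in zip(x, target))

def one_plus_one_ea(n=100, steps=20000, p_change=0.001):
    x = [random.randint(0, 1) for _ in range(n)]
    target = [1] * n
    for _ in range(steps):
        # Illustrative dynamics: occasionally one target bit flips.
        if random.random() < p_change:
            target[random.randrange(n)] ^= 1
        # Standard bit-flip mutation with rate 1/n, elitist acceptance.
        y = [b ^ (random.random() < 1.0 / n) for b in x]
        if matches(y, target) >= matches(x, target):
            x = y
    return matches(x, target), n

final_fitness, n = one_plus_one_ea()
print(final_fitness, "of", n, "bits match the final target")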

Fixed Budget Performance of the (1+1) EA on Linear Functions

We present a fixed budget analysis of the (1+1) evolutionary algorithm for general linear functions, considering both the quality of the solution after a predetermined 'budget' of fitness function evaluations (a priori) and the improvement in quality ...

Efficient Optimisation of Noisy Fitness Functions with Population-based Evolutionary Algorithms

Population-based EAs can optimise pseudo-Boolean functions in expected polynomial time, even when only partial information about the problem is available [7]. In this paper, we show that the approach used to analyse optimisation with partial information ...

Run-Time Analysis of Population-Based Evolutionary Algorithms in Noisy Environments

This paper analyses a generational evolutionary algorithm using only selection and uniform crossover. With a probability arbitrarily close to one, the evolutionary algorithm is shown to solve OneMax in O(n log²(n)) function evaluations using a population ...

Evolution Strategies with Additive Noise: A Convergence Rate Lower Bound

We consider the problem of optimizing functions corrupted with additive noise. It is known that Evolutionary Algorithms can reach a Simple Regret O(1/√n) within logarithmic factors, when n is the number of function evaluations. Here, Simple Regret at ...

Understanding Simple Asynchronous Evolutionary Algorithms

In many applications of evolutionary algorithms, the time required to evaluate the fitness of individuals is long and variable. When the variance in individual evaluation times is non-negligible, traditional, synchronous master-slave EAs incur idle time ...

Parallel Evolutionary Algorithms Performing Pairwise Comparisons

We study mathematically and experimentally the convergence rate of differential evolution and particle swarm optimization for simple unimodal functions. Due to parallelization concerns, the focus is on lower bounds on the runtime, i.e. upper bounds on ...

Self-Adapting the Brownian Radius in a Differential Evolution Algorithm for Dynamic Environments

Several algorithms aimed at dynamic optimisation problems have been developed. This paper reports on the incorporation of a self-adaptive Brownian radius into competitive differential evolution (CDE). Four variations of a novel technique for achieving ...

A More Efficient Rank-one Covariance Matrix Update for Evolution Strategies

Learning covariance matrices of Gaussian distributions is at the heart of most variable-metric randomized algorithms for continuous optimization. If the search space dimensionality is high, updating the covariance or its factorization is computationally ...
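
As background for the abstract's point about cost, the standard rank-one covariance update used in evolution strategies has the form C ← (1 − c1) C + c1 p pᵀ for an evolution path p; the update itself is O(n²), but sampling from the resulting Gaussian naively requires refactorizing C, e.g. a Cholesky decomposition at O(n³). The NumPy sketch below shows only this baseline update with an illustrative learning rate, not the more efficient scheme proposed in the paper.

import numpy as np

def rank_one_update(C, p, c1=0.1):
    # Standard rank-one update of the covariance matrix:
    #   C <- (1 - c1) * C + c1 * p p^T
    # C : (n, n) symmetric positive definite covariance matrix
    # p : (n,) evolution path / search direction
    # c1: learning rate (illustrative value)
    return (1.0 - c1) * C + c1 * np.outer(p, p)

n = 5
C = np.eye(n)
p = np.random.randn(n)
C = rank_one_update(C, p)

# Sampling a new search point from N(0, C) still needs a factorization of C;
# the naive route below recomputes a Cholesky factor, costing O(n^3) per step.
A = np.linalg.cholesky(C)
x = A @ np.random.randn(n)
print(x)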

Partition Crossover for Pseudo-Boolean Optimization

A partition crossover operator is introduced for use with NK landscapes, MAX-kSAT and for all k-bounded pseudo-Boolean functions. By definition, these problems use a bit representation. Under partition crossover, the evaluation of offspring can be ...

Information Geometry of the Gaussian Distribution in View of Stochastic Optimization

We study the optimization of a continuous function by its stochastic relaxation, i.e., the optimization of the expected value of the function itself with respect to a density in a statistical model. We focus on gradient descent techniques applied to ...

Hypomixability Elimination In Evolutionary Systems

Hypomixability Elimination is an intriguing form of computation thought to underlie general-purpose, non-local, noise-tolerant adaptation in recombinative evolutionary systems. We demonstrate that hypomixability elimination in recombinative evolutionary ...

Convergence of Strategies in Simple Co-Adapting Games

Simultaneously co-adapting agents in an uncooperative setting can result in a non-stationary environment where optimisation or learning is difficult and where the agents' strategies may not converge to solutions. This work looks at simple simultaneous-...

Contributors
  • Nottingham Trent University
  • Aberystwyth University
  • University of Stirling
  • Aberystwyth University

Acceptance Rates

FOGA '15 Paper Acceptance Rate: 16 of 26 submissions (62%)
Overall Acceptance Rate: 72 of 131 submissions (55%)
Year       Submitted   Accepted   Rate
FOGA '21        21         10      48%
FOGA '19        31         15      48%
FOGA '17        23         13      57%
FOGA '15        26         16      62%
FOGA '09        30         18      60%
Overall        131         72      55%