Accepted tutorials for PPSN 2016:

Click on a title to download the slides for that tutorial.

Darrell Whitley: Gray Box Optimization in Theory and Practice
Benjamin Doerr: Theory of Evolutionary Computation
Julian Miller, Patricia Ryser-Welch: Graph-based and Cartesian Genetic Programming
Dirk Sudholt: Theory of Parallel Evolutionary Algorithms
Giovanni Squillero, Alberto Tonda: Promoting Diversity in Evolutionary Optimization: Why and How
Dimo Brockhoff: Evolutionary Multiobjective Optimization
Enrique Alba: Intelligent Systems for Smart Cities
Mike Preuss, Michael G. Epitropakis: Advances in Multi-modal Optimization
Stjepan Picek: Evolutionary Computation in Cryptography
Jacqueline Heinerman, Gusz Eiben, Evert Haasdijk, Julien Hubert: Evolutionary Robotics – A Practical Guide to Experimenting with Real Hardware
Nelishia Pillay: Evolutionary Algorithms and Hyper-Heuristics
Luigi Malagò: A Bridge between Optimization over Manifolds and Evolutionary Computation
JJ Merelo: Implementing Evolutionary Algorithms in the Cloud
Carlos Fonseca, Andreia Guerreiro: The Attainment Function Approach to Performance Evaluation in EMO
Per Kristian Lehre, Pietro Oliveto: Runtime Analysis of Evolutionary Algorithms: Basic Introduction
Boris Naujoks, Jörg Stork, Martin Zaefferer, Thomas Bartz-Beielstein: Meta-Model Assisted (Evolutionary) Optimization


Gray Box Optimization in Theory and Practice

Darrell Whitley, Colorado State University (USA)

Abstract

This tutorial will cover Gray Box Complexity and Gray Box Optimization for k-bounded pseudo-Boolean optimization. These problems can also be referred to as Mk Landscapes, and include problems such as MAX-kSAT, spin glass problems and NK Landscapes. Mk Landscape problems are a linear combination of M subfunctions, where each subfunction accepts at most k variables. Under Gray Box optimization, the optimizer is given access to the set of M subfunctions.

If the set of subfunctions is k-bounded and separable, the Gray Box optimizer is guaranteed to return the global optimum with 1 evaluation. If a problem is not deceptive, the Gray Box optimizer also returns the global optimum after 1 evaluation. This means that simple test problems from ONEMAX to “Trap Functions” are solved in 1 evaluation in O(n) time under Gray Box Optimization. If a tree decomposition exists with a fixed bounded tree width, then the problem can be solved using dynamic programming in O(n) time; if the tree width is bounded by lg(n), then the problem can be solved by dynamic programming in O(n^2) time.

Even for those problems that are not trivially solved, Gray Box optimization makes it possible to exactly compute Hamming distance 1 improving moves in constant time. Thus, neither mutation nor enumeration of the Hamming neighborhood is necessary. Under many conditions it is possible to calculate the location of improving moves in a Hamming distance radius r neighborhood, thus selecting improving moves several moves ahead; this can also be done in constant time.

There also exist deterministic forms of recombination that provably return the best possible offspring from a reachable set of offspring. Partition Crossover relies on localized problem decomposition and is invariant to the order of the bits in the representation. The method identifies partitions of nonlinear interaction between variables: variables within a partition must be inherited together, but bits in different partitions can be linearly recombined. Given p partitions, recombination can be done in O(n) time such that crossover returns the best of the 2^p reachable offspring. The offspring can also be proven to be locally optimal in the largest hyperplane subspace in which the two parents reside. Thus, Partition Crossover is capable of directly moving from known local optima to new, high-quality local optima in O(n) time.

These innovations will fundamentally change both Local Search and Evolutionary Algorithms. Empirical results show that combining smart local search with Partition Crossover yields search algorithms that are capable of finding globally optimal solutions for nonlinear problems with a million variables in less than 1 minute.
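As a rough illustration of the gray-box setting described above (a minimal sketch of our own, not material from the tutorial), the following Python snippet builds a random k-bounded Mk Landscape as a set of M subfunctions and computes, for every bit, the fitness change caused by flipping it while only touching the subfunctions that contain that bit. All names and parameter values are hypothetical.

```python
# Minimal Mk Landscape sketch: f(x) = sum of M subfunctions, each over at most k variables.
import itertools
import random

def make_random_mk_landscape(n, M, k, seed=0):
    """Each subfunction: (tuple of k distinct variable indices, lookup table over the 2**k settings)."""
    rng = random.Random(seed)
    subfns = []
    for _ in range(M):
        mask = tuple(rng.sample(range(n), k))
        table = {bits: rng.random() for bits in itertools.product((0, 1), repeat=k)}
        subfns.append((mask, table))
    return subfns

def evaluate(x, subfns):
    """f(x) is the sum of the M subfunctions, each reading only its own k variables."""
    return sum(table[tuple(x[i] for i in mask)] for mask, table in subfns)

def flip_deltas(x, subfns, n):
    """delta[i] = f(x with bit i flipped) - f(x).
    Only the subfunctions containing bit i contribute, which is why improving moves
    can be identified and maintained cheaply under gray-box access."""
    delta = [0.0] * n
    for mask, table in subfns:
        current = table[tuple(x[i] for i in mask)]
        for i in mask:
            flipped = tuple(x[m] if m != i else 1 - x[m] for m in mask)
            delta[i] += table[flipped] - current
    return delta

if __name__ == "__main__":
    n, M, k = 20, 30, 3
    subfns = make_random_mk_landscape(n, M, k)
    rng = random.Random(42)
    x = [rng.randint(0, 1) for _ in range(n)]
    deltas = flip_deltas(x, subfns, n)
    improving = [i for i in range(n) if deltas[i] > 0]
    print("f(x) =", round(evaluate(x, subfns), 3), "| improving flips:", improving)
```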

Short Biography

Prof Whitley has published more than 200 papers, and his work has garnered more than 18,000 citations. He has served as Chair of ACM SIGEVO, and as Editor-in-chief of the journal Evolutionary Computation.

Theory of Evolutionary Computation

Benjamin Doerr, Ecole Polytechnique de Paris (France)

Abstract

Theoretical research has always accompanied the development and analysis of evolutionary algorithms, both by explaining observed phenomena in a very rigorous manner and by creating new ideas. Since the methodology of theory research is very different from that of experimental or applied research, non-theory researchers occasionally find it hard to understand and profit from theoretical research. Overcoming this gap in our research field is the target of this tutorial. Independent of particular theoretical subdisciplines or methods like runtime analysis or landscape theory, we aim at making theory accessible to researchers with little prior exposure to theory research. In particular,
– we describe what theory research in EC is and what it aims at, and showcase some of the key findings of the last 15 years,
– we discuss the particular strengths and limitations of theory research,
– we show how to read, understand, interpret, and profit from theory results.

Short Bio

Benjamin Doerr is a full professor at the Ecole Polytechnique (France). His research area is the theory both of problem-specific algorithms and of randomized search heuristics like evolutionary algorithms. Major contributions to the latter include runtime analyses for evolutionary algorithms and ant colony optimizers, as well as the further development of the drift analysis method, in particular, multiplicative and adaptive drift. In the young area of black-box complexity, he proved several of the current best bounds.

Together with Frank Neumann and Ingo Wegener, Benjamin Doerr founded the theory track at GECCO, served as its co-chair 2007-2009 and served again in 2014. He is a member of the editorial boards of “Evolutionary Computation”, “Natural Computing”, “Theoretical Computer Science”, “Information Processing Letters”, and “Journal of Complexity”.

Graph-based and Cartesian Genetic Programming

Julian Miller and Patricia Ryser-Welch, University of York (UK)

Abstract

Genetic Programming is often associated with a tree representation for encoding expressions and algorithms. However, graphs are also very useful and flexible program representations which can be applied to many domains (e.g. electronic circuits, neural networks, algorithms).

Over the years a variety of graph representations have been explored, such as Parallel Distributed Genetic Programming (PDGP), Linear-Graph Genetic Programming, Enzyme Genetic Programming, Graph Structured Program Evolution (GRAPE) and Cartesian Genetic Programming (CGP).

Cartesian Genetic Programming (CGP) is probably the best known form of graph-based Genetic Programming. It was developed by Julian Miller in 1999-2000. In its classic form, it uses a very simple integer address-based genetic representation of a program in the form of a directed graph. CGP has been adopted by a large number of researchers in many domains.

In a number of studies, CGP has been shown to compare favourably with other GP techniques in terms of efficiency. It is also very simple to program. Since its original formulation, the classical form of CGP has undergone a number of developments which have made it more useful, efficient and flexible in various ways. These include the addition of automatically defined functions (modular CGP), self-modification operators (self-modifying CGP), the encoding of artificial neural networks (CGPANNs) and the evolution of iterative programs (iterative CGP).
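To make the representation concrete, here is a minimal toy sketch of our own (not the tutorial's code) of classic CGP with a single row of nodes: an integer genotype addresses a feed-forward graph of primitive functions, and nodes that are not reached by the output are simply ignored. The function set and parameters are illustrative assumptions.

```python
# Toy Cartesian Genetic Programming genotype and decoder (single row, arity-2 functions).
import operator
import random

FUNCTIONS = [operator.add, operator.sub, operator.mul, lambda a, b: a]  # primitive set (assumed)

def random_genotype(n_inputs, n_nodes, rng):
    """Each node gene: (function index, input address 1, input address 2).
    Addresses refer to program inputs (0..n_inputs-1) or to earlier nodes."""
    genes = []
    for i in range(n_nodes):
        max_addr = n_inputs + i
        genes.append((rng.randrange(len(FUNCTIONS)),
                      rng.randrange(max_addr),
                      rng.randrange(max_addr)))
    output_gene = rng.randrange(n_inputs + n_nodes)  # which input/node the program outputs
    return genes, output_gene

def execute(genotype, inputs):
    """Decode the graph: evaluate nodes left to right; nodes not used by the
    output simply have no effect (CGP's implicit redundancy)."""
    genes, output_gene = genotype
    values = list(inputs)
    for f_idx, a, b in genes:
        values.append(FUNCTIONS[f_idx](values[a], values[b]))
    return values[output_gene]

if __name__ == "__main__":
    rng = random.Random(1)
    geno = random_genotype(n_inputs=2, n_nodes=6, rng=rng)
    print("output for inputs (3, 4):", execute(geno, (3, 4)))
```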

Short Bio

Julian F. Miller has a BSc in Physics (Lond), a PhD in Nonlinear Mathematics (City) and a PGCLTHE (Bham) in Teaching. He is an Honorary Fellow in the Department of Electronics at the University of York. He has chaired or co-chaired sixteen international workshops, conferences and conference tracks in Genetic Programming (GP) and Evolvable Hardware. He is a former associate editor of IEEE Transactions on Evolutionary Computation and is currently an associate editor of the Journal of Genetic Programming and Evolvable Machines and of Natural Computing. He is on the editorial boards of the journals Evolutionary Computation, International Journal of Unconventional Computing and Journal of Natural Computing Research. He has publications in genetic programming, evolutionary computation, quantum computing, artificial life, evolvable hardware, computational development, and nonlinear mathematics. He is a highly cited author with over 6000 citations and over 250 publications in related areas. He has given twelve tutorials on genetic programming and evolvable hardware at leading conferences in evolutionary computation. He received the prestigious EvoStar award in 2011 for outstanding contribution to the field of evolutionary computation. He is the inventor of a highly cited method of genetic programming known as Cartesian Genetic Programming and edited the first book on the subject in 2011.

Patricia Ryser-Welch has a “Diplome d’informatique de gestion” from the ENIG (La Chaux-de-Fonds), a BSc (Honours) in mathematical science (Open University), a Cert-Ed in education (Sunderland) and an MSc in Computing for Commerce and Industry (Open University). She is currently a PhD student in the Department of Electronics at the University of York, under the supervision of Julian Miller, Martin Trefzer, and Jerry Swan. Having pursued an international career in the computing and education sectors for more than 20 years, she is currently focusing her study on the discovery of human-readable algorithms. She is also an associate lecturer at the Open University, teaching programming, engineering and human computer interaction.


Theory of Parallel Evolutionary Algorithms

Dirk Sudholt, University of Sheffield (UK)

Abstract

Evolutionary algorithms (EAs) have given rise to many parallel variants, fuelled by the rapidly increasing number of CPU cores and the ready availability of computation power through GPUs and cloud computing. A very popular approach is to parallelize evolution in island models, or coarse-grained EAs, by evolving different populations on different processors. These populations run independently most of the time, but they periodically communicate genetic information to coordinate search. Many applications have shown that island models can speed up computation time significantly, and that parallel populations can further increase solution diversity. However, there is little understanding of when and why island models perform well, and what impact fundamental parameters have on performance.

This tutorial will give an overview of recent theoretical results on the runtime of parallel evolutionary algorithms. These results give insight into the fundamental working principles of parallel EAs, assess the impact of parameters and design choices on performance, and contribute to the design of more effective parallel EAs.
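For readers new to the model, the following sketch (our own simplified illustration, not from the tutorial) shows the basic island-model loop: several single-individual populations evolve independently and periodically migrate a copy of their current solution around a ring topology. All parameter values are arbitrary.

```python
# Minimal island model: (1+1)-style islands on a ring with periodic migration.
import random

def onemax(x):
    return sum(x)

def mutate(x, rng):
    """Standard bit mutation: flip each bit with probability 1/n."""
    return [1 - b if rng.random() < 1.0 / len(x) else b for b in x]

def island_model(n=50, islands=4, generations=200, migration_interval=25, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(islands)]
    for gen in range(1, generations + 1):
        # independent evolution on each island
        for i in range(islands):
            child = mutate(pop[i], rng)
            if onemax(child) >= onemax(pop[i]):
                pop[i] = child
        # periodic communication: pass a copy of each island's solution around the ring
        if gen % migration_interval == 0:
            migrants = [list(pop[i]) for i in range(islands)]
            for i in range(islands):
                incoming = migrants[(i - 1) % islands]
                if onemax(incoming) > onemax(pop[i]):
                    pop[i] = incoming
    return max(pop, key=onemax)

if __name__ == "__main__":
    best = island_model()
    print("best OneMax value found:", onemax(best))
```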

Short Bio

Dirk Sudholt is a Lecturer at the University of Sheffield, UK, where he is heading the newly established Algorithms research group. He obtained his Diplom (Master’s) degree in 2004 and PhD in computer science in 2008 from the Technische Universitaet Dortmund, Germany, under the supervision of Prof. Ingo Wegener. He has held postdoc positions at the International Computer Science Institute in Berkeley, California, working in the Algorithms group led by Prof. Richard M. Karp, and at the University of Birmingham, UK, working with Prof. Xin Yao. Dirk’s research focuses on the computational complexity of randomized search heuristics such as evolutionary algorithms, including hybrid and parallel variants, and swarm intelligence algorithms like ant colony optimization and particle swarm optimization. He has published more than 60 refereed papers in international conferences and journals and has won 8 best paper awards at GECCO and PPSN. He is an editorial board member of the Evolutionary Computation journal, associate editor for Natural Computing and co-chair of the GECCO 2016 theory track.


Promoting Diversity in Evolutionary Optimization: Why and How

Giovanni Squillero, Politecnico di Torino (Italy)

Alberto Tonda, INRA (France)

Abstract

Divergence of character is a cornerstone of natural evolution. In contrast, evolutionary optimization processes are plagued by an endemic lack of diversity: all candidate solutions eventually crowd the very same areas in the search space. Such a “lack of speciation” was pointed out in the seminal work of Holland in 1975, and is nowadays well known among scholars. It has different effects on different search algorithms, but almost all of them are quite deleterious. The problem is usually labeled with the oxymoron “premature convergence”, that is, the tendency of an algorithm to converge toward a point where it was not supposed to converge in the first place. The scientific literature contains several efficient diversity-preservation methodologies, ranging from general techniques to problem-dependent heuristics. However, the fragmentation of the field and differences in terminology have led to a general dispersion of this important corpus of knowledge across many small, hard-to-track research lines.

Upon completion of this tutorial, attendees will understand the root causes and dangers of “premature convergence”. They will know the main research lines in the area of “diversity promotion”. They will be able to choose an effective solution from the literature, or design a new one more tailored to their specific needs.
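As one classic example from this literature (our own minimal sketch, not necessarily material covered in the tutorial), fitness sharing scales an individual's fitness down by its "niche count", so that crowded regions of the search space become less attractive:

```python
# Fitness sharing: divide raw fitness by the niche count (number of nearby individuals,
# weighted by distance), so isolated individuals keep more of their fitness.
def shared_fitness(population, raw_fitness, distance, sigma=1.0, alpha=1.0):
    shared = []
    for i, x in enumerate(population):
        niche_count = 0.0
        for y in population:
            d = distance(x, y)
            if d < sigma:
                niche_count += 1.0 - (d / sigma) ** alpha
        shared.append(raw_fitness[i] / niche_count)  # niche_count >= 1 (self-distance is 0)
    return shared

if __name__ == "__main__":
    pop = [0.0, 0.1, 0.2, 5.0]            # three crowded points and one loner
    raw = [1.0, 1.0, 1.0, 1.0]
    dist = lambda a, b: abs(a - b)
    print(shared_fitness(pop, raw, dist, sigma=0.5))   # the loner keeps its full fitness
```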

Short Bio

Giovanni Squillero is an assistant professor at Politecnico di Torino, Torino, Italy. His research interests mix the whole spectrum of bio-inspired metaheuristics with electronic CAD, and selected topics in computational intelligence, games, and multi-agent systems. His activities focus on developing techniques able to achieve “good” solutions while requiring an “acceptable” amount of resources, with main applications in real, industrial problems. Squillero is a member of the “IEEE Computational Intelligence Society Games Technical Committee”. He organized the “EvoHOT” workshops on evolutionary hardware optimization techniques, and he is currently a member of the editorial board of “Genetic Programming and Evolvable Machines”. He is the coordinator of “EvoApplications” for 2015. Since 1997, he has published 3 books (one didactic), about 20 journal articles, 10 book chapters, and more than 100 papers in conference proceedings. He has also edited a dozen books.

Evolutionary Multiobjective Optimization

Dimo Brockhoff, INRIA Lille (France)

Abstract

Many optimization problems are multiobjective, i.e., multiple conflicting criteria need to be considered simultaneously. Due to the conflicts between the objectives, usually no single optimal solution exists. Instead, a set of so-called Pareto-optimal solutions, for which no other solution has better function values in all objectives, emerges.
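For readers unfamiliar with the notion, the following minimal sketch (our own illustration, not part of the tutorial material) shows Pareto dominance and the extraction of the non-dominated subset of a set of objective vectors, assuming minimization in every objective:

```python
# Pareto dominance and non-dominated filtering (minimization assumed).
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in at least one."""
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def non_dominated(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

if __name__ == "__main__":
    points = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
    print(non_dominated(points))   # (3.0, 3.0) is dominated by (2.0, 2.0) and is removed
```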

In practice, Evolutionary Multiobjective Optimization (EMO) algorithms are widely used for solving multiobjective optimization problems. As stochastic blackbox optimizers, EMO approaches cope with nonlinear, nondifferentiable, or noisy objective functions. By inherently working on sets of solutions, they allow the Pareto-optimal set to be approximated in a single algorithm run – as opposed to classical techniques for multicriteria decision making (MCDM), which aim for single solutions.

Defining problems in a multiobjective way has two further advantages:

  • The set of Pareto-optimal solutions may reveal shared design principles (innovization).
  • Singleobjective problems may become easier to solve if auxiliary objectives are added (multiobjectivization).

Within this tutorial, we comprehensively introduce the field of EMO and present selected research results in more detail. More specifically, we

  • explain the basic principles of EMO algorithms in comparison to classical approaches,
  • show a few practical examples motivating the use of EMO, and
  • present a general overview of state-of-the-art algorithms and selected recent research results.


Short Bio

Dimo Brockhoff received his diploma in computer science from University of Dortmund, Germany in 2005 and his PhD (Dr. sc. ETH) from ETH Zurich, Switzerland in 2009. Afterwards, he held two postdoctoral research positions in France at INRIA Saclay Ile-de-France (2009-2010) and at Ecole Polytechnique (2010-2011). Since November 2011, he has been a permanent researcher at INRIA Lille – Nord Europe, France. His research interests are focused on evolutionary multiobjective optimization (EMO), in particular on theoretical aspects of indicator-based search and on the benchmarking of blackbox algorithms in general.


Intelligent Systems for Smart Cities

Enrique Alba, University of Málaga (Spain)

Abstract

The concept of Smart Cities can be understood as a holistic approach to improve the level of development and management of the city in a broad range of services by using information and communication technologies.

It is common to recognize six axes of work within it: i) Smart Economy, ii) Smart People, iii) Smart Governance, iv) Smart Mobility, v) Smart Environment, and vi) Smart Living. In this tutorial we first focus on a capital issue: smart mobility. European citizens and economic actors need a transport system which provides them with seamless, high-quality door-to-door mobility. At the same time, the adverse effects of transport on the climate, the environment and human health need to be reduced. We will show many new systems based on the use of bio-inspired techniques to ease road traffic flow in the city, as well as to allow a customized, smooth experience for travellers (private and public transport).

This tutorial will then discuss potential applications of intelligent systems for energy (like adaptive street lighting), environmental applications (like mobile sensors for air pollution), smart buildings (intelligent design), and several other applications linked to smart living, tourism, and smart municipal governance.

Short Biography

Prof. Enrique Alba received his engineering degree and his PhD in Computer Science in 1992 and 1999, respectively, from the University of Málaga (Spain). He works as a Full Professor at this university with different teaching duties: data communications, distributed programming, software quality, and also evolutionary algorithms, bases for R+D+i and smart cities at graduate and master/doctoral programs. Prof. Alba leads an international team of researchers in the field of complex optimization/learning with applications in smart cities, bioinformatics, software engineering, telecoms, and others. In addition to the organization of international events (ACM GECCO, IEEE IPDPS-NIDISC, IEEE MSWiM, IEEE DS-RT, …), Prof. Alba has offered dozens of postgraduate courses and multiple seminars in more than 30 international institutions, and has directed several research projects (7 with national funds, 5 in Europe, and numerous bilateral actions). Also, Prof. Alba has directed 7 projects for innovation and transference to industry (OPTIMI, Tartessos, ACERINOX, ARELANCE, TUO, INDRA, AOP) and presently he also works as an invited professor at INRIA, the Univ. of Luxembourg, and the Univ. of Ostrava. He is an editor of several international journals and book series of Springer-Verlag and Wiley, and he often reviews articles for more than 30 impact journals. He has published 87 articles in journals indexed by Thomson ISI, 17 articles in other journals, 40 papers in LNCS, and more than 250 refereed conference papers. Besides that, Prof. Alba has published 11 books and 39 book chapters, and has received 6 awards for his professional activities. Prof. Alba’s H index is 44, with more than 9000 citations to his work.

POTENTIAL AUDIENCE: all attendees of PPSN and other invited people from Edinburgh, UK and the world

Advances in Multi-modal Optimization

Mike Preuss, University of Dortmund (Germany) 

Michael G. Epitropakis, Lancaster University (UK)

Abstract

Multimodal optimization is currently becoming established as a research direction that collects approaches from various domains of operational research and evolutionary computation which strive to deliver multiple very good solutions at once. We discuss several scenarios and list currently employed and potentially available performance measures. Furthermore, many state-of-the-art as well as older methods are compared and put into a rough taxonomy. We also discuss recent relevant competitions and their results, and outline possible future developments in this area.

Short Biography

Mike Preuss is Research Associate at ERCIS, the European Research Center for Information Systems, at the University of Muenster, Germany. Previously, he was with the Chair of Algorithm Engineering at the Computer Science Department, TU Dortmund, Germany, where he received his Diploma degree in 1998 and his PhD in 2013. His research interests focus on the field of evolutionary algorithms for real-valued problems, namely on multimodal and multiobjective optimization and the experimental methodology for (non-deterministic) optimization algorithms. He is currently working on the adaptability and applicability of computational intelligence techniques for various engineering domains and computer games, pushing forward modern approaches to experimental analysis such as Exploratory Landscape Analysis (ELA) and innovative uses of surrogate models. He was involved in founding the EvoGames track at Evo* and the Digital Entertainment Technologies and Arts (DETA) track at GECCO. Within the games field, he is mainly interested in AI for realtime strategy (RTS) games and procedural content generation (PCG).

Michael G. Epitropakis received his B.S., M.S., and Ph.D. degrees from the Department of Mathematics, University of Patras, Patras, Greece. Currently, he is a Lecturer in Foundations of Data Science at the Data Science Institute and the Department of Management Science, Lancaster University, Lancaster, UK. His current research interests include computational intelligence, evolutionary computation, swarm intelligence, machine learning and search-based software engineering. He has published more than 30 journal and conference papers. He is an active researcher in multi-modal optimization and a co-organizer of the special session and competition series on Niching Methods for Multimodal Optimization. He is a member of the IEEE Computational Intelligence Society and the ACM SIGEVO.

Target participants and audience:

The tutorial is intended for researchers interested in multimodal optimization and niching methods, their algorithmic development, and their practical applicability. It provides an overview and many links to potentially interesting research areas.

Evolutionary Computation in Cryptography

Stjepan Picek, KU Leuven, Belgium and University of Zagreb, Croatia

Abstract

Evolutionary Computation (EC) has been used with great success on various real-world problems. One domain abundant with numerous difficult problems is cryptology. Cryptology can be divided into cryptography and cryptanalysis and, although not always in an obvious way, EC can be applied to problems from both domains. This tutorial will first give a brief introduction to cryptology intended for a general audience. Afterwards, we concentrate on several topics from cryptography that have been successfully tackled with EC so far and discuss why those topics are suitable for EC. However, care must be taken, since there exist a number of problems that seem to be impossible to solve with EC, and one needs to realize the limitations of these heuristics. We will discuss the choice of appropriate EC techniques (GA, GP, CGP, ES, multi-objective optimization) for various problems and evaluate the importance of that choice. Furthermore, we will discuss the gap between the cryptographic community and the EC community and what that means for the results. In doing so, we place special emphasis on the perspective that cryptography presents a source of benchmark problems for the EC community.

This tutorial will also present some live demos of EC in action when dealing with cryptographic problems.
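In the same spirit, here is a small toy demo of our own (not the tutorial's material): a (1+1)-style hill climber that evolves the truth table of a Boolean function towards high nonlinearity, a standard cryptographic criterion computed via the Walsh-Hadamard transform. The parameters are illustrative only.

```python
# Evolving a Boolean function with high nonlinearity (toy example).
import random

def walsh_hadamard(truth_table):
    """Fast Walsh-Hadamard transform of (-1)^f(x)."""
    w = [1 - 2 * b for b in truth_table]
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

def nonlinearity(truth_table):
    """NL(f) = 2^(n-1) - max|W_f| / 2."""
    return len(truth_table) // 2 - max(abs(v) for v in walsh_hadamard(truth_table)) // 2

def evolve_boolean_function(n_vars=6, generations=2000, seed=0):
    rng = random.Random(seed)
    size = 2 ** n_vars
    best = [rng.randint(0, 1) for _ in range(size)]
    best_nl = nonlinearity(best)
    for _ in range(generations):            # simple (1+1) hill climber on the truth table
        child = list(best)
        child[rng.randrange(size)] ^= 1
        nl = nonlinearity(child)
        if nl >= best_nl:
            best, best_nl = child, nl
    return best, best_nl

if __name__ == "__main__":
    _, nl = evolve_boolean_function()
    print("nonlinearity reached for 6 variables:", nl, "(bent-function bound is 28)")
```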

Short Bio

Stjepan Picek is a postdoctoral researcher at KU Leuven, Belgium, within the COSIC group. He holds an MSc in Electrical Engineering and a double doctorate in computer science from the University of Zagreb, Croatia and Radboud University Nijmegen, The Netherlands. His current research interests include evolutionary computation, cryptology, and machine learning. He is particularly interested in the applications of evolutionary computation to cryptology. He has published more than 30 journal and conference papers. He serves as a reviewer for numerous journals and conferences.

Evolutionary Robotics – A Practical Guide to Experimenting with Real Hardware

Jacqueline Heinerman, Gusz Eiben, Evert Haasdijk and Julien Hubert, VU University Amsterdam (Netherlands)

Abstract

Evolutionary robotics aims to evolve the controllers, the morphologies, or both, for real and/or simulated autonomous robots. Most research in evolutionary robotics is partly or completely carried out in simulation. Although simulation has advantages, e.g., it is cheaper and can be faster, it suffers from the notorious reality gap. Recently, affordable and reliable robots have become commercially available. Hence, setting up a population of real robots is within reach of many research groups today. This tutorial focuses on the know-how required to utilise such a population for running evolutionary experiments. To this end we use Thymio II robots with Raspberry Pi extensions (including a camera). The tutorial explains and demonstrates the work-flow from beginning to end, by going through a case study of a group of Thymio II robots evolving their neural network controllers on-the-fly to learn to collect objects. Besides the methodology and lessons learned, we also spend time on how to code such experiments.

Short Bio

Jacqueline Heinerman has been a PhD student in the Computational Intelligence group at VU Amsterdam since 2015. She obtained her MSc cum laude in 2012 in Econometrics and specializes in optimisation algorithms. She worked as a data consultant at PwC before starting her PhD, in which she is employed on the DREAM project. Within this project she is responsible for the knowledge exchange between robots for a better and faster consolidation of experiences.
Besides the DREAM project, she supervises a project called “Ethical Implications of Self-Improving Intelligent Robots”, which looks at the ethical implications and desirability of self-improving robots and at the possibilities and necessity of controlling open-ended learning.

Gusz Eiben is Full Professor of Computational Intelligence at the Computer Science Department of the VU Amsterdam and Visiting Professor in the Department of Electronics of the University of York. Anecdotal fact: he is the first author of the first paper of the first European conference in the area, PPSN 1990. Since then he has published several research papers and co-authored the first comprehensive book on Evolutionary Computing. He has been an organizing committee member of practically all major international evolutionary conferences and an editorial board member of related international journals. A significant part of his work concerns the design and calibration of evolutionary algorithms in an off-line (parameter tuning) and in an on-line fashion (parameter control). Over the last couple of years he has become interested in embodied evolutionary processes, hence in evolutionary robotics. This research is driven by the grand vision of the Evolution of Things.

Evert Haasdijk is assistant professor in the Computational Intelligence group at the VU. He has been with the computational intelligence group at the VU since 2008, researching on-line evolution in robots. Before that, he was a research assistant at Tilburg University, researching social learning in populations of software agents. Dr Haasdijk has ample experience in evolutionary computation, stretching back to the successful PAPAGENA project in 1992, in which he participated as an industry partner. He has served on the program committees of well-established conferences in the field of evolutionary computation (CEC, GECCO), was local chair for GECCO 2013 and (co-)organised various workshops and tracks such as the EvoROBOT track at the EvoSTAR conferences and the International Workshop on the Evolution of Physical Systems at the ECAL and ALIFE conferences. Dr Haasdijk was guest editor for the Special Issue on Evolutionary Robotics of the Evolutionary Intelligence journal and an invited speaker at the PPSN XIII Workshop on Nature-Inspired Techniques for Robotics.

Julien Hubert is a postdoctoral researcher in the Computational Intelligence Group at the VU University Amsterdam. Originally a computer scientist, his interests led him to study cognitive science and evolutionary robotics. His main research themes are embodied learning and time perception, which he studied in simulated and real robots while completing his PhD at the University of Tokyo. He is particularly interested in the evolution of time perception and learning strategies in embodied dynamical systems.
At the CI group, Dr. Hubert is part of the DREAM project where he focuses on developing strategies for the acquisition and exchange of information within a group of robots with the aim of improving their learning performance.


Evolutionary Algorithms and Hyper-Heuristics

Nelishia Pillay, University of KwaZulu-Natal (South Africa)

Abstract

Hyper-heuristics is a rapidly developing domain which has proven to be effective at providing generalized solutions to problems and across problem domains. Evolutionary algorithms have played a pivotal role in the advancement of hyper-heuristics, especially generation hyper-heuristics. Evolutionary algorithm hyper-heuristics have been successfully applied to solving problems in various domains including packing problems, educational timetabling, vehicle routing, permutation flowshop and financial forecasting, amongst others. The aim of the tutorial is firstly to provide an introduction to evolutionary algorithm hyper-heuristics for researchers interested in working in this domain. An overview of hyper-heuristics will be provided. The tutorial will examine each of the four categories of hyper-heuristics, namely selection constructive, selection perturbative, generation constructive and generation perturbative, showing how evolutionary algorithms can be used for each type of hyper-heuristic. A case study will be presented for each type of hyper-heuristic to provide researchers with a foundation to start their own research in this area. Challenges in the implementation of evolutionary algorithm hyper-heuristics will be highlighted. An emerging research direction is the use of hyper-heuristics for the automated design of computational intelligence techniques. The tutorial will look at the synergistic relationship between evolutionary algorithms and hyper-heuristics in this area. The use of hyper-heuristics for the automated design of evolutionary algorithms will be examined, as well as the application of evolutionary algorithm hyper-heuristics to the design of computational intelligence techniques. The tutorial will end with a discussion session on future directions in evolutionary algorithms and hyper-heuristics.
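To fix ideas, the sketch below (our own minimal example, not the tutorial's material) shows one of the four categories in its simplest form: a selection perturbative hyper-heuristic that chooses among low-level perturbation heuristics, favouring those that have improved the solution most often. The problem, heuristics and parameters are purely illustrative.

```python
# A simple selection perturbative hyper-heuristic with score-based heuristic selection.
import random

def hyper_heuristic(initial, objective, low_level_heuristics, iterations=1000, seed=0):
    rng = random.Random(seed)
    solution, value = initial, objective(initial)
    scores = [1.0] * len(low_level_heuristics)       # one score per low-level heuristic
    for _ in range(iterations):
        idx = rng.choices(range(len(low_level_heuristics)), weights=scores)[0]
        candidate = low_level_heuristics[idx](solution, rng)
        cand_value = objective(candidate)
        if cand_value <= value:                      # accept non-worsening moves
            if cand_value < value:
                scores[idx] += 1.0                   # reward the heuristic that improved
            solution, value = candidate, cand_value
    return solution, value

if __name__ == "__main__":
    def flip_one(s, rng):
        i = rng.randrange(len(s))
        return [1 - b if j == i else b for j, b in enumerate(s)]

    def flip_two(s, rng):
        return flip_one(flip_one(s, rng), rng)

    # toy problem: minimize the number of ones in a bit string, starting from all ones
    best, value = hyper_heuristic([1] * 20, sum, [flip_one, flip_two])
    print("best value found:", value)
```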

Short Biography

Nelishia Pillay is an Associate Professor in the School of Mathematics, Statistics and Computer Science at the University of KwaZulu-Natal. She holds a PhD in Computer Science from the University of KwaZulu-Natal. Her research areas include hyper-heuristics, combinatorial optimization, genetic programming, genetic algorithms and other biologically-inspired methods. She has published in these areas in journals and in national and international conference proceedings. She is a member of the IEEE Task Force on Hyper-Heuristics within the Technical Committee on Intelligent Systems and Applications of the IEEE Computational Intelligence Society, and of the Technical Committee on Soft Computing under the IEEE Systems, Man, and Cybernetics Society. She has served on program committees for numerous national and international conferences and is a reviewer for various international journals. She is an active researcher in the field of evolutionary algorithm hyper-heuristics for combinatorial optimization and design. This is one of the focus areas of the NICOG (Nature Inspired Computing Optimization) research group which she has established.

A Bridge between Optimization over Manifolds and Evolutionary Computation

Luigi Malagò, Shinshu University, Japan

Abstract

The aim of this tutorial is to explore the promising connection between the well-consolidated field of optimization over manifolds and evolutionary computation. In mathematics, optimization over manifolds deals with the design and analysis of algorithms for optimization over search spaces which admit a non-Euclidean geometry. One of the simplest examples is probably the sphere, where the shortest path between two points is given by a curve, and not a straight line.

Manifolds may appear in evolutionary computation in at least two contexts. The simplest one is the case when an evolutionary algorithm is employed to optimize a fitness function defined over a manifold, such as the sphere, the cone of positive-definite matrices, the set of rotation matrices, and many others. The second one is more subtle, and is related to the stochastic relaxation of a fitness function. A common approach in model-based evolutionary computation is to search for the optimum of a function by sampling populations from a sequence of probability distributions. For instance, this is the case for evolution strategies, probabilistic model-building genetic algorithms, estimation of distribution algorithms and similar techniques, both in the continuous and in the discrete domain. A strictly related paradigm which can be used to describe the behavior of model-based search algorithms is that of stochastic relaxation, i.e., the optimization of the expected value of the original fitness function with respect to a probability distribution in a statistical model. From this perspective, a model-based algorithm solves a problem which is strictly related to the optimization of the stochastic relaxation over a statistical model. Notably, statistical models are well-known examples of manifolds, where the Fisher information plays the role of the metric tensor. For this reason, it becomes of great interest to compare the standard techniques in the field of optimization over manifolds with the mechanisms implemented by model-based algorithms in evolutionary computation.

The tutorial will consist of two parts. In the first one, a unifying framework for the description of model-based algorithms will be introduced and some standard well-known algorithms will be presented from the perspective of optimization over manifolds. Particular attention will be devoted to first-order methods based on the Riemannian gradient over a manifold, which in the case of a statistical model is known as the natural gradient. In the second part, we will discuss how evolutionary algorithms can be adapted to solve optimization problems defined over manifolds, which constitutes a novel and promising area of research in evolutionary computation.
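As a small self-contained illustration of the stochastic-relaxation view (our own sketch, not the tutorial's material), the following code performs natural-gradient ascent on F(theta) = E[f(x)] for an independent Bernoulli model over bit strings. For this model the Fisher information matrix is diagonal, so the natural gradient reduces to a Monte Carlo estimate of E[f(x)(x_i - theta_i)]. All parameter values are arbitrary.

```python
# Natural-gradient ascent on the stochastic relaxation of a bit-string fitness function,
# using an independent Bernoulli model p_theta(x) = prod_i theta_i^x_i (1-theta_i)^(1-x_i).
import random

def natural_gradient_search(f, n, iterations=300, samples=50, lr=0.05, seed=0):
    rng = random.Random(seed)
    theta = [0.5] * n                            # Bernoulli success probabilities
    for _ in range(iterations):
        pop = [[1 if rng.random() < t else 0 for t in theta] for _ in range(samples)]
        fitnesses = [f(x) for x in pop]
        baseline = sum(fitnesses) / samples      # variance-reduction baseline
        for i in range(n):
            # natural gradient component: mean of (f(x) - baseline) * (x_i - theta_i)
            grad_i = sum((fit - baseline) * (x[i] - theta[i])
                         for x, fit in zip(pop, fitnesses)) / samples
            theta[i] = min(0.95, max(0.05, theta[i] + lr * grad_i))
    return theta

if __name__ == "__main__":
    onemax = sum
    theta = natural_gradient_search(onemax, n=20)
    print("final model (should move toward 1):", [round(t, 2) for t in theta])
```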

Short Biography

Luigi Malagò is Assistant Professor at Shinshu University, Japan. He obtained a Bachelor and a Master degree in Computer Science Engineering at Politecnico di Milano, Italy, in 2004 and 2007, and the Diploma of Alta Scuola Politecnica in 2006. He completed a PhD in Information Technology at Politecnico di Milano in 2012, and his thesis was awarded the Dimitris N. Chorafas Foundation Award. From April 2012 to April 2014, he was a postdoc researcher at the Department of Computer Science at Università degli Studi di Milano, and from May to June 2014 a research assistant at Collegio Carlo Alberto, Moncalieri, Italy. During his career, he has had the opportunity to visit different laboratories, such as the Max Planck Institute for Mathematics in the Sciences in Leipzig (April – July 2010), the RIKEN Brain Science Institute in Tokyo (July – August 2010), and Inria Saclay Île-de-France (Sept. 2014 – Sept. 2015). His research interests mainly focus on information geometry and in particular on first- and second-order optimization methods over statistical manifolds. His research in evolutionary computation aims at the design and analysis of stochastic model-based optimization algorithms, both in discrete and continuous domains.

Implementing Evolutionary Algorithms in the Cloud

JJ Merelo, University of Granada (Spain)

Abstract

Creating experiments that can be easily reproduced and converted in a straightforward way into a report involves knowing a series of techniques that are in widespread use in the open source and commercial software communities. This tutorial will introduce these techniques, including an introduction to cloud computing and DevOps for evolutionary algorithm practitioners, with reference to the tools and platforms that can make the development of new algorithms and problem solutions fast and reproducible.

Short Bio

JJ Merelo is a professor at the University of Granada, Spain, where he has been working since 1988. He has been involved in evolutionary computation since 1993. He is currently director of the Free Software Office at the University of Granada and thinks the best science is open science.


The Attainment Function Approach to Performance Evaluation in Evolutionary Multiobjective Optimization

Carlos M. Fonseca and Andreia P. Guerreiro, University of Coimbra (Portugal)

Abstract

The development of improved optimization algorithms and their adoption by end users depend on the ability to evaluate their performance on the problem classes of interest. In the absence of theoretical guarantees, performance must be evaluated experimentally while taking into account both the experimental conditions and the nature of the data collected.

Evolutionary approaches to multiobjective optimization typically produce discrete Pareto-optimal front approximations in the form of sets of mutually non-dominated points in objective space. Since evolutionary algorithms are stochastic, such non-dominated point sets are random, and vary according to some probability distribution.

In contrast to quality indicators, which map non-dominated point sets to real values and side-step the set nature of the data, the attainment-function approach addresses the non-dominated point set distribution directly. Distributional aspects such as location, variability, and dependence can be estimated from the raw non-dominated point set data.

This tutorial will focus on the attainment function as a tool for the evaluation of the performance of evolutionary multiobjective optimization (EMO) algorithms. In addition to the theoretical foundations of the methodology, computational and visualization issues will be discussed. The application of the methodology will be demonstrated by interactively exploring example data sets with freely available software tools. To conclude, a selection of open questions and directions for further work will be identified.
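As a minimal illustration of the underlying idea (our own sketch, not the freely available software mentioned above), the empirical attainment function at a goal point is simply the fraction of independent runs whose non-dominated output set weakly dominates that goal, assuming minimization:

```python
# Empirical attainment function at a single goal point in objective space.
def attains(point_set, goal):
    """True if at least one point is no worse than the goal in every objective."""
    return any(all(p_i <= g_i for p_i, g_i in zip(p, goal)) for p in point_set)

def empirical_attainment(runs, goal):
    """runs: one non-dominated point set per independent optimizer run."""
    return sum(attains(run, goal) for run in runs) / len(runs)

if __name__ == "__main__":
    runs = [
        [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)],   # run 1
        [(1.5, 2.8), (3.0, 2.5)],               # run 2
        [(0.5, 5.0), (5.0, 0.5)],               # run 3
    ]
    print(empirical_attainment(runs, goal=(2.0, 3.0)))   # attained by runs 1 and 2 -> 2/3
```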

Short Bio

Carlos M. Fonseca is an Associate Professor at the Department of Informatics Engineering of the University of Coimbra, Portugal, and the head of the Evolutionary and Complex Systems (ECOS) group of the Centre for Informatics and Systems of the University of Coimbra (CISUC). He graduated in Electronic and Telecommunications Engineering from the University of Aveiro, Portugal, in 1991, and obtained his doctoral degree from the University of Sheffield, U.K., in 1996. His research has been devoted mainly to evolutionary computation and multiobjective optimization. He has led the development of the attainment-function approach as a statistical methodology for the evaluation of multiobjective optimization algorithms, and of efficient algorithms to support it, since the 1990s. His current research interests include multiobjective optimization, evolutionary algorithms, experimental assessment of algorithms, dynamical systems, and engineering-design optimization. He was a General co-Chair of the International Conference on Evolutionary Multi-Criterion Optimization (EMO) in 2003, 2009 and 2013, and a Technical co-Chair of the IEEE Congress on Evolutionary Computation (CEC) in 2000 and 2005. He is a member of the Evolutionary Multi-Criterion Optimization Steering Committee and of the Portuguese Operations Research Association (APDIO).

Andreia P. Guerreiro is currently a Ph.D. student and a member of the Evolutionary and Complex Systems (ECOS) group of the Centre for Informatics and Systems of the University of Coimbra (CISUC), Portugal. She obtained her Master’s degree in Information Systems and Computer Engineering from Instituto Superior Técnico, Portugal, in 2011. Her research work has comprised the development of efficient algorithms for the assessment of evolutionary multiobjective optimization algorithms and, in particular, for the computation of the empirical attainment function. Her research interests include multiobjective optimization, algorithm design and evolutionary algorithms.

Runtime Analysis of Evolutionary Algorithms: Basic Introduction

Per Kristian Lehre, University of Nottingham, (UK)

Pietro S. Oliveto, University of Sheffield, (UK)

Abstract:

Evolutionary algorithm theory has studied the time complexity of evolutionary algorithms for more than 20 years. This tutorial presents the foundations of this field. We introduce the most important notions and definitions used in the field and consider different evolutionary algorithms on a number of well-known and important example problems, through a careful and thorough introduction of important analytical tools and methods, including fitness- and level-based analysis, typical events and runs, and drift analysis. By the end of the tutorial the attendees will be able to apply these techniques to derive relevant runtime results for non-trivial evolutionary algorithms.
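As a flavour of one such tool, here is the textbook fitness-level argument for the (1+1) EA on ONEMAX (a standard example, not necessarily the exact presentation used in the tutorial). If the current search point has i one-bits, an improvement happens at least when exactly one of the n - i zero-bits flips and all other bits stay unchanged:

```latex
% Fitness-level bound for the (1+1) EA on ONEMAX (standard textbook result)
\[
  s_i \;\ge\; (n-i)\cdot\frac{1}{n}\Bigl(1-\frac{1}{n}\Bigr)^{n-1} \;\ge\; \frac{n-i}{en},
  \qquad
  \mathrm{E}[T] \;\le\; \sum_{i=0}^{n-1}\frac{1}{s_i}
  \;\le\; \sum_{i=0}^{n-1}\frac{en}{n-i} \;=\; e\,n\,H_n \;=\; O(n\log n),
\]
```

where s_i is a lower bound on the probability of leaving the set of solutions with i one-bits and H_n denotes the n-th harmonic number.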

In addition to custom-tailored methods for the analysis of evolutionary algorithms we also introduce the relevant tools and notions from probability theory in an accessible form. This makes the tutorial appropriate for everyone with an interest in the theory of evolutionary algorithms without the need to have prior knowledge of probability theory and analysis of randomised algorithms.

Variants of this tutorial have been presented at GECCO 2013-2015, attracting well over 50 participants each time. The tutorial will be based on the ‘Theoretical analysis of stochastic search heuristics’ chapter of the forthcoming Springer Handbook of Heuristics.

Short Biography

Per Kristian Lehre received MSc and PhD degrees in Computer Science from the Norwegian University of Science and Technology (NTNU) in Trondheim, Norway. He finished the PhD in 2006 under the supervision of Prof Pauline Haddow, and joined the School of Computer Science at The University of Birmingham, UK, as a Research Fellow in January 2007 with Prof Xin Yao. He was a Post Doctoral Fellow at DTU Informatics, Technical University of Denmark in Lyngby, Denmark from April 2010. Since September 2011 he has been a Lecturer in the School of Computer Science at the University of Nottingham, UK. Dr Lehre’s research interests are in theoretical aspects of nature-inspired search heuristics, in particular runtime analysis of population-based evolutionary algorithms. His research has won numerous best paper awards, including at GECCO (2013, 2010, 2009, 2006), ICSTW (2008), and ISAAC (2014). He is vice-chair of the IEEE Task Force on Theoretical Foundations of Bio-inspired Computation, and a member of the editorial board of Evolutionary Computation. He has guest-edited special issues of Theoretical Computer Science and IEEE Transactions on Evolutionary Computation on theoretical foundations of evolutionary computation. Dr Lehre has given many tutorials on evolutionary computation at summer schools (UK: 2007; France: 2009, 2010, and 2011; Estonia: 2010), as well as at major conferences and workshops (GECCO 2013 and 2014, CEC 2013, ThRaSH 2013). He is the main coordinator of the 2M euro EU-funded project SAGE, which brings together the theory of evolutionary computation and population genetics.

Pietro S. Oliveto is currently a Vice-Chancellor Fellow and an EPSRC Early Career Fellow at the University of Sheffield, UK. He received the Laurea degree in computer science from the University of Catania, Italy in 2005 and the PhD degree from the University of Birmingham, UK in 2009. From October 2007 to April 2008, he was a visiting researcher at the Efficient Algorithms and Complexity Theory Institute at the Department of Computer Science of the University of Dortmund, where he collaborated with Prof. Ingo Wegener’s research group. He has worked at the University of Birmingham, respectively, from 2009 to 2010 as an EPSRC PhD+ Fellow and from 2010 to 2013 as an EPSRC Postdoctoral Fellow in Theoretical Computer Science. His main research interest is the time complexity analysis of randomized search heuristics for combinatorial optimization problems. He has published a review paper of the field and two book chapters on the theoretical techniques for the analysis. He has won best paper awards at the GECCO 2008, ICARIS 2011 and GECCO 2014 conferences and came very close at CEC 2009 and ECTA 2011 through best paper nominations. Dr. Oliveto has given tutorials on the runtime complexity analysis of EAs at WCCI 2012, CEC 2013, GECCO 2013, WCCI 2014, GECCO 2014, GECCO 2015 and SSCI 2015. He has guest-edited special issues of the Springer Journal of Computer Science and Technology and of the Evolutionary Computation journal, and has co-chaired the 2015 IEEE Symposium on Foundations of Computational Intelligence (FOCI 2015). He is part of the Steering Committee of the annual workshop on Theory of Randomized Search Heuristics (ThRaSH), a member of the EPSRC Peer Review College, and Chair of the IEEE CIS Task Force on Theoretical Foundations of Bio-inspired Computation.

Meta-Model Assisted (Evolutionary) Optimization

Boris Naujoks, Jörg Stork, Martin Zaefferer, and Thomas Bartz-Beielstein
TH Köln (Germany)

Abstract

Meta-model assisted optimization is a well-recognized research area. When the evaluation of an objective function is expensive, meta-model assisted optimization yields huge improvements in optimization time or cost in a large number of different scenarios. Hence, it is extremely useful for numerous real-world applications. These include, but are not limited to, the optimization of designs like airfoils or ship propulsion systems, chemical processes, biogas plants, composite structures, and electromagnetic circuit design.

This tutorial is largely focused on evolutionary optimization assisted by meta-models, and has the following aims. Firstly, we will provide a detailed understanding of the established concepts and distinguished methods in meta-model assisted optimization. To this end, we will present an overview of current research and open issues in this field. Moreover, we aim for a practical approach: the tutorial should enable the participants to apply up-to-date meta-modelling approaches to actual problems at hand. Afterwards, we will discuss typical problems and their solutions with the participants. Finally, the tutorial offers new perspectives by taking a look into areas where links to meta-modelling concepts have been established more recently, e.g., the application of meta-models in multi-objective optimization or in combinatorial search spaces.
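As a minimal sketch of the overall loop (our own illustration under simplifying assumptions, not the tutorial's material), the code below spends expensive evaluations only on the offspring that a cheap surrogate, here a simple inverse-distance-weighted interpolation of all points evaluated so far, predicts to be most promising. The model choice, test problem and parameters are purely illustrative; practical meta-model assisted optimizers typically rely on Kriging or comparable models.

```python
# Surrogate-assisted (1, lambda)-style search: pre-screen offspring with a cheap model,
# then spend the expensive evaluation only on the most promising candidate.
import math
import random

def surrogate_predict(x, archive):
    """Inverse-distance-weighted prediction from (point, value) pairs."""
    weights, total = 0.0, 0.0
    for xa, ya in archive:
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, xa)))
        if d < 1e-12:
            return ya
        w = 1.0 / d ** 2
        weights += w
        total += w * ya
    return total / weights

def surrogate_assisted_es(expensive_f, dim, budget=30, offspring=20, sigma=0.3, seed=0):
    rng = random.Random(seed)
    parent = [rng.uniform(-2, 2) for _ in range(dim)]
    archive = [(parent, expensive_f(parent))]          # all truly evaluated points
    for _ in range(budget - 1):
        candidates = [[p + rng.gauss(0, sigma) for p in parent] for _ in range(offspring)]
        best_candidate = min(candidates, key=lambda c: surrogate_predict(c, archive))
        value = expensive_f(best_candidate)            # only one true evaluation per generation
        archive.append((best_candidate, value))
        if value <= min(v for _, v in archive[:-1]):
            parent = best_candidate
    return min(archive, key=lambda t: t[1])

if __name__ == "__main__":
    sphere = lambda x: sum(xi ** 2 for xi in x)        # stand-in for an expensive simulation
    best, value = surrogate_assisted_es(sphere, dim=3)
    print("best value found with 30 true evaluations:", round(value, 4))
```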

Short Bio

Boris Naujoks is a professor of Applied Mathematics at TH Köln (Cologne University of Applied Sciences). He joined TH Köln directly after he received his PhD from Dortmund Technical University in 2011. During his time in Dortmund, Boris worked as a research assistant in different projects and gained industrial experience working for different SMEs. Meanwhile, he enjoys the combination of teaching mathematics as well as computer science and exploring EC and CI techniques at the Campus Gummersbach of TH Köln. He focuses on multi-objective (evolutionary) optimization, in particular hypervolume-based algorithms, surrogate-assisted (evolutionary) optimization, and the (industrial) applicability of the explored methods. He contributes to his research area by reviewing for most established journals and conferences in the field and organizes workshops, journal special issues, conference tracks, etc.

Jörg Stork (M.Eng.) is a research associate at TH Köln. He has been a member of the SPOTSeven research team, supervised by Prof. Dr. Thomas Bartz-Beielstein, since 2009. He did his bachelor's degree in electrical engineering with a focus on automation and his master's in Automation & IT (2013). Currently he is doing his PhD thesis in cooperation with the Vrije Universiteit Amsterdam. His main research interest is continuous global optimization with a focus on surrogate modeling and evolutionary computation.

Martin Zaefferer is doing a cooperative PhD at TH Köln and TU Dortmund. He studied electrical engineering with a focus on automation at TH Köln and graduated in 2010. He received his master’s degree in Engineering (TH Köln, Automation & IT) in 2012. Since then he has been involved in several research projects at the research center CIplus (TH Köln), dealing with methods from the fields of computational intelligence and data mining as well as their application in industry. Evolutionary computation and surrogate-model-based optimization are at the center of his interests, with a focus on combinatorial optimization problems.

Thomas Bartz-Beielstein received his Ph.D. degree in Computer Science from TU Dortmund. Since 2006, when he became a professor at TH Köln, Dr. Bartz-Beielstein has built a research team of international status and visibility. He is head of the research center CIplus and initiator of the SPOTSeven team. His expertise lies in optimization, simulation, and statistical analysis of complex real-world problems. He is one of the leaders in the field of statistical analysis of optimization algorithms and the inventor and driving force behind the sequential parameter optimization technology (SPOT). Prof. Bartz-Beielstein has more than 100 publications on computational intelligence, optimization, simulation, and experimental research. His books on experimental research are considered milestones in this emerging field. He chairs the prestigious track “Evolutionary Computation in Practice” at GECCO and has presented several tutorials on experimental research at GECCO, PPSN, and WCCI.