
Learning to Control Differential Evolution Operators

Sharma, Mudita (2019) Learning to Control Differential Evolution Operators. PhD thesis, University of York.

This is the latest version of this item.

Sharma_203024651_York_PhD_Thesis.pdf - Examined Thesis (PDF), 44Mb
Available under License Creative Commons Attribution-Noncommercial-No Derivative Works 2.0 UK: England & Wales.

Abstract

Evolutionary algorithms are widely used for optimisation by researchers in academia and industry. These algorithms have parameters, which have been shown to strongly determine an algorithm's performance. For many decades, researchers have focused on determining optimal parameter values for an algorithm. Each parameter configuration has an attached performance value, which is used to identify a good configuration for an algorithm. Parameter values depend on the problem at hand and are typically set in one of two ways: offline or online selection. Offline tuning assumes that the performance value of a configuration remains the same across all generations of a run, whereas online tuning assumes that the performance value varies from one generation to another. This thesis presents various adaptive approaches, each learning from a range of feedback received from the evolutionary algorithm. The contributions demonstrate the benefits of using online and offline learning together, at different levels, for a particular task. Offline selection is used to tune the hyper-parameters of the proposed adaptive methods, which in turn control the parameters of the evolutionary algorithm on the fly. All the contributions are presented in the context of controlling the mutation strategies of differential evolution. The first contribution demonstrates an adaptive method formulated as a Markov reward process, which aims to maximise the cumulative future reward. The next chapter unifies various adaptive methods from the literature into a framework that can be used to replicate existing methods and test new ones. The hyper-parameters of the methods in the first two chapters are tuned by an offline configurator, irace. The last chapter proposes four methods based on a deep reinforcement learning model.
To test the applicability of the adaptive approaches presented in this thesis, all methods are compared against various adaptive methods from the literature, variants of differential evolution, and other state-of-the-art algorithms on single-objective noiseless problems from the BBOB benchmark set.
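To illustrate the kind of online parameter control the abstract describes, the sketch below shows a minimal differential evolution loop in which the mutation strategy is selected each generation according to a running reward (the fitness improvement it produced), in the spirit of probability-matching adaptive operator selection. This is an illustrative sketch only, not the thesis's actual method: the strategy pool, reward definition, and credit-update rule are assumptions, and the objective is a simple sphere function.

```python
import random

def sphere(x):
    """Toy objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def de_adaptive(dim=5, pop_size=20, gens=100, F=0.5, CR=0.9, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [sphere(ind) for ind in pop]

    # Pool of DE mutation strategies and a running credit per strategy.
    strategies = ["rand/1", "best/1"]
    quality = {s: 1.0 for s in strategies}

    def mutate(i, strat):
        idxs = [j for j in range(pop_size) if j != i]
        r1, r2, r3 = rng.sample(idxs, 3)
        # "best/1" perturbs the current best; "rand/1" perturbs a random member.
        base = pop[fit.index(min(fit))] if strat == "best/1" else pop[r3]
        return [base[d] + F * (pop[r1][d] - pop[r2][d]) for d in range(dim)]

    for _ in range(gens):
        # Probability matching: pick a strategy proportional to its credit.
        total = sum(quality.values())
        strat = rng.choices(strategies,
                            weights=[quality[s] / total for s in strategies])[0]
        reward = 0.0
        for i in range(pop_size):
            donor = mutate(i, strat)
            # Binomial crossover, then greedy one-to-one selection.
            trial = [donor[d] if rng.random() < CR else pop[i][d]
                     for d in range(dim)]
            f_trial = sphere(trial)
            if f_trial < fit[i]:
                reward += fit[i] - f_trial  # credit = fitness improvement
                pop[i], fit[i] = trial, f_trial
        # Exponential recency-weighted update of the chosen strategy's credit.
        quality[strat] = 0.9 * quality[strat] + 0.1 * reward

    return min(fit)

best = de_adaptive()
```

In a full adaptive framework, the hedged choices above (reward definition, decay factor, selection rule) become hyper-parameters, which is exactly where an offline configurator such as irace can be applied, as the abstract outlines.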

Item Type: Thesis (PhD)
Keywords: Online Tuning, Parameter Control, Irace, Differential Evolution, Reinforcement Learning
Academic Units: The University of York > Computer Science (York)
Identification Number/EthosID: uk.bl.ethos.805488
Depositing User: Miss Mudita Sharma
Date Deposited: 22 May 2020 17:16
Last Modified: 21 Jun 2020 09:53
URI: http://etheses.whiterose.ac.uk/id/eprint/26582

Available Versions of this Item

  • Learning to Control Differential Evolution Operators. (deposited 22 May 2020 17:16) [Currently Displayed]

You do not need to contact us to get a copy of this thesis. Please use the 'Download' link(s) above to get a copy.
You can contact us about this thesis. If you need to make a general enquiry, please see the Contact us page.
