Wang, Ziwei ORCID: https://orcid.org/0000-0003-0481-6341 (2022) Evolutionary Optimisation in Convolutional Neural Networks. PhD thesis, University of York.
Abstract
Artificial Neural Networks (ANNs) have improved markedly in performance in recent years and have been used successfully in many computing applications. This growth in ANNs and machine learning applications has led to state-of-the-art networks comprising hundreds of hidden layers, which require millions of operations to process input data and gigabytes of memory to store model parameters. This complexity prevents ANNs from being deployed on resource-constrained platforms for certain applications, such as the Internet of Things (IoT). There is therefore a clear need to optimise ANNs so that their computational cost and model size are reduced while high classification accuracy is maintained. Evolutionary algorithms offer flexible capabilities for solving optimisation problems with one or more objectives; applying them to the optimisation of ANNs is therefore a promising approach.
In this PhD work, evolutionary techniques are applied to the optimisation of ANNs, with the aim of reducing computational cost and parameter size while minimising the loss in classification accuracy. The optimisation is divided into two categories: computational cost optimisation and data representation optimisation. For computational cost optimisation, a multi-objective evolutionary approach is proposed that reduces the size and number of convolution kernels in each convolutional layer and generates trade-offs between computational cost and classification accuracy. For data representation optimisation, an evolutionary-based adaptive integer quantisation methodology is introduced to quantise pre-trained models from 32-bit floating-point representation to small bit-width integer representations.
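As an illustration of the multi-objective search described above (a minimal sketch only; the candidate encoding, the cost proxy, and all names are assumptions for illustration rather than the thesis's actual method), the following Python code keeps the Pareto-optimal trade-offs between classification error and an estimated computational cost for candidate architectures defined by per-layer kernel sizes and kernel counts:

```python
# Hypothetical sketch, not the thesis's implementation: each candidate encodes a
# kernel size and a kernel count per convolutional layer; its fitness is the pair
# (classification error, estimated multiply-accumulate cost), and a simple Pareto
# filter keeps the non-dominated trade-offs. The random "error" stands in for
# training and validating the candidate architecture.
from dataclasses import dataclass
from typing import List, Tuple
import random

@dataclass
class Candidate:
    kernel_sizes: List[int]   # e.g. 3 or 5, one entry per conv layer
    kernel_counts: List[int]  # number of filters, one entry per conv layer

def estimated_cost(c: Candidate) -> int:
    # Crude cost proxy: sum of kernel_size^2 * kernel_count over all layers.
    return sum(k * k * n for k, n in zip(c.kernel_sizes, c.kernel_counts))

def evaluate(c: Candidate) -> Tuple[float, int]:
    # Placeholder objective: real use would train/validate the candidate network.
    return random.random(), estimated_cost(c)

def dominates(a: Tuple[float, int], b: Tuple[float, int]) -> bool:
    # a dominates b if it is no worse in every objective and strictly better in one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(scored: List[Tuple[Candidate, Tuple[float, int]]]):
    # Keep only candidates whose fitness is not dominated by any other candidate.
    return [(c, f) for c, f in scored
            if not any(dominates(g, f) for _, g in scored)]

if __name__ == "__main__":
    population = [Candidate([random.choice([3, 5]) for _ in range(3)],
                            [random.choice([16, 32, 64]) for _ in range(3)])
                  for _ in range(20)]
    scored = [(c, evaluate(c)) for c in population]
    for c, f in pareto_front(scored):
        print(f, c)
```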
The experimental results on computational cost reduction show that the multi-objective evolutionary algorithms achieve large reductions in resource consumption with no significant loss in classification accuracy compared with the original model architectures. The experimental results on data representation optimisation show that the evolutionary-based adaptive integer quantisation methodology can quantise weights and biases to 8-bit integer representation in convolutional layers and 4-bit integer representation in fully-connected layers, with no significant gap in classification accuracy between the pre-trained 32-bit floating-point models and the quantised models.
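For intuition only, a minimal sketch of integer quantisation is shown below, assuming a simple per-tensor symmetric scheme (not necessarily the adaptive scheme developed in the thesis); it illustrates the mapping from 32-bit floating-point weights to 8-bit and 4-bit integers and the resulting reconstruction error:

```python
# Hypothetical sketch, not the thesis's exact scheme: uniform symmetric quantisation
# of a pre-trained FP32 weight tensor to signed n-bit integers, e.g. 8 bits for
# convolutional layers and 4 bits for fully-connected layers.
import numpy as np

def quantise(weights: np.ndarray, bits: int):
    qmax = 2 ** (bits - 1) - 1
    # One scale per tensor, chosen so the largest weight maps to the largest level.
    scale = max(float(np.max(np.abs(weights))), 1e-8) / qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(64, 3, 3, 3).astype(np.float32)  # example conv-layer weights
    for bits in (8, 4):
        q, s = quantise(w, bits)
        err = float(np.max(np.abs(w - dequantise(q, s))))
        print(f"{bits}-bit quantisation, max absolute reconstruction error: {err:.4f}")
```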
Metadata
Supervisors: Tyrrell, Andy and Trefzer, Martin
Keywords: Artificial Neural Network; Convolutional Neural Network; Neural Architecture Search; Evolutionary Algorithm; Multi-objective optimisation
Awarding institution: University of York
Academic Units: The University of York > School of Physics, Engineering and Technology (York)
Academic unit: Electronic Engineering
Identification Number/EthosID: uk.bl.ethos.871150
Depositing User: Mr Ziwei Wang
Date Deposited: 26 Jan 2023 12:54
Last Modified: 21 Mar 2024 16:03
Open Archives Initiative ID (OAI ID): oai:etheses.whiterose.ac.uk:32139
Download
Examined Thesis (PDF)
Filename: Wang_202052388_CorrectedThesisClean.pdf
Licence: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.