Sanusi, Ibrahim E (2019) OPTIMAL AND ADAPTIVE CONTROL FRAMEWORKS USING REINFORCEMENT LEARNING FOR TIME-VARYING DYNAMICAL SYSTEMS. PhD thesis, University of Sheffield.
Abstract
The performance of complex propulsion and power systems is affected by a vast number of varying factors, such as gradual system degradation, engine build differences and changing operating conditions. Owing to these variations, prior characterisation of system performance metrics, such as the fuel efficiency function and constraints, is infeasible. Existing model-based control approaches are therefore inherently conservative, at the expense of system performance, because they are unable to fully characterise the system variations. The system performance characteristics affected by these variations are typically used for health monitoring and maintenance management, but the opportunities to use them to complement the control design have received little attention. It is therefore increasingly important to use information about the system performance characteristics in the control system design whilst considering the reliability of its implementation. This thesis considers the design of direct adaptive frameworks that exploit emerging diagnostic technologies and enable the direct use of complex performance metrics to deliver self-optimising control systems in the face of disturbances and system variations. These frameworks are termed condition-based control techniques, and this thesis extends reinforcement learning (RL) theory, which has achieved significant successes in computing and artificial intelligence, to these new frameworks and applications. Consequently, an online RL framework was developed for this class of complex propulsion and power systems that uses the performance metrics to learn and adapt the system control directly. The RL adaptations were further integrated into existing baseline controller structures whilst maintaining the safety and reliability of the underlying system. Furthermore, two online optimal RL tracking control frameworks were developed for time-varying dynamical systems, based on a new augmented formulation with integral control. The proposed online RL frameworks advance the state of the art in tracking control applications by avoiding restrictive assumptions on the reference model dynamics, avoiding discounted tracking costs, and guaranteeing zero steady-state tracking error. Finally, an online power management optimisation scheme for hybrid systems was developed that uses a condition-based RL adaptation. The proposed scheme is able to learn and compensate for gradual system variations and to learn online the optimal power management strategy between the hybrid power sources, given future load predictions. In this way, improved system performance is delivered and a through-life adaptation strategy is provided.
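To make the augmented formulation with integral control mentioned above concrete, the sketch below shows a standard integral-augmented tracking set-up with an undiscounted quadratic cost; the notation ($x$, $z$, $x_a$, $A_a$, $B_a$, $Q$, $R$) and the specific structure are illustrative assumptions, not necessarily the exact formulation used in the thesis.

```latex
% Minimal sketch (assumed notation) of an integral-augmented tracking
% formulation with an undiscounted quadratic cost.
\begin{align}
  \dot{x} &= A x + B u, \qquad y = C x               && \text{(plant dynamics)} \\
  \dot{z} &= r - y                                    && \text{(integrated tracking error)} \\
  \dot{x}_a &= \underbrace{\begin{bmatrix} A & 0 \\ -C & 0 \end{bmatrix}}_{A_a} x_a
             + \underbrace{\begin{bmatrix} B \\ 0 \end{bmatrix}}_{B_a} u
             + \begin{bmatrix} 0 \\ I \end{bmatrix} r,
             \qquad x_a = \begin{bmatrix} x \\ z \end{bmatrix}
                                                      && \text{(augmented dynamics)} \\
  J(u) &= \int_0^{\infty} \left( x_a^{\top} Q\, x_a + u^{\top} R\, u \right) \mathrm{d}t,
          \qquad Q \succeq 0, \; R \succ 0            && \text{(undiscounted cost)}
\end{align}
```

An RL (adaptive/approximate dynamic programming) scheme would then learn the optimal feedback $u = -K x_a$ online from data; because $z$ integrates the tracking error, a stabilising optimal policy drives the steady-state error $r - y$ to zero without resorting to a discounted cost.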
Metadata
Supervisors: Dodd, Tony; Mills, Andrew R; Konstantopoulos, George
Keywords: Adaptive control; Intelligent systems; Reinforcement learning; Approximate dynamic control; Optimal control; Tracking control; Propulsion and power systems; Power management systems
Awarding institution: University of Sheffield
Academic Units: The University of Sheffield > Faculty of Engineering (Sheffield) > Automatic Control and Systems Engineering (Sheffield)
Identification Number/EthosID: uk.bl.ethos.806873
Depositing User: Mr. Ibrahim E Sanusi
Date Deposited: 10 Jun 2020 10:21
Last Modified: 01 Jul 2021 09:53
Open Archives Initiative ID (OAI ID): oai:etheses.whiterose.ac.uk:26983
Download
Filename: MyThesis.pdf
Description: PDF
Licence: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 2.5 License