
Deep Gaussian Processes and Variational Propagation of Uncertainty

Damianou, Andreas (2015) Deep Gaussian Processes and Variational Propagation of Uncertainty. PhD thesis, University of Sheffield.

Available under License Creative Commons Attribution-Noncommercial-No Derivative Works 2.0 UK: England & Wales.



Uncertainty propagation across components of complex probabilistic models is vital for improving regularisation. Unfortunately, for many interesting models based on non-linear Gaussian processes (GPs), straightforward propagation of uncertainty is computationally and mathematically intractable. This thesis is concerned with solving this problem through developing novel variational inference approaches. From a modelling perspective, a key contribution of the thesis is the development of deep Gaussian processes (deep GPs). Deep GPs generalise several interesting GP-based models and, hence, motivate the development of uncertainty propagation techniques. In a deep GP, each layer is modelled as the output of a multivariate GP, whose inputs are governed by another GP. The resulting model is no longer a GP but, instead, can learn much more complex interactions between data. In contrast to other deep models, all the uncertainty in parameters and latent variables is marginalised out, and both supervised and unsupervised learning are handled. Two important special cases of a deep GP can equivalently be seen as its building components and, historically, were developed as such. Firstly, the variational GP-LVM is concerned with propagating uncertainty in Gaussian process latent variable models. Any observed inputs (e.g. temporal) can also be used to correlate the latent space posteriors. Secondly, this thesis develops manifold relevance determination (MRD), which considers a common latent space for multiple views. An adapted variational framework allows for strong model regularisation, so that rich latent space representations can be learned. The developed models are also equipped with algorithms that use uncertainty propagation to maximise the information communicated between their different stages, improving learning when partially observed values are present. The developed methods are demonstrated in experiments with simulated and real data.
The results show that the developed variational methodologies improve practical applicability by enabling automatic capacity control in the models, even when data are scarce.
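The layered construction described in the abstract (each layer is the output of a GP whose inputs are themselves governed by another GP) can be illustrated by drawing a sample from a two-layer deep GP prior. This is only a minimal numpy sketch of the generative structure, not the variational inference machinery developed in the thesis; the RBF kernel, lengthscales, and jitter value are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(x, z, variance=1.0, lengthscale=1.0):
    """Squared-exponential (RBF) covariance between 1-D input arrays x and z."""
    sq_dists = (x[:, None] - z[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dists / lengthscale ** 2)

def sample_gp_layer(inputs, jitter=1e-6, rng=None):
    """Draw one function sample from a zero-mean GP evaluated at `inputs`."""
    rng = np.random.default_rng() if rng is None else rng
    K = rbf_kernel(inputs, inputs) + jitter * np.eye(len(inputs))
    L = np.linalg.cholesky(K)  # jitter keeps the Cholesky factorisation stable
    return L @ rng.standard_normal(len(inputs))

rng = np.random.default_rng(42)
x = np.linspace(-3.0, 3.0, 50)       # observed inputs
h = sample_gp_layer(x, rng=rng)      # hidden layer: GP draw indexed by x
y = sample_gp_layer(h, rng=rng)      # output layer: GP draw indexed by the hidden layer
```

Composing the two draws gives a function `y(x)` that is no longer a GP, which is exactly why the abstract notes that the resulting model can capture more complex interactions than a single GP.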

Item Type: Thesis (PhD)
Additional Information: Research was carried out in the department of Neuroscience and the department of Computer Science (double affiliation), University of Sheffield.
Academic Units: The University of Sheffield > Faculty of Medicine, Dentistry and Health (Sheffield)
Identification Number/EthosID: uk.bl.ethos.665042
Depositing User: Dr Andreas Damianou
Date Deposited: 28 Aug 2015 14:27
Last Modified: 03 Oct 2016 12:18
URI: http://etheses.whiterose.ac.uk/id/eprint/9968

