Stone, Rebecca Shio-Ming ORCID: https://orcid.org/0000-0001-6944-8725 (2024) Visual bias mitigation driven by Bayesian uncertainties. PhD thesis, University of Leeds.
Abstract
Today, intelligent models are used in applications across society, from recidivism prediction and identity verification to information retrieval and a wide range of healthcare tasks, from polyp segmentation to cancer grading. The vast majority of these models are variants of deep neural networks trained on large real-world datasets. These datasets reflect our historical and societal biases; in turn, AI learns these correlations during training, resulting in predictors and decision-makers that exhibit racism, ableism, sexism and other forms of prejudice.
Visual data contains many potential biases given its richness of features, and developing fair vision models remains a challenging, open problem. In particular, the sub-problem of implicit mitigation, that is, mitigation when the sources of bias in the training or testing data are unknown, is relevant to many use cases where metadata for datasets is difficult to collect. This work contributes to this domain by leveraging the observation that bias-conflicting samples, i.e. input samples that do not align with the majority correlations, tend to have higher uncertainties under the Bayesian paradigm. By using Bayesian deep neural networks, we can maintain the performance of a deterministic network while gaining access to principled uncertainty estimates. Model (epistemic) uncertainties in particular provide direct insight into the training data distribution and bias landscape, as bias-conflicting samples are under-represented.
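The abstract does not specify how the posterior or its uncertainties are approximated; purely as an illustrative sketch, assuming a Monte Carlo dropout approximation of a Bayesian neural network (an assumption, not the method used in the thesis), per-sample epistemic uncertainty can be estimated from the spread of predictions across stochastic forward passes:

```python
import torch
import torch.nn.functional as F

def epistemic_uncertainty(model, x, n_samples=20):
    """Estimate per-sample epistemic uncertainty by Monte Carlo sampling.

    Assumes `model` contains dropout layers and is treated as an MC-dropout
    approximation of a Bayesian neural network: keeping dropout active at
    inference time draws approximate posterior samples. Returns the mean
    predictive distribution and a scalar uncertainty per input (variance of
    the class probabilities across draws, averaged over classes).
    """
    model.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )                                              # (n_samples, batch, classes)
    mean_probs = probs.mean(dim=0)                     # (batch, classes)
    uncertainty = probs.var(dim=0).mean(dim=-1)        # (batch,)
    return mean_probs, uncertainty
```

Under this view, bias-conflicting samples, being under-represented in the training data, should tend to receive larger values of `uncertainty` than bias-aligned samples.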
We explore two novel strategies driven by the uncertainties of a Bayesian neural network. The first dynamically re-weights samples as a function of their predictive uncertainty estimates during training, encouraging the model to focus on the more difficult bias-conflicting samples. The second fine-tunes the posterior estimate of a converged Bayesian neural network, using the uncertainties to adjust the estimates in favour of fairer predictions. We demonstrate both methods for implicit visual bias mitigation on benchmark classification tasks and then extend them to a medical image segmentation problem with known generalisability issues. Our research, while far from a solution to the bias problem, shows potential for improving model fairness and generalisability and contributes to the literature in this challenging domain.
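The exact weighting scheme of the first strategy is defined in the thesis itself; as a minimal sketch of the general idea, assuming per-sample epistemic uncertainty estimates are available (e.g. from a function such as the hypothetical `epistemic_uncertainty` above), one simple choice is to scale each sample's loss by a monotonically increasing function of its uncertainty:

```python
import torch
import torch.nn.functional as F

def uncertainty_weighted_loss(logits, targets, uncertainties, alpha=1.0):
    """Illustrative uncertainty-driven re-weighting (not the thesis's exact rule).

    Each sample's cross-entropy loss is scaled by a factor that grows with its
    epistemic uncertainty, so presumed bias-conflicting (high-uncertainty)
    samples contribute more to the gradient. `alpha` is a hypothetical strength
    parameter controlling how aggressively the re-weighting is applied.
    """
    per_sample = F.cross_entropy(logits, targets, reduction="none")   # (batch,)
    weights = 1.0 + alpha * (uncertainties / (uncertainties.mean() + 1e-8))
    return (weights.detach() * per_sample).mean()
```

In a training loop this loss would simply replace the standard cross-entropy term, with the weights recomputed each batch so the emphasis on high-uncertainty samples adapts as the model converges.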
Metadata
Supervisors: Bulpitt, Andrew and Hogg, David
Keywords: bias, bias mitigation, fairness, computer vision, deep learning, responsible AI
Awarding institution: University of Leeds
Academic Units: The University of Leeds > Faculty of Engineering (Leeds) > School of Computing (Leeds)
Depositing User: Mrs Rebecca Shio-Ming Stone
Date Deposited: 23 Oct 2024 09:04
Last Modified: 23 Oct 2024 09:04
Open Archives Initiative ID (OAI ID): oai:etheses.whiterose.ac.uk:35707
Download
Final eThesis - complete (pdf)
Filename: PhD_Thesis_RStone_POSTCORR_FINAL.pdf
Licence: This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License