Longman, Fodio S (2019) Advanced Methods for Fusion of Remote Sensing Images and Oil Spill Segmentation. PhD thesis, University of Sheffield.
Abstract
Remote sensing systems on board satellites (spaceborne) or aircraft (airborne) continue to play a significant role in disaster management and mitigation, including oil spill detection, due to their ability to obtain wide area coverage images and other data from a distance. A single remotely sensed image can cover hundreds of kilometres of the Earth's surface, enabling wider monitoring and change detection observation. When an oil spill occurs, remote sensing systems equipped with different sensors covering the spectral bands of the electromagnetic spectrum are deployed to obtain images for damage assessment and scientific analysis, or to ascertain the spill location, the amount of oil spilled and the type of oil, supporting efficient planning and management and the identification of illegal ballast dumping for legal action.
In the design of such remote sensing systems, there are inevitable trade-offs due to technological limitations, resulting in compromises between spatial and spectral resolution. Panchromatic sensors, for example, acquire images at high spatial resolution but low spectral resolution, while hyperspectral sensors acquire images of high spectral resolution but low spatial resolution. Additionally, optical systems depend on an external energy source to obtain images, while other systems can acquire data irrespective of weather conditions. By combining data originating from different sources, scientists, analysts and planners can obtain images of higher quality than the individual images and can take advantage of the complementary information embedded in the diverse data acquired.
This thesis presents a new framework for oil spill detection that combines data originating from different imaging sensors of remote sensing systems. Firstly, the new framework for oil spill segmentation utilises image fusion to improve image quality and to take advantage of the complementary information available in the different resolutions of SAR images. The framework adopts the wavelet image fusion technique, in which the individual images are converted from the spatial to the frequency domain and decomposed into approximation and detail coefficients, allowing image properties to be transferred using a maximum fusion rule. While this method improves the spatial resolution of images and retains colour information, it is observed that the scale of decomposition needs to be chosen carefully, since small scales create mosaic effects and large scale values cause loss of colour content, making the method unsuitable for images with different spectral channels. To address the multi-modality of images, a Gaussian Process (GP) regression approach is utilised with a custom covariance that learns the geometry and intensity of pixels and also handles the change of support problem inherent in multi-resolution images. Established performance metrics from the literature are used to evaluate the quality of the fused images against reference data.

Additionally, a qualitative and quantitative review of pansharpening methods for hyperspectral images is carried out specifically for the oil spill detection application. The pansharpened results are compared in terms of unmixing performance with a reference hyperspectral image. This review can help researchers interested in this field determine which methods are best for pansharpening and unmixing, and answer the question of whether pansharpening improves unmixing results. It can be extended to other applications, including weather forecasting and spectral analysis.

Lastly, a new covariance kernel is developed to solve image fusion problems in multi-band images by treating each spatial and spectral channel as a separate input to the Gaussian process, allowing different spatial and spectral pixels of the images to be learned and combined. The developed approach allows the transfer of information between different image modalities, enabling the local structure of the high spatial resolution image, which forms the base of the estimated image, to be recovered. The developed fusion approaches achieve compelling enhancement when compared with the state of the art. Furthermore, segmentation is performed on the fused and reference images, with the developed fused image picking up more objects than the other methods. This can be attributed to the ability of the approach to sharpen the resolution of the spectral channels so that they support the pixel coordinates of the high spatial resolution image, improving the edges in the image.
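The following is a minimal sketch of the wavelet maximum-fusion rule described in the abstract, not the thesis implementation itself: two co-registered single-band images are decomposed with PyWavelets, and at each position the coefficient with the larger magnitude is retained. The wavelet family (`db2`) and decomposition level are illustrative assumptions; as the abstract notes, the level must be chosen with care.

```python
import numpy as np
import pywt


def wavelet_max_fusion(img_a, img_b, wavelet="db2", level=2):
    """Fuse two co-registered single-band images with a maximum fusion rule.

    Both images are decomposed into approximation and detail coefficients;
    at each position the coefficient with the larger magnitude is kept,
    and the fused image is reconstructed by the inverse transform.
    """
    coeffs_a = pywt.wavedec2(img_a, wavelet, level=level)
    coeffs_b = pywt.wavedec2(img_b, wavelet, level=level)

    fused = []
    for ca, cb in zip(coeffs_a, coeffs_b):
        if isinstance(ca, tuple):  # detail coefficients (cH, cV, cD)
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(ca, cb)))
        else:  # approximation coefficients (other rules, e.g. averaging, are possible)
            fused.append(np.where(np.abs(ca) >= np.abs(cb), ca, cb))
    return pywt.waverec2(fused, wavelet)
```

For a multi-band image, the same rule would be applied band by band on co-registered channels.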
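As a simplified illustration of GP-based resolution enhancement, the sketch below regresses pixel intensities of a low spatial resolution band onto a finer pixel grid. A standard RBF kernel is used here purely as a stand-in for the custom covariance developed in the thesis, which additionally models geometry, intensity and the change of support between modalities; the `scale` parameter and the single-band setting are assumptions, and exact GP inference of this kind is only practical on small image patches.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


def gp_upsample_band(low_res_band, scale=4):
    """Predict intensities of one low-resolution band on a finer pixel grid.

    Low-resolution pixel centre coordinates (expressed in high-resolution
    pixel units) are the training inputs, their intensities the targets;
    the GP then predicts intensities at every high-resolution coordinate.
    """
    h, w = low_res_band.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    X_train = np.column_stack([yy.ravel(), xx.ravel()]) * scale + (scale - 1) / 2.0
    y_train = low_res_band.ravel()

    kernel = RBF(length_scale=float(scale)) + WhiteKernel(noise_level=1e-3)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X_train, y_train)

    hh, ww = h * scale, w * scale
    yy_hi, xx_hi = np.meshgrid(np.arange(hh), np.arange(ww), indexing="ij")
    X_test = np.column_stack([yy_hi.ravel(), xx_hi.ravel()])
    return gp.predict(X_test).reshape(hh, ww)
```

In a fusion setting, the covariance would also couple the spatial and spectral inputs of the different sensors so that structure from the high spatial resolution image informs the estimate, as outlined in the abstract.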
Metadata
| Field | Value |
| --- | --- |
| Supervisors | Mihaylova, Lyudmila and Coca, Daniel |
| Awarding institution | University of Sheffield |
| Academic Units | The University of Sheffield > Faculty of Engineering (Sheffield) > Automatic Control and Systems Engineering (Sheffield) |
| Identification Number/EthosID | uk.bl.ethos.805387 |
| Depositing User | Mr Fodio S Longman |
| Date Deposited | 07 May 2020 16:33 |
| Last Modified | 01 May 2021 09:53 |
| Open Archives Initiative ID (OAI ID) | oai:etheses.whiterose.ac.uk:26645 |
Download
Filename: ThesisPractice2_VF_edited_2.pdf
Licence: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 2.5 License