Wijesinghe, W Okandapola Kankanamalage Isuru Suranga ORCID: https://orcid.org/0000-0003-1414-4825 (2024) Intelligent image-driven motion modelling for adaptive radiotherapy. PhD thesis, University of Leeds.
Abstract
Internal anatomical motion (e.g. respiration-induced motion) confounds the precise delivery of radiation to target volumes during external beam radiotherapy. Precision is, however, critical to ensure prescribed radiation doses are delivered to the target (tumour) while surrounding healthy tissues are preserved from damage. If the motion itself can be accurately estimated, the treatment plan and/or delivery can be adapted to compensate.
Current methods for motion estimation rely on invasive implanted fiducial markers, on imperfect surrogate models built, for example, from external optical measurements or breathing traces, or on expensive and scarce systems such as in-treatment MRI. Their respective limitations of invasiveness, modelling error, and cost underscore the need for more efficient and accessible approaches to estimating motion during radiation treatment. This research, in contrast, aims to achieve accurate motion prediction using only relatively low-quality, but almost universally available, planar X-ray imaging. This is challenging because such images have poor soft-tissue contrast and provide only 2D projections through the anatomy. We hypothesise, however, that with strong priors in the form of learnt models of anatomical motion and image appearance, these images can provide sufficient information for accurate 3D motion reconstruction.
We initially proposed an end-to-end graph neural network (GNN) architecture that learns mesh regression from a patient-specific template organ geometry and deep features extracted from kV images at arbitrary projection angles. This approach, however, proved time-consuming to train. As an alternative, we proposed a second framework based on a self-attention convolutional neural network (CNN) architecture. This model learns mappings between deep, angle-dependent semantic X-ray image features and the corresponding encoded latent representations of deformed point clouds of the patient's organ geometry.
Both frameworks underwent quantitative testing on synthetic respiratory motion scenarios and qualitative assessment on in-treatment images obtained over a full scan series for liver cancer patients. For the first framework, the overall mean prediction errors on synthetic motion test datasets were 0.16±0.13 mm, 0.18±0.19 mm, 0.22±0.34 mm, and 0.12±0.11 mm, with mean peak prediction errors of 1.39 mm, 1.99 mm, 3.29 mm, and 1.16 mm. As for the second framework, the overall mean prediction errors on synthetic motion test datasets were 0.065±0.04 mm, 0.088±0.06 mm, 0.084±0.04 mm, and 0.059±0.04 mm, with mean peak prediction errors of 0.29 mm, 0.39 mm, 0.30 mm, and 0.25 mm.
Metadata
| Supervisors: | Taylor, Zeike; Nix, Michael; Gooya, Ali |
| --- | --- |
| Keywords: | Motion Modelling; Adaptive Radiotherapy; X-ray image; Convolutional Neural Network; Graph Neural Network; Self-attention; Synthetic Data |
| Awarding institution: | University of Leeds |
| Academic Units: | The University of Leeds > Faculty of Engineering (Leeds) > School of Mechanical Engineering (Leeds) > Institute of Medical and Biological Engineering (iMBE) (Leeds) |
| Depositing User: | Mr W Okandapola Kankanamalage Isuru Suranga Wijesinghe |
| Date Deposited: | 14 May 2024 09:13 |
| Last Modified: | 14 May 2024 09:13 |
| Open Archives Initiative ID (OAI ID): | oai:etheses.whiterose.ac.uk:34856 |
Download
Final eThesis - complete (pdf)
Filename: PhD_Thesis_Isuru_Wijesinghe_201388721.pdf
Licence: This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.