Avendaño Muñoz, Luis Alfredo
ORCID: 0000-0002-6631-7591
(2025)
Impact of Local and Spatial Redundancy on Prunability of Convolutional Neural Networks and their Training Dynamics.
PhD thesis, University of Leeds.
Abstract
Convolutional Neural Networks (CNNs) are widely used in computer vision, but despite the parameter efficiency afforded by weight sharing, compressing CNNs remains a significant challenge. Among various efficiency-improving techniques, pruning has been studied extensively because it enables model compression with a controlled reduction in accuracy. This thesis explores pruning from three perspectives: the local neighbourhood of a solution, the receptive field, and the interplay between training dynamics and prunability.
Exploring the neighbourhood of a solution revealed that applying noise to dense CNNs before pruning uncovers better pruned models than pruning the original model alone. These improved pruned models are found at extreme pruning rates (greater than 0.8), and their improvements persist both with and without further fine-tuning.
Results show that adding noise to a CNN mitigates a detrimental phenomenon in which the variance discrepancy among network features grows dramatically during pruning, disrupting the network's internal inference dynamics.
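The noise-then-prune idea described above can be sketched in miniature as follows. This is an illustrative toy with a flat weight list, not the thesis's actual procedure: the function names, the use of global magnitude pruning, and the candidate-selection loop are all assumptions made for clarity.

```python
import random

def magnitude_prune(weights, rate):
    """Zero out the smallest-magnitude fraction `rate` of the weights."""
    k = int(len(weights) * rate)
    threshold = sorted(abs(w) for w in weights)[k - 1] if k > 0 else float("-inf")
    return [0.0 if abs(w) <= threshold else w for w in weights]

def noise_then_prune(weights, rate, sigma, trials, score):
    """Perturb the dense weights with Gaussian noise, prune each perturbed
    copy, and keep the candidate with the best score (e.g. validation
    accuracy); the unperturbed pruned model serves as the baseline."""
    best = magnitude_prune(weights, rate)
    best_score = score(best)
    for _ in range(trials):
        noisy = [w + random.gauss(0.0, sigma) for w in weights]
        candidate = magnitude_prune(noisy, rate)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best
```

In a real setting the score function would evaluate held-out accuracy of the pruned network; here it is just a callable parameter.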
Next, this thesis investigates how receptive field size (i.e., the area of the input image that a neuron "attends" to) affects the pruning performance of CNNs trained for visual classification tasks. The findings reveal that pruning's impact on accuracy varies systematically with receptive field size. Surprisingly, larger receptive fields tend to mitigate pruning-induced accuracy loss, and this effect often persists even after fine-tuning. Although dense networks achieve peak accuracy with smaller receptive fields, the optimal receptive field for pruned networks is typically larger. This phenomenon can be attributed to highly redundant deep layers that exhibit low feature saturation, so pruning them has minimal impact on classification accuracy.
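For readers unfamiliar with the term, the receptive field of a stacked convolutional network can be computed with the standard recurrence r_l = r_{l-1} + (k_l - 1) * j_{l-1}, j_l = j_{l-1} * s_l, where j is the cumulative stride ("jump") in input pixels. This helper is illustrative background, not code from the thesis:

```python
def receptive_field(layers):
    """Receptive field (in input pixels) of a stack of conv/pool layers,
    each given as (kernel_size, stride), using the standard recurrence."""
    r, j = 1, 1  # receptive field and cumulative stride, starting at the input
    for k, s in layers:
        r += (k - 1) * j
        j *= s
    return r
```

For example, two stride-1 3x3 convolutions give a 5x5 receptive field, while putting a stride of 2 on the first layer widens it to 7x7.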
Finally, this thesis explores how both a model's receptive field and the choice of optimiser influence the geometry of the loss landscape and, ultimately, the extent to which the model can be effectively pruned. Results show that the ruggedness of the loss landscape is directly related to the receptive field, making landscapes associated with larger receptive fields more challenging for Stochastic Gradient Descent (SGD) to navigate.
Models trained with second-order optimisers are significantly less prunable than those trained with first-order or flatness-promoting optimisation methods. Furthermore, it is possible to manipulate the loss landscape (e.g., by increasing model width) so that SGD can partially circumvent the limitations imposed by the ruggedness of larger receptive fields. Additionally, the results suggest that the reduced prunability of models trained with second-order optimisers is attributable to the sharpness and filter diversity of the learned solutions.
Lastly, results show that increasing receptive fields recovers the pruned accuracy of models trained with second-order optimisers.
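The notion of solution sharpness invoked above can be made concrete with a crude probe: the mean loss increase under small random weight perturbations (flatter minima show smaller increases). This is a generic illustration of the concept, not the measure used in the thesis; the function name and sampling scheme are assumptions.

```python
import random

def sharpness(loss, weights, radius=0.05, samples=20, seed=0):
    """Estimate local sharpness as the mean increase in `loss` under
    uniform random perturbations of `weights` within `radius`."""
    rng = random.Random(seed)
    base = loss(weights)
    total = 0.0
    for _ in range(samples):
        perturbed = [w + rng.uniform(-radius, radius) for w in weights]
        total += loss(perturbed) - base
    return total / samples
```

On a toy quadratic loss, scaling the curvature up scales this estimate proportionally, which is the behaviour one wants from a sharpness proxy.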
Metadata
| Supervisors: | Omidvar, Nabi and Cohen, Netta |
|---|---|
| Keywords: | Neural Network Pruning, Convolutional Neural Networks, Second-Order Optimisers, Optimisation and Pruning, Receptive Field |
| Awarding institution: | University of Leeds |
| Academic Units: | The University of Leeds > Faculty of Engineering (Leeds) |
| Academic unit: | School of Computer Science |
| Date Deposited: | 28 Apr 2026 09:41 |
| Last Modified: | 28 Apr 2026 09:41 |
| Open Archives Initiative ID (OAI ID): | oai:etheses.whiterose.ac.uk:38497 |
Download
Final eThesis - complete (pdf)
Filename: Avendano_Munoz_LA_Computer_Science_PhD_2025.pdf
Licence: This work is licensed under a Creative Commons Attribution 4.0 International License