Mohammed, Abrar (2024) Generating video narratives to support learning. PhD thesis, University of Leeds.
Abstract
Videos have become popular learning resources, available through various online platforms such as YouTube, Khan Academy and MOOCs. However, using videos for learning can be challenging, especially when learners are new to the domain: they may not know which key points to notice, can lose concentration, and can become frustrated. Effective ways to address these challenges include providing video annotations, summarising the main points, and linking different video segments. Existing approaches to annotation and summarisation are either manual, and do not scale, or automatic, but do not explicitly relate to the main domain concepts. No existing approach provides an automated way to segment and characterise videos by relating them to domain concepts and linking segments from different videos to support learning key domain areas. This thesis addresses this shortcoming by proposing an automated way to generate video narratives that group segments from different videos to support learning key domain concepts. Our approach is knowledge-driven and presumes that the domain is computationally represented via a taxonomy.

Our first contribution is the Video Segmentation and Characterisation for Learning (VISC-L) framework, which uses video transcripts and a domain taxonomy to characterise video segments. Characterising video segments with domain-related topics and concepts, generated by combining Semantic Tagging and Topic Classifier characterisation, has been recognised by the Technology Enhanced Learning community as an effective way of linking video content to its domain. Our second contribution is the design of the Video Narratives for Learning (VIN-L) framework, which links segments from different videos following a pedagogical theory called 'Subsumption Theory for meaningful learning'. Four types of video narratives, as defined in the Subsumption Theory, are generated: Combinational, Correlative, Derivative, and Super-Ordinate. These narrative types represent how information is linked in the learner's cognitive structure (for full details about the narrative types, see Section 3.3.3). VIN-L is a novel, generic framework that contributes to the area of Artificial Intelligence in Education by linking domain-related parts of different videos using a pedagogical theory; to our knowledge, narratives of this kind have not been generated before. Both VISC-L and VIN-L are implemented in two domains for learning soft skills: Giving Pitch Presentations and Health-Related Quality of Life Awareness. Soft skill domains (see Section 3.7) are important but not well structured, and the main learning resources are available as online videos such as tutorials, interviews, and stories. Learners in these domains need to find learning materials among the huge amount of available online videos, and need to be able to identify and remember different domain areas in these videos; this can create learning barriers when learners are new to the domain, who may not be able to learn these domains systematically by themselves.

The resulting video narratives are evaluated with domain experts (11 experts for the first domain and 6 for the second) and with advanced learners in the presentation skills domain; we could not evaluate VIN-L with learners in the second domain because of time and budget constraints, so this is left as future work (see Section 7.3). In the first domain (Giving Pitch Presentations), VIN-L was evaluated with 217 workers recruited via Amazon Mechanical Turk; they had varying knowledge of the domain (80% had domain-related experience), which is why we refer to them as advanced learners. The Quality and Perceived Usefulness of the video narratives were assessed. The outcomes of the evaluation studies are presented as average values (Sections 5.4 and 5.5 for the experts' and learners' outcomes in the first domain, respectively, and Section 6.6 for the experts' outcomes in the second domain); for the underlying data, see Appendices L (first domain) and O (second domain). All the evaluation studies conducted in this thesis are archived in our GitHub account 1. The responses across all video narrative types show that the overall Quality of the narratives is good, and indicate some areas for improving the supporting text that clarifies the domain connections. Considering the Perceived Usefulness, Learning Effect and Cognitive Workload evaluation results, all video narrative types show potential to be useful for supporting learning.
Metadata
Supervisors: Dimitrova, Vania and Hogg, David
Keywords: Video-based learning, Video Segmentation, Video Characterisation, Video Narratives, Ontology, Subsumption theory
Awarding institution: University of Leeds
Academic Units: The University of Leeds > Faculty of Engineering (Leeds) > School of Computing (Leeds)
Depositing User: Mrs Abrar Mohammed
Date Deposited: 25 Sep 2024 09:04
Last Modified: 25 Sep 2024 09:04
Open Archives Initiative ID (OAI ID): oai:etheses.whiterose.ac.uk:35205
Downloads
Final eThesis - complete (pdf)
Filename: Generating Video Narratives to Support Learning.pdf
Licence:
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Final eThesis - complete (pdf)
Filename: Abrar_Mohammed_PhD_Thesis__2024_First page.pdf
Licence:
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License