Wang, Xiaoman ORCID: https://orcid.org/0000-0001-5863-5517 (2024) Developing An Automated Graded Assessment System for English/Chinese Interpreting. PhD thesis, University of Leeds.
Abstract
Assessment of interpreting quality is an important aspect of interpreting practice and interpreter training, as well as a meaningful topic for researchers in interpreting studies. Traditional methods of evaluating interpreting quality are often hindered by their time-consuming and labour-intensive nature, involving manual processes prone to inconsistency and subjective bias. This project addresses these challenges by automating the assessment process through the development of a machine learning model.

The primary objective is to devise a machine learning model capable of predicting the quality of consecutive interpreting performances on a scale of 1 to 5. The model covers key quality features: information fidelity, target-language quality, and delivery, which includes aspects such as fluency and pronunciation.

To automate the assessment of information fidelity, the project employs neural network models to vectorise transcribed sentences from both the source speeches and their interpretations; the similarity between source and interpreted sentences then serves as a measure of information fidelity. To automate the assessment of delivery, with a focus on fluency, the project extracts ten parameters, seven of which are newly identified under the theoretical framework of spoken-language studies and through a comprehensive descriptive analysis of interpreting data. Pronunciation, another critical component of delivery, is quantitatively analysed using the Confidence Measure within a regression-analysis framework. Finally, the project assesses target-language quality by examining editor ratios: transcribed interpretations are compared against revised sentences whose language quality has been improved using Large Language Models (LLMs).
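As a rough illustration only (the thesis's actual implementation is not given here), two of the metrics described above can be sketched in Python: a cosine similarity between sentence vectors standing in for the fidelity measure, and an edit ratio comparing a transcription against its revised version. The toy vectors and example sentences are hypothetical; a real system would obtain vectors from a neural sentence encoder.

```python
import difflib
import math


def cosine_similarity(u, v):
    # Similarity between two sentence vectors; 1.0 means identical direction.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0


def edit_ratio(original: str, revised: str) -> float:
    # Share of the interpretation changed by the revision:
    # 0.0 = no edits needed; values near 1.0 = heavily revised.
    return 1.0 - difflib.SequenceMatcher(None, original, revised).ratio()


# Toy vectors standing in for neural sentence embeddings of a
# source sentence and its interpretation (illustrative values only).
src = [0.2, 0.7, 0.1]
tgt = [0.25, 0.65, 0.15]
print(round(cosine_similarity(src, tgt), 3))

# Edit ratio between a transcribed interpretation and a revised version
# (here revised by hand; the thesis uses LLM-revised sentences).
print(round(edit_ratio("he go to market yesterday",
                       "he went to the market yesterday"), 3))
```

A higher cosine similarity suggests more source information was preserved, while a lower edit ratio suggests the target-language output needed fewer corrections.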
The integration of these features into the machine learning model yields an accuracy of approximately 64% in predicting interpreting performance before oversampling. Notably, this accuracy improves to 87% after oversampling techniques are applied. This approach to automated interpreting assessment serves as a preliminary exploration in the field, offering a possible empirical solution to the challenges of traditional manual assessment methods.
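The oversampling step can be illustrated with a minimal random-oversampling sketch. The abstract does not state which oversampling technique was used (methods such as SMOTE are also common), so the approach, feature vectors, and label distribution below are all assumptions for illustration.

```python
import random
from collections import Counter


def random_oversample(samples, labels, seed=42):
    # Duplicate minority-class samples at random until every score band
    # matches the size of the largest class, balancing the training set.
    rng = random.Random(seed)
    by_label = {}
    for s, y in zip(samples, labels):
        by_label.setdefault(y, []).append(s)
    target = max(len(group) for group in by_label.values())
    out_samples, out_labels = [], []
    for y, group in by_label.items():
        extra = [rng.choice(group) for _ in range(target - len(group))]
        for s in group + extra:
            out_samples.append(s)
            out_labels.append(y)
    return out_samples, out_labels


# Hypothetical feature vectors with an imbalanced 1-5 score distribution,
# mimicking a dataset where mid-range scores dominate.
X = [[float(i)] for i in range(10)]
y = [3, 3, 3, 3, 3, 4, 4, 2, 5, 1]
X_bal, y_bal = random_oversample(X, y)
print(Counter(y_bal))  # every score band now has 5 samples
```

Balancing the classes this way prevents the classifier from defaulting to the majority score band, which is one plausible reason accuracy rises after oversampling.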
Metadata
Supervisors: Wang, Binhua and Sharoff, Serge
Keywords: automated assessment; machine learning; consecutive interpreting; interpreting quality
Awarding institution: University of Leeds
Academic Units: The University of Leeds > Faculty of Arts, Humanities and Cultures (Leeds) > School of Languages Cultures and Societies (Leeds)
Depositing User: Dr Xiaoman Wang
Date Deposited: 16 Jul 2024 10:08
Last Modified: 16 Jul 2024 10:08
Open Archives Initiative ID (OAI ID): oai:etheses.whiterose.ac.uk:35214
Download
Final eThesis - complete (pdf)
Embargoed until: 1 August 2027
Filename: Thesis for deposit_Xiaoman.pdf