Thompson, Benjamin Criveli (1987) A mathematical theory of synchronous concurrent algorithms. PhD thesis, University of Leeds.
Abstract
A synchronous concurrent algorithm is an algorithm that is described as a network of intercommunicating processes or modules whose concurrent actions are synchronised with respect to a global clock. Synchronous algorithms include systolic algorithms; these are algorithms that are well-suited to implementation in VLSI technologies.
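To fix intuitions, a clocked network of this kind can be sketched very roughly in Haskell (a toy example, not taken from the thesis): an array of cells holds one value per module, and at every tick of the global clock all cells update simultaneously from the previous state.

```haskell
-- A toy 1-dimensional synchronous network (illustrative only).
-- Every cell reads its left neighbour's value from the previous clock
-- cycle, so all cells update in lock-step with the global clock.

type State = [Int]                 -- one value per cell (module)

-- One global clock tick: the new state is computed entirely from the
-- old state and the current external input (here, a simple shift).
tick :: Int -> State -> State
tick input cells = input : init cells

-- The state of the network after t clock cycles, given an initial
-- state and a stream of external inputs.
run :: State -> [Int] -> Int -> State
run initial _      0 = initial
run initial inputs t = tick (inputs !! (t - 1)) (run initial inputs (t - 1))
```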
This thesis provides a mathematical theory for the design and analysis of synchronous algorithms. The theory includes the formal specification of synchronous algorithms; techniques for proving the correctness and the performance or time-complexity of synchronous algorithms; and formal accounts of the simulation and top-down design of synchronous algorithms.
The theory is based on the observation that a synchronous algorithm can be specified in a natural way as a simultaneous primitive recursive function over an abstract data type; these functions were first studied by J. V. Tucker and J. I. Zucker. The class of functions is described via a formal syntax and semantics, and this leads to the definition of a functional algorithmic notation called PR. A formal account of synchronous algorithms and their behaviour is achieved by showing that synchronous algorithms can be specified in PR. A formal account of the performance of synchronous algorithms is achieved via a mathematical account of the time taken to evaluate a function defined by simultaneous primitive recursion.
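For orientation only, the general shape of such a scheme (in notation that is merely illustrative, not the thesis's own) is: the network's k modules are specified by k functions f_1, ..., f_k defined by simultaneous primitive recursion on the clock, over an algebra A of data and basic operations.

```latex
% Shape of a simultaneous primitive recursion on a clock T = {0,1,2,...}
% over an algebra A; the g_i and h_i are previously defined functions on A.
\begin{align*}
  f_i(0,   a) &= g_i(a), \\
  f_i(t+1, a) &= h_i\bigl(t, a, f_1(t,a), \dots, f_k(t,a)\bigr),
                 \qquad i = 1, \dots, k.
\end{align*}
```

Here f_i(t, a) can be read as the value held by module i at clock cycle t when the network starts from initial data a.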
A synchronous algorithm, when specified in PR, can be transformed into a program in a language called FPIT. FPIT is a language based on abstract data types and on the multiple or concurrent assignment statement. The transformation from PR to FPIT is phrased as a compiler that is proved correct; compiling the PR-representation of a synchronous algorithm thus yields a provably correct simulation of the algorithm. It is proved that FPIT is just what is needed to implement PR by defining a second compiler, this time from FPIT back into PR, which is again proved correct; thus PR and FPIT are formally computationally equivalent. Furthermore, an autonomous account of the length of computation of FPIT programs is given, and the two compilers are shown to be performance preserving; thus PR and FPIT are computationally equivalent in an especially strong sense.
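The abstract does not give FPIT's syntax, but the role of the multiple assignment can be suggested by a small Haskell sketch (hypothetical names throughout): a pair of simultaneously defined functions is evaluated by a loop whose two registers are updated at once on every step, which is exactly the effect of a multiple assignment x, y := h1(...), h2(...).

```haskell
-- Illustrative only: running a simultaneous primitive recursion (two
-- functions, for simplicity) as an iterative loop in which both
-- "registers" are updated at once, mimicking a multiple assignment.

evalSim :: (a -> (s, s))                       -- base case  (g1, g2)
        -> (Integer -> a -> (s, s) -> (s, s))  -- step       (h1, h2)
        -> Integer -> a -> (s, s)
evalSim base step t a = go 0 (base a)
  where
    go n regs
      | n == t    = regs
      | otherwise = go (n + 1) (step n a regs)  -- both registers at once

-- Example: Fibonacci as a simultaneous recursion over the naturals.
fib :: Integer -> Integer
fib t = fst (evalSim (\_ -> (0, 1)) (\_ _ (x, y) -> (y, x + y)) t ())
```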
The theory involves a formal account of the top-down design of synchronous algorithms that is phrased in terms of correctness- and performance-preserving transformations between synchronous algorithms specified at different levels of data abstraction. A new definition of what it means for one abstract data type to be 'implemented' over another is given. This definition generalises the idea of a computable algebra due to A. I. Mal'cev and M. O. Rabin. It is proved that if one data type D is implementable over another data type D', then there exists a correctness- and performance-preserving compiler mapping high-level PR-programs over D to low-level PR-programs over D'.
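The precise definition is not reproduced in the abstract; the general idea, however, resembles the following Haskell sketch (toy names and a toy instance, assumed purely for illustration): values of the high-level type are represented by values of the low-level type together with an abstraction map, and each high-level operation must be tracked by a low-level one that commutes with that map.

```haskell
-- Illustrative only: the rough shape of implementing one data type
-- over another.  High-level values of type d are represented by
-- low-level values of type d'; each high-level operation must be
-- tracked, up to the abstraction map, by a low-level operation.

data Impl d d' = Impl
  { abstract :: d' -> d    -- abstraction (retrieve) map
  , repr     :: d  -> d'   -- a choice of representative
  }

-- opLo tracks opHi when abstracting after opLo agrees with applying
-- opHi after abstracting.
tracks :: Eq d => Impl d d' -> (d -> d) -> (d' -> d') -> d' -> Bool
tracks impl opHi opLo x = abstract impl (opLo x) == opHi (abstract impl x)

-- Toy instance: natural numbers implemented over little-endian bit lists.
natOverBits :: Impl Integer [Bool]
natOverBits = Impl
  { abstract = foldr (\b n -> (if b then 1 else 0) + 2 * n) 0
  , repr     = toBits
  }
  where
    toBits 0 = []
    toBits n = odd n : toBits (n `div` 2)
```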
The compilers from PR to FPIT and from FPIT to PR are defined explicitly, and our compiler-existence proof is constructive, so this work is the basis of theoretically well-founded software tools for the design and analysis of synchronous algorithms.
Metadata
| Supervisors: | Tucker, J. |
| --- | --- |
| Awarding institution: | University of Leeds |
| Academic Units: | The University of Leeds > Faculty of Engineering (Leeds) > School of Computing (Leeds) |
| Identification Number/EthosID: | uk.bl.ethos.379746 |
| Depositing User: | Ethos Import |
| Date Deposited: | 02 Jul 2010 09:23 |
| Last Modified: | 07 Mar 2014 11:21 |
| Open Archives Initiative ID (OAI ID): | oai:etheses.whiterose.ac.uk:945 |