The chapter discusses variations and optimizations of threaded code, which can pass or receive data according to any calling convention and can carry parameters within the thread itself. A routine's parameters can follow its threaded link, and the routine steps the link pointer past them. For example, a two-parameter routine on the PDP-11 that copies a word from $A$ to $B$ can place the addresses of its operands directly in the thread after its link. Threaded code can also be optimized: when a routine is known to be followed by the same routine every time, the indirect jump through the thread can be replaced by a direct jump to that routine, saving both space and time. In practice, mixing threaded and hard code is attractive provided switching between the two modes is cheap.
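A minimal sketch in C may help make the in-line parameter idea concrete. The cell layout, names, and driver loop below are assumptions for illustration, not the chapter's PDP-11 assembly (where each routine ends with an indirect jump through the link register): the thread is an array of cells, each holding either a routine address or an operand address, and the two-parameter copy routine advances the thread pointer past its operands before control moves on.

```c
#include <stdio.h>

/* Each cell of the thread holds either the address of a service routine
 * or an inline parameter (here, the address of an operand). */
typedef union cell {
    void (*routine)(void);
    int  *operand;
} cell;

static const cell *tp;       /* thread pointer: steps through links and parameters */
static int running = 1;

static int a = 42, b = 0;    /* the word to copy and its destination */

/* Two-parameter routine: the addresses of the source and the destination
 * follow its link in the thread, so it consumes them by advancing tp. */
static void copy_word(void) {
    int *src = (tp++)->operand;
    int *dst = (tp++)->operand;
    *dst = *src;
}

static void halt(void) { running = 0; }

int main(void) {
    const cell thread[] = {
        { .routine = copy_word }, { .operand = &a }, { .operand = &b },
        { .routine = halt },
    };
    tp = thread;
    while (running)
        (tp++)->routine();   /* fetch the next link and transfer to its routine */
    printf("b = %d\n", b);   /* prints "b = 42" after the copy */
    return 0;
}
```

A small driver loop stands in here for the machine-level jump through the thread; the essential point is unchanged: the copy routine finds the addresses of $A$ and $B$ immediately after its own link and steps the thread pointer over them.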
The chapter concludes that threaded code provides an attractive alternative to hard code under certain circumstances, saving space with minimal cost in time. It acknowledges the contributions of several individuals who suggested improvements during the development of the FORTRAN IV compiler for DEC's PDP-11, which generates threaded code. The chapter was received in June 1971 and revised in December 1972.
The chapter presents efficient algorithms for partitioning a graph into connected components, biconnected components, and simple paths. The algorithms run in time and space proportional to the larger of the number of vertices and the number of edges, assuming a random-access computer. Graphs are represented as list structures in which each vertex carries a list of its adjacent vertices. The chapter details the algorithms for finding connected components, biconnected components, and simple paths, proves their correctness, and establishes their time bounds. The algorithms are implemented in Algol W and translated into Algol for publication and testing. Auxiliary subroutines are also described, including procedures for pushing values onto stacks, building the graph structure, and finding connected and biconnected components. The chapter closes with references and implementation details.
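As a rough illustration of the representation and the depth-first search underlying these procedures, the sketch below builds adjacency lists and labels connected components in time proportional to the number of vertices plus edges. It is written in C rather than the chapter's Algol W, the vertex limit and names are assumptions, and the biconnected-component and simple-path procedures (which the chapter builds on the same kind of search with additional bookkeeping) are omitted.

```c
#include <stdio.h>
#include <stdlib.h>

#define MAXV 100                 /* illustrative limit on the number of vertices */

typedef struct edge { int to; struct edge *next; } edge;

static edge *adj[MAXV + 1];      /* adj[v]: list of vertices adjacent to v */
static int   comp[MAXV + 1];     /* comp[v]: component number of v, 0 = unvisited */

/* Undirected edge: add v to u's adjacency list and u to v's. */
static void add_edge(int u, int v) {
    edge *e = malloc(sizeof *e); e->to = v; e->next = adj[u]; adj[u] = e;
    edge *f = malloc(sizeof *f); f->to = u; f->next = adj[v]; adj[v] = f;
}

/* Depth-first search: label every vertex reachable from v with component c.
 * Each vertex and edge is visited once, so the whole pass is O(V + E). */
static void dfs(int v, int c) {
    comp[v] = c;
    for (edge *e = adj[v]; e != NULL; e = e->next)
        if (comp[e->to] == 0)
            dfs(e->to, c);
}

int main(void) {
    int n = 6;                               /* vertices 1..6; vertex 6 is isolated */
    add_edge(1, 2); add_edge(2, 3); add_edge(4, 5);

    int ncomp = 0;
    for (int v = 1; v <= n; v++)
        if (comp[v] == 0)
            dfs(v, ++ncomp);                 /* each unvisited vertex starts a new component */

    for (int v = 1; v <= n; v++)
        printf("vertex %d is in component %d\n", v, comp[v]);
    return 0;
}
```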