next up previous
Next: Searching and Sorting Up: TECHNICAL OUTLINE Previous: Re-Structuring, Task Parallelism and

Memory Hierarchies

In reality, COBOL files and tables are too large for a single memory, so data required by one processor may reside on disk or in a remote memory in a parallel system. One way of handling memory hierarchies is latency tolerance: hide latency by overlapping communication with computation, with pre-fetching as the basic technique. For COBOL, if required data is held in a remote memory, pre-fetching can be applied directly; the more interesting and widely applicable case is to generalise this approach to handle disk access.

The following generic COBOL loop is ubiquitous: it reads every record in a salary file, updates each record and writes it back to the file. Loops of this style are performance-bound by the disk reads and writes:

PERFORM UNTIL EOF
    READ SALARY-FILE
        AT END
            SET EOF TO TRUE
        NOT AT END
            PERFORM PROCESS-SALARY-REC ...
            REWRITE SALARY-REC
    END-READ
END-PERFORM

Adding pre-reading, this becomes:

READ SALARY-FILE INTO SALARY-REC (1)
    AT END ...
END-READ
PERFORM VARYING I FROM 2 BY 1 UNTIL EOF
    READ SALARY-FILE INTO SALARY-REC (I)
        AT END ...
        NOT AT END
            PERFORM PROCESS-SALARY-REC (I - 1)
            REWRITE SALARY-REC (I - 1)
    END-READ
END-PERFORM
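The overlap that the pre-read buys can be made explicit outside COBOL. The following Python sketch (the function and record names are illustrative, not from the original) uses a background thread with a one-record lookahead buffer, so the read of record i is in flight while record i-1 is processed and written:

```python
import queue
import threading

def prefetch_pipeline(read_record, process_record, write_record):
    """Loop-peeled pipeline: a background thread pre-reads record i
    while the main thread processes and writes record i-1."""
    buf = queue.Queue(maxsize=1)          # one-record lookahead buffer

    def reader():
        while True:
            rec = read_record()           # may block on disk/remote-memory latency
            buf.put(rec)
            if rec is None:               # None plays the role of AT END
                return

    threading.Thread(target=reader, daemon=True).start()

    rec = buf.get()                       # the peeled first READ
    while rec is not None:
        write_record(process_record(rec)) # overlaps with the reader's next READ
        rec = buf.get()
```

As in the COBOL version, only one extra record of buffering is needed: the producer blocks as soon as it is one record ahead of the consumer.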

This loop peeling is similar in effect to the software pipelining used in processors to reduce cache-access latency. When disk I/O is considered, techniques from the parallel database community also apply, notably the shared-disk and shared-nothing models (DeWitt and Gray, 1992; Miller et al., 1995). Latency-avoidance techniques, such as exploiting temporal and spatial locality, can be used as well.
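The spatial-locality idea can be sketched as blocked reading: fetch many consecutive records per disk access instead of one, so that neighbouring records amortise a single high-latency access. In this Python sketch, the fixed-length record layout, function name, and default block size are illustrative assumptions, not part of the original:

```python
def blocked_records(f, record_size, records_per_block=1024):
    """Yield fixed-length records one at a time while fetching them
    from the file a whole block at a time, so that consecutive
    records share a single (high-latency) disk access."""
    block_size = record_size * records_per_block
    while True:
        block = f.read(block_size)        # one access amortised over many records
        if not block:
            return
        for i in range(0, len(block), record_size):
            yield block[i:i + record_size]
```

The same blocking could be combined with the pre-read above, prefetching the next block while the current one is processed.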


Rizos Sakellariou 2000-07-31