LimitLESS Directories: A Scalable Cache Coherence Scheme

David Chaiken, John Kubiatowicz, and Anant Agarwal
Laboratory for Computer Science
Massachusetts Institute of Technology
Cambridge, Massachusetts 02139

Appeared in ASPLOS-IV, April 1991.

Abstract

Caches enhance the performance of multiprocessors by reducing network traffic and average memory access latency. However, cache-based systems must address the problem of cache coherence. We propose the LimitLESS directory protocol to solve this problem. The LimitLESS scheme uses a combination of hardware and software techniques to realize the performance of a full-map directory with the memory overhead of a limited directory. This protocol is supported by Alewife, a large-scale multiprocessor. We describe the architectural interfaces needed to implement the LimitLESS directory, and evaluate its performance through simulations of the Alewife machine.

1 Introduction

The communication bandwidth of interconnection networks is a critical resource in large-scale multiprocessors. This situation will remain unchanged in the future because physically constrained communication speeds cannot match the increasing bandwidth requirements of processors. Caches reduce the volume of traffic imposed on the network by automatically replicating data where it is needed. When a processor attempts to read or to write a unit of data, the system fetches the data from a remote memory module into a cache, which is a fast local memory dedicated to the processor. Subsequent accesses to the same data are satisfied within the local processing node, thereby avoiding repeat requests over the interconnection network.

In satisfying most memory requests, a cache increases the performance of the system in two ways: First, memory access latency incurred by the processors is shorter than in a system that does not cache data, because typical cache access times are much lower than interprocessor communication times (often, by several orders of magnitude). Second, when most requests are satisfied within processing nodes, the volume of network traffic is also lower.

However, replicating blocks of data in multiple caches introduces the cache coherence problem. When multiple processors maintain cached copies of a shared memory location, local modifications can result in a globally inconsistent view of memory. Buses in small-scale multiprocessors offer convenient solutions to the coherence problem that rely on system-wide broadcast mechanisms [4, 13, 15, 20, 24]. When any change is made to a data location, a broadcast is sent so that all of the caches in the system can either invalidate or update their local copy of the location. Unfortunately, this type of broadcast in large-scale multiprocessors negates the bandwidth reduction that makes caches attractive in the first place. In large-scale multiprocessors, broadcast mechanisms are either inefficient or prohibitively expensive to implement. Furthermore, it is difficult to make broadcasts atomic.

A number of cache coherence protocols have been proposed to solve the coherence problem in the absence of broadcast mechanisms [3, 6, 14, 23]. These message-based protocols allocate a section of the system's memory, called a directory, to store the locations and state of the cached copies of each data block. Instead of broadcasting a modified location, the memory system sends an invalidate message to each cache that has a copy of the data. The protocol must also record the acknowledgment of each of these messages to ensure that the global view of memory is actually consistent.

Although directory protocols have been around since the late 1970's, the usefulness of the early protocols (e.g., [6]) was in doubt for several reasons: First, the directory itself was a centralized monolithic resource which serialized all requests. Second, directory accesses were expected to consume a disproportionately large fraction of the available network bandwidth. Third, the directory became prohibitively large as the number of processors increased. To store pointers to blocks potentially cached by all the processors in the system, the early directory protocols (such as the Censier and Feautrier scheme [6]) allocate directory memory proportional to the product of the total memory size and the number of processors. While such full-map schemes permit unlimited caching, their directory size grows as Θ(N^2), where N is the number of processors in the system.

As observed in [3], the first two concerns are easily dispelled: The directory can be distributed along with main memory among the processing nodes to match the aggregate bandwidth of distributed main memory. Furthermore, required directory bandwidth is not much more than the memory bandwidth, because accesses destined to the directory alone comprise a small fraction of all network requests.
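The Θ(N^2) growth can be made concrete with a short back-of-the-envelope sketch. The symbols M (total memory size) and B (block size) are introduced here purely for illustration, along with the usual assumption that memory capacity scales with the machine:

    % Full-map overhead: one presence bit per processor for each of the M/B blocks.
    \[
      \text{directory bits} \;=\; \frac{M}{B}\,N
      \qquad\text{so, with } M \propto N,\qquad
      \text{directory bits} \;=\; \Theta(N^2).
    \]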
Thus, recent research in scalable directory protocols focuses on alleviating the severe memory requirements of the distributed full-map directory schemes.

Scalable coherence protocols differ in the size and the structure of the directory memory that is used to store the locations of cached blocks of data. Limited directory protocols [3], for example, avoid the severe memory overhead of full-map directories by allowing only a limited number of simultaneously cached copies of any individual block of data. Unlike a full-map directory, the size of a limited directory grows as Θ(N log N) with the number of processors, because it allocates only a small, fixed number of pointers per entry. Once all of the pointers in a directory entry are filled, the protocol must evict previously cached copies to satisfy new requests to read the data associated with the entry. In such systems, widely shared data locations degrade system performance by causing constant eviction and reassignment, or thrashing, of directory pointers. However, previous studies have shown that a small set of pointers is sufficient to capture the worker-set of processors that concurrently read many types of data [7, 19, 25]. The performance of limited directory schemes can approach the performance of full-map schemes if the software is optimized to minimize the number of widely-shared objects.

This paper proposes the LimitLESS cache coherence protocol, which realizes the performance of the full-map directory protocol, with the memory overhead of a limited directory, but without excessive sensitivity to software optimization. This new protocol is supported by the architecture of the Alewife machine, a large-scale, distributed-memory multiprocessor. Each processing node in the Alewife machine contains a processor, a floating-point unit, a cache, and portions of the system's globally shared memory and directory. The LimitLESS scheme implements a small set of pointers in the memory modules, as do limited directory protocols. But when necessary, the scheme allows a memory module to interrupt the processor for software emulation of a full-map directory. Since this new coherence scheme is partially implemented in software, it can work closely with a multiprocessor's compiler and run-time system.

The LimitLESS scheme should not be confused with schemes previously termed software-based, which require static identification of non-cacheable locations. Although the LimitLESS scheme is partially implemented in software, it dynamically detects when coherence actions are required; consequently, the software emulation should be considered a logical extension of the hardware functionality. To clarify the difference between protocols, schemes may be classified by function as static (compiler-dependent) or dynamic (using run-time information), and by implementation as software-based or hardware-based.

Chained directory protocols [14], another scalable alternative for cache coherence, avoid both the memory overhead of the full-map scheme and the thrashing problem of limited directories by distributing directory pointer information among the caches in the form of linked lists. But unlike the LimitLESS scheme, chained directories are forced to transmit invalidations sequentially through a linked-list structure (unless they implement some form of combining), and thus incur high write latencies for very large machines. Furthermore, chained directory protocols lack the LimitLESS protocol's ability to couple closely with a multiprocessor's software, as described in Section 6.

To evaluate the LimitLESS protocol, we have implemented the full-map directory, limited directory, and other cache coherence protocols in ASIM, the Alewife system simulator. Since ASIM is capable of simulating the entire Alewife machine, the different coherence schemes can be compared in terms of absolute execution time.

The next section describes the details of the Alewife machine's architecture that are relevant to the LimitLESS directory protocol. Section 3 introduces the LimitLESS protocol, and Section 4 presents the architectural interfaces needed to implement the new coherence scheme. Section 5 describes the Alewife system simulator and compares the different coherence schemes in terms of absolute execution time. Section 6 suggests extensions to the software component of the LimitLESS scheme, and Section 7 concludes the paper.

2 The Alewife Machine

Alewife is a large-scale multiprocessor with distributed shared memory. The machine, organized as shown in Figure 1, uses a cost-effective mesh network for communication. This type of architecture scales in terms of hardware cost and allows the exploitation of locality. Unfortunately, the non-uniform communication latencies make such machines hard to program because the onus of managing locality invariably falls on the programmer. The goal of the Alewife project is to discover and to evaluate techniques for automatic locality management in scalable multiprocessors, in order to insulate the programmer from the underlying machine details. Our approach to achieving this goal employs techniques for latency minimization and latency tolerance.

Alewife's multilayered approach, in which the compiler, runtime system, and hardware cooperate in enhancing communication locality, reduces average communication latency and required network bandwidth. Shared-data caching in Alewife is an example of a hardware method for reducing communication traffic. This method is dynamic, rather than static. Compiler partitioning and placement, together with near-neighbor scheduling, are Alewife's software methods for achieving the same effect.

When the system cannot avoid a remote memory request and is forced to incur the latency of the communication network, an Alewife processor rapidly schedules another process in place of the stalled process. Alewife can also tolerate synchronization latencies through the same context switching mechanism. Because context switches are forced only on memory requests that require the use of the interconnection network and on synchronization faults, the processor achieves high single-thread performance. Some systems have opted to use weak ordering [1, 11, 12] to tolerate certain types of communication latency, but this method lacks the ability to overlap read-miss and synchronization latencies. Although the Alewife cache coherence protocol enforces sequential consistency [18], the LimitLESS directory scheme can also be used with a weakly-ordered memory model.
We have designed a new processor architecture that can rapidly switch between processes [2]. The first-round implementation of the processor, called SPARCLE, will switch between processes in 11 cycles. The rapid-switching features of SPARCLE allow an efficient implementation of LimitLESS directories.

An Alewife node consists of a 33 MHz SPARCLE processor, 64K bytes of direct-mapped cache, 4M bytes of globally-shared main memory, and a floating-point coprocessor. Both the cache and floating-point units are SPARC compatible [22]. The nodes communicate via messages through a direct network [21] with a mesh topology using wormhole routing [10]. A single-chip controller on each node holds the cache tags and implements the cache coherence protocol by synthesizing messages to other nodes. Figure 1 is an enlarged view of a node in the Alewife machine. Because the directory itself is distributed along with the main memory, its bandwidth scales with the number of processors in the system. The SPARCLE processor is being implemented jointly with LSI Logic and SUN Microsystems through modifications to an existing SPARC design. The design of the cache/memory controller is also in progress.

[Figure 1: Alewife node, LimitLESS directory extension. Each node contains a SPARCLE processor, an FPU, a cache, a cache controller, a network router, and portions of the distributed shared memory and distributed directory; the directory pointer array for a shared block X is extended into local memory by software.]

3 The LimitLESS Protocol

As do limited directory protocols, the LimitLESS directory scheme capitalizes on the observation that only a few shared memory data types are widely shared among processors. Many shared data structures have a small worker-set, which is defined as the set of processors that concurrently read a memory location. The worker-set of a memory block corresponds to the number of active pointers it would have in a full-map directory entry. The observation that worker-sets are often small has led some researchers to propose the use of a hardware cache of pointers to augment the limited directory for a few widely-shared memory blocks [19]. However, when running properly optimized software, a directory entry overflow is an exceptional condition in the memory system. We propose to handle such "protocol exceptions" in software. This is the integrated systems approach: handling common cases in hardware and exceptional cases in software.

The LimitLESS scheme implements a small number of hardware pointers for each directory entry. If these pointers are not sufficient to store the locations of all of the cached copies of a given block of memory, then the memory module interrupts the local processor. The processor then emulates a full-map directory for the block of memory that caused the interrupt. The structure of the Alewife machine provides for an efficient implementation of this memory system extension. Since each processing node in Alewife contains both a memory controller and a processor, it is a straightforward modification of the architecture to couple the responsibilities of these two functional units. This scheme is called LimitLESS, to indicate that it employs a Limited directory that is Locally Extended through Software Support. Figure 1 is an enlarged view of a node in the Alewife machine. The diagram depicts a set of directory pointers that correspond to the shared data block X, copies of which exist in several caches. In the figure, the software has extended the directory pointer array (which is shaded) into local memory.

Since Alewife's SPARCLE processor is designed with a fast trap mechanism, the overhead of the LimitLESS interrupt is not prohibitive. The emulation of a full-map directory in software prevents the LimitLESS protocol from exhibiting the sensitivity to software optimization that is exhibited by limited directory schemes. But given current technology, the delay needed to emulate a full-map directory completely in software is significant. Consequently, the LimitLESS protocol supports small worker-sets of processors in its limited directory entries, implemented in hardware.

3.1 A Simple Model of the Protocol

Before discussing the details of the new coherence scheme, it is instructive to examine a simple model of the relationship between the performance of a full-map directory and the LimitLESS directory scheme. Let T_h be the average remote memory access latency for a full-map directory protocol. T_h includes factors such as the delay in the cache and memory controllers, invalidation latencies, and network latency. Given T_h, the formula T_h + m T_s approximates the average remote memory access latency for the LimitLESS protocol. T_s (the software latency) is the average delay for the full-map directory emulation interrupt, and m is the fraction of memory accesses that overflow the small set of pointers implemented in hardware.

For example, our simulations of a Weather trace on a 64 node Alewife system (see Section 5) indicate that T_h is approximately 35 cycles. If T_s = 100 cycles, then remote accesses with the LimitLESS scheme are 10% slower (on average) than with the full-map protocol when m is roughly 3%. In the Weather program, 97% of accesses hit in the limited directory.
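As a quick check of the arithmetic behind these figures (a sketch only, using the T_h of about 35 cycles and T_s = 100 cycles quoted above):

    \[
      \frac{T_h + m\,T_s}{T_h} \;=\; 1 + m\,\frac{T_s}{T_h}
      \;\approx\; 1 + 0.03 \times \frac{100}{35} \;\approx\; 1.09,
    \]
    % i.e., roughly the 10% average slowdown quoted for m of about 3%.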
LimitLESS directories are scalable, because the memory overhead grows as Θ(N log N), and the performance approaches that of a full-map directory as system size increases. Although in a 64 processor machine T_h and T_s are comparable, in much larger systems the internode communication latency will be much larger than the processors' interrupt handling latency (T_h >> T_s). Furthermore, improving processor technology will make T_s even less significant. This approximation indicates that if both processor speeds and multiprocessor sizes increase, handling cache coherence completely in software (m = 1) will become a viable option. The LimitLESS protocol is the first step on the migration path towards interrupt-driven cache coherence. Other systems [9] have experimented with handling cache misses entirely in software.

3.2 Protocol Specification

In the above discussion, we assume that the hardware latency (T_h) is approximately equal for the full-map and the LimitLESS directories, because the LimitLESS protocol has the same state transition diagram as the full-map protocol. The memory controller side of this protocol is illustrated in Figure 2, which contains the memory states listed in Table 2. These states are mirrored by the state of the block in the caches, also listed in Table 2. It is the responsibility of the protocol to keep the states of the memory and the cache blocks coherent. The protocol enforces coherence by transmitting messages (listed in Table 1) between the cache/memory controllers. Every message contains the address of a memory block, to indicate which directory entry should be used when processing the message. Table 1 also indicates whether a message contains the data associated with a memory block.

    Type              Symbol   Name               Data?
    Cache to Memory   RREQ     Read Request
                      WREQ     Write Request
                      REPM     Replace Modified   yes
                      UPDATE   Update             yes
                      ACKC     Invalidate Ack.
    Memory to Cache   RDATA    Read Data          yes
                      WDATA    Write Data         yes
                      INV      Invalidate
                      BUSY     Busy Signal

    Table 1: Protocol messages for hardware coherence.

The state transition diagram in Figure 2 specifies the states, the composition of the pointer set (P), and the transitions between the states. Each transition is labeled with a number that refers to its specification in Table 3. This table annotates the transitions with the following information: 1. The input message from a cache that initiates the transaction and the identifier of the cache that sends it. 2. A precondition (if any) for executing the transition. 3. Any directory entry change that the transition may require. 4. The output message or messages that are sent in response to the input message. Note that certain transitions require the use of an acknowledgment counter (AckCtr), which is used to ensure that cached copies are invalidated before allowing a write transaction to be completed.

[Figure 2: Directory state transition diagram. The states are Read-Only (P = {k1, k2, ..., kn}), Read-Write (P = {i}), Read Transaction (P = {i}), and Write Transaction (P = {i}); the transitions are numbered 1 through 10, and the Read-Only state carries the annotation S: n > p.]
    Component   Name                Meaning
    Memory      Read-Only           Some number of caches have read-only copies of the data.
                Read-Write          Exactly one cache has a read-write copy of the data.
                Read-Transaction    Holding read request, update is in progress.
                Write-Transaction   Holding write request, invalidation is in progress.
    Cache       Invalid             Cache block may not be read or written.
                Read-Only           Cache block may be read, but not written.
                Read-Write          Cache block may be read or written.

    Table 2: Directory states.

    Transition   Input Message   Precondition                     Directory Entry Change    Output Message(s)
    1            i -> RREQ       --                               P = P U {i}               RDATA -> i
    2            i -> WREQ       P = {i}                          --                        WDATA -> i
                 i -> WREQ       P = {}                           P = {i}                   WDATA -> i
    3            i -> WREQ       P = {k1,...,kn} and i not in P   P = {i}, AckCtr = n       INV -> kj, for all kj
                 i -> WREQ       P = {k1,...,kn} and i in P       P = {i}, AckCtr = n - 1   INV -> kj, for all kj != i
    4            j -> WREQ       P = {i}                          P = {j}, AckCtr = 1       INV -> i
    5            j -> RREQ       P = {i}                          P = {j}, AckCtr = 1       INV -> i
    6            i -> REPM       P = {i}                          P = {}                    --
    7            j -> RREQ       --                               --                        BUSY -> j
                 j -> WREQ       --                               --                        BUSY -> j
                 j -> ACKC       AckCtr != 1                      AckCtr = AckCtr - 1       --
                 j -> REPM       --                               --                        --
    8            j -> ACKC       AckCtr = 1, P = {i}              AckCtr = 0                WDATA -> i
                 j -> UPDATE     P = {i}                          AckCtr = 0                WDATA -> i
    9            j -> RREQ       --                               --                        BUSY -> j
                 j -> WREQ       --                               --                        BUSY -> j
                 j -> REPM       --                               --                        --
    10           j -> UPDATE     P = {i}                          AckCtr = 0                RDATA -> i
                 j -> ACKC       P = {i}                          AckCtr = 0                RDATA -> i

    Table 3: Annotation of the state transition diagram.

For example, Transition 2 from the Read-Only state to the Read-Write state is taken when cache i requests write permission (WREQ) and the pointer set is empty or contains just cache i (P = {} or P = {i}). In this case, the pointer set is modified to contain i (if necessary) and the memory controller issues a message containing the data of the block to be written (WDATA).

Following the notation in [3], both full-map and LimitLESS are members of the Dir_N NB class of cache coherence protocols. Therefore, from the point of view of the protocol specification, the LimitLESS scheme does not differ substantially from the full-map protocol. In fact, the LimitLESS protocol is also specified in Figure 2. The extra notation on the Read-Only ellipse (S: n > p) indicates that the state is handled in software when the size of the pointer set (n) is greater than the size of the limited directory (p). In this situation, the transitions with the square labels (1, 2, and 3) are executed by the interrupt handler on the processor that is local to the overflowing directory. When the protocol changes from a software-handled state to a hardware-handled state, the processor must modify the directory state so that the memory controller can resume responsibility for the protocol transitions.

The Alewife machine will support an optimization of the LimitLESS protocol that maximizes the number of transactions serviced in hardware. When the controller interrupts the processor due to a pointer array overflow, the processor completely empties the pointer array into local memory, allowing the controller to continue handling read requests until the next pointer array overflow. This optimization is called Trap-On-Write, because the memory controller must interrupt the processor upon a write request, even though it can handle read requests itself.
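To make the hardware side of these transitions concrete, the following C sketch shows how a memory controller might apply the Table 3 rules for a block in the Read-Only state, including the overflow case that LimitLESS diverts to software. The structure layout, field names, and helper functions are illustrative assumptions, not the Alewife implementation.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_HW_POINTERS 5            /* Alewife keeps five hardware pointers per entry */

    enum dir_state { READ_ONLY, READ_WRITE, READ_TRANS, WRITE_TRANS };

    struct dir_entry {                   /* one entry per memory block (illustrative layout) */
        enum dir_state state;
        uint16_t ptr[NUM_HW_POINTERS];   /* P: identifiers of caches holding copies */
        uint8_t  nptrs;                  /* current size of the pointer set          */
        uint16_t ack_ctr;                /* AckCtr for pending invalidations         */
    };

    extern void send_rdata(uint16_t cache);   /* assumed controller primitives */
    extern void send_inv(uint16_t cache);

    /* Transition 1 (Table 3): read request while Read-Only.  Returns false when
     * the pointer array is full, i.e. the request must be diverted to software. */
    static bool handle_rreq_read_only(struct dir_entry *d, uint16_t i)
    {
        for (uint8_t k = 0; k < d->nptrs; k++)
            if (d->ptr[k] == i) { send_rdata(i); return true; }   /* i already in P */
        if (d->nptrs == NUM_HW_POINTERS)
            return false;                /* pointer overflow: LimitLESS trap */
        d->ptr[d->nptrs++] = i;          /* P = P U {i}                      */
        send_rdata(i);                   /* RDATA -> i                       */
        return true;
    }

    /* Transition 3 (Table 3): write request while Read-Only with other readers. */
    static void handle_wreq_read_only(struct dir_entry *d, uint16_t i)
    {
        uint16_t acks = 0;
        for (uint8_t k = 0; k < d->nptrs; k++)
            if (d->ptr[k] != i) {
                send_inv(d->ptr[k]);     /* INV -> kj, for all kj != i               */
                acks++;
            }
        d->ptr[0] = i;                   /* P = {i}                                  */
        d->nptrs = 1;
        d->ack_ctr = acks;               /* AckCtr = n (or n - 1 if i was a reader)  */
        d->state = WRITE_TRANS;          /* wait for ACKCs before sending WDATA      */
    }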

4 Interfaces for LimitLESS

This section discusses the architectural features and hardware interfaces needed to support the LimitLESS directory scheme. We describe how these interfaces are supported in the Alewife machine. Since the Alewife network interface is somewhat unique for shared-memory machines, it is examined in detail. Afterwards, we introduce the additional directory state that Alewife supports, over and above the state needed for a limited directory protocol, and examine its application to LimitLESS.

To support the LimitLESS protocol efficiently, a cache-based multiprocessor needs several properties. First, it must be capable of rapid trap handling, because LimitLESS is an extension of hardware through software. The Alewife machine couples a rapid context switching processor (SPARCLE) with a finely-tuned software trap architecture, permitting execution of trap code within five to ten cycles from the time a trap is initiated.

Second, the processor needs complete access to coherence related controller state such as pointers and state bits in the hardware directories. Similarly, the directory controller must be able to invoke processor trap handlers when necessary. The hardware interface between the Alewife processor and controller, depicted in Figure 3, is designed to meet these requirements. The address and data buses permit processor manipulation of controller state and initiation of actions via load and store instructions to memory-mapped I/O space. In Alewife, the directories are placed in this special region of memory, distinguished from normal memory space by a distinct Alternate Space Indicator (ASI) [2]. The controller returns two condition bits and several trap lines to the processor.

[Figure 3: Signals between processor and controller: address bus, data bus, condition bits, and trap lines.]

Finally, a machine implementing the LimitLESS scheme needs an interface to the network that allows the processor to launch and to intercept coherence protocol packets. Although most shared-memory multiprocessors export little or no network functionality to the processor, Alewife provides the processor with direct network access through the Interprocessor-Interrupt (IPI) mechanism. The following subsections outline the implementations of the IPI mechanism, the LimitLESS directory modes, and the trap handlers.
4.1 Interprocessor-Interrupt (IPI)

The Alewife machine supports a complete interface to the interconnection network. This interface provides the processor with a superset of the network functionality needed by the cache-coherence hardware. Not only can it be used to send and receive cache protocol packets, but it can also be used to send preemptive messages to remote processors (as in message-passing machines), hence the name Interprocessor-Interrupt.

We stress that the IPI interface is a single generic mechanism for network access, not a conglomeration of different mechanisms. The power of such a mechanism lies in its generality.

Network Packet Structure.  To simplify the IPI interface, network packets have a single, uniform structure, shown in Figure 4. This figure includes only the information seen at the destination; routing information is stripped off by the network. The Packet Header contains the identifier of the source processor, the length of the packet, and an opcode. It is a single word in the Alewife machine. Following the header are zero or more operands and data words. The distinction between operands and data is software-imposed; however, it is a useful abstraction supported by the IPI interface.

[Figure 4: Uniform packet format for the Alewife machine: a header (source processor, packet length, opcode), followed by operands 0 through m-1 and data words 0 through n-1.]

Opcodes are divided into two distinct classes: protocol and interrupt. Protocol opcodes are used for cache-coherence traffic. While they are normally produced and consumed by the controller hardware, they can also be produced or consumed by the LimitLESS trap-handler. Protocol opcodes designate the type of coherence transaction; for example, a cache read miss generates a message with the RREQ opcode. Packets with protocol opcodes are called protocol packets.

Interrupt opcodes have their most significant bit set and are used for interprocessor messages. Their format is defined entirely by the software. Packets with interrupt opcodes are called interprocessor interrupts and are processed in software at their destinations.
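A C sketch of this uniform packet layout follows. The struct definitions, field widths, and helper are illustrative assumptions (the real header is a single word with hardware-defined field positions); they only show how trap software might view a packet at the head of a queue.

    #include <stdint.h>

    #define IPI_OPCODE_INTERRUPT_BIT (1u << 11)   /* assumed 12-bit opcode field; MSB set => interrupt class */

    /* Header fields packed into one machine word (field widths are assumptions). */
    struct ipi_header {
        uint32_t source : 10;    /* identifier of the source processor */
        uint32_t length : 10;    /* total packet length in words       */
        uint32_t opcode : 12;    /* protocol or interrupt opcode       */
    };

    /* Logical view of a packet as seen at the destination: header, then zero or
     * more operands, then zero or more data words.  The split between operands
     * and data is a software convention. */
    struct ipi_packet {
        struct ipi_header header;
        uint32_t words[];        /* operand 0..m-1 followed by data word 0..n-1 */
    };

    static inline int is_interrupt_opcode(uint32_t opcode)
    {
        return (opcode & IPI_OPCODE_INTERRUPT_BIT) != 0;
    }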
Transmission of IPI Packets.  A simplified, queue-based diagram of the Alewife cache/memory controller is shown in Figure 5. This is a "memory-side" diagram; for simplicity it excludes the processor cache.

[Figure 5: Queue-based diagram of the Alewife controller: the SPARCLE processor, the transmit and receive queues to and from the network, the IPI input and output queues, the message processor, and memory.]

The processor interface uses memory-mapped store instructions to specify destination, opcode, and operands. It also specifies a starting address and length for the data portion of the packet. Taken together, this information completely specifies an outgoing packet. Note that operands and data are distinguished by their specification: operands are written explicitly through the interface, while data is fetched from memory. The processor initiates transmission by storing to a special trigger location, which enqueues the request on the IPI output queue.
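The following sketch conveys the flavor of that descriptor-style send interface in C. Every register address and name here is a hypothetical placeholder; the real interface is a set of memory-mapped locations in Alewife's alternate address space.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical memory-mapped transmit registers (addresses are placeholders). */
    #define IPI_TX_DEST    ((volatile uint32_t *)0xF0000000u)
    #define IPI_TX_OPCODE  ((volatile uint32_t *)0xF0000004u)
    #define IPI_TX_OPERAND ((volatile uint32_t *)0xF0000008u)
    #define IPI_TX_ADDR    ((volatile uint32_t *)0xF000000Cu)  /* start of data portion */
    #define IPI_TX_LENGTH  ((volatile uint32_t *)0xF0000010u)  /* data length in words   */
    #define IPI_TX_TRIGGER ((volatile uint32_t *)0xF0000014u)  /* store here to launch   */

    /* Describe and launch an outgoing packet: operands are written explicitly,
     * while the data portion is named by (address, length) and fetched from
     * memory by the controller.  Storing to the trigger location enqueues the
     * request on the IPI output queue. */
    static void ipi_send(uint32_t dest, uint32_t opcode,
                         const uint32_t *operands, size_t n_operands,
                         const void *data, uint32_t data_words)
    {
        *IPI_TX_DEST   = dest;
        *IPI_TX_OPCODE = opcode;
        for (size_t k = 0; k < n_operands; k++)
            *IPI_TX_OPERAND = operands[k];
        *IPI_TX_ADDR   = (uint32_t)(uintptr_t)data;
        *IPI_TX_LENGTH = data_words;
        *IPI_TX_TRIGGER = 1;            /* controller builds and sends the packet */
    }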
Reception of IPI Packets.  When the controller wishes to hand a packet to the processor, it places it in a special input buffer, the IPI input queue. This queue is large enough for several protocol packets and overflows into the network receive queue. The forwarding of packets to the IPI queue is accompanied by an interrupt to the processor.

The header (source, length, opcode) and operands of the packet at the head of the IPI input queue can be examined with simple load instructions. Once the trap routine has examined the header and operands, it can either discard the packet or store it to memory, beginning at a specified location. In the latter case, the data that is stored starts from a specified offset in the packet. This store-back capability permits message-passing and block-transfers in addition to enabling the processing of protocol packets with data.

IPI input traps are synchronous, that is, they are capable of interrupting instruction execution. This is necessary, because the queue topology shown in Figure 5 is otherwise subject to deadlock. If the processor pipeline is being held for a remote cache-fill* and the IPI input queue overflows, then the receive queue will be blocked, preventing the load or store from completing. At this point, a synchronous trap must be taken to empty the input queue. Since trap code is stored in local memory, it may be executed without network transactions.

* While the Alewife machine switches contexts on remote cache misses (see [2]) under normal circumstances, certain forward-progress concerns dictate that we occasionally hold the processor while waiting for a cache-fill.
4.2 Meta States

As described in Section 3, the LimitLESS scheme consists of a series of extensions to the basic limited directory protocol. That section specifies circumstances under which the memory controller would invoke the software. LimitLESS requires a small amount of hardware support in addition to the IPI interface.

This support consists of two components: meta states and pointer overflow trapping. Meta states are directory modes, listed in Table 4. They may be described as follows:

  - The hardware maintains coherence for Normal-mode memory blocks. The worker-sets of such blocks are no larger than the number of hardware pointers.

  - The Trans-In-Progress mode is entered automatically when a protocol packet is passed to software (by placing it in the IPI input queue). It instructs the controller to block on all future protocol packets for the associated memory block. The mode is cleared by the LimitLESS trap code after processing the packet.

  - For memory blocks in the Trap-On-Write mode, read requests are handled as usual, but write requests (WREQ), update packets (UPDATE), and replace-modified packets (REPM) are forwarded to the IPI input queue. When packets are forwarded to the IPI queue, the directory mode is changed to Trans-In-Progress.

  - Trap-Always instructs the controller to pass all protocol packets to the processor. As with Trap-On-Write, the mode is switched to Trans-In-Progress when a packet is forwarded to the processor.

    Meta State          Description
    Normal              Directory handled by hardware.
    Trans-In-Progress   Interlock, software processing.
    Trap-On-Write       Trap: WREQ, UPDATE, REPM.
    Trap-Always         Trap: all incoming packets.

    Table 4: Directory meta states for LimitLESS scheme.

The two bits required to represent these states are stored in directory entries along with the states of Figure 2 and five hardware pointers.

Controller behavior for pointer overflow is straightforward: when a memory line is in the Read-Only state and all hardware pointers are in use, an incoming read request for this line (RREQ) will be diverted into the IPI input queue and the directory mode will be switched to Trans-In-Progress.

Local Memory Faults.  What about local processor accesses? A processor access to local memory that must be handled by software causes a memory fault. The controller places the faulting address and access type (i.e. read or write) in special controller registers, then invokes a synchronous trap.

A trap handler must alter the directory when processing a memory fault to avoid an identical fault when the trap returns. To permit the extensions suggested in Section 6, the Alewife machine reserves a one bit pointer in each hardware directory entry, called the Local Bit. This bit ensures that local read requests will never overflow a directory. In addition, the trap handler can set this bit after a memory fault to permit the faulting access to complete.
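A compact sketch of how the controller's dispatch on meta state and message type might look, following Table 4. The enum values and the shape of the decision are assumptions carried over from the earlier sketches, not the hardware's actual encoding.

    #include <stdbool.h>

    enum meta_state { NORMAL, TRANS_IN_PROGRESS, TRAP_ON_WRITE, TRAP_ALWAYS };
    enum msg { RREQ, WREQ, UPDATE, ACKC, REPM };

    /* Decide whether an incoming protocol packet is handled by the hardware
     * protocol engine or diverted to the IPI input queue for the LimitLESS
     * trap handler (which then flips the mode to Trans-In-Progress). */
    static bool divert_to_software(enum meta_state mode, enum msg m, bool ptrs_full)
    {
        switch (mode) {
        case NORMAL:
            /* Only a read request that overflows the pointer array traps. */
            return (m == RREQ) && ptrs_full;
        case TRAP_ON_WRITE:
            /* Reads stay in hardware; writes, updates, and replaces trap. */
            return (m == WREQ) || (m == UPDATE) || (m == REPM);
        case TRAP_ALWAYS:
            return true;
        case TRANS_IN_PROGRESS:
            /* Software is already processing this block: the controller blocks
             * further protocol packets rather than diverting another one. */
            return false;
        }
        return false;
    }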

4.3 LimitLESS Trap Handler

The current implementation of the LimitLESS trap handler is as follows: when an overflow trap occurs for the first time on a given memory line, the trap code allocates a full-map bit-vector in local memory. This vector is entered into a hash table. All hardware pointers are emptied and the corresponding bits are set in this vector. The directory mode is set to Trap-On-Write before the trap returns. When additional overflow traps occur, the trap code locates the full-map vector in the hash table, empties the hardware pointers, and sets the appropriate bits in the vector.

Software handling of a memory line terminates when the processor traps on an incoming write request (WREQ) or local write fault. The trap handler finds the full-map bit vector and empties the hardware pointers as above. Next, it records the identity of the requester in the directory, sets the acknowledgment counter to the number of bits in the vector that are set, and places the directory in the Normal mode, Write Transaction state. Finally, it sends invalidations to all caches with bits set in the vector. The vector may now be freed. At this point, the memory line has returned to hardware control. When all invalidations are acknowledged, the hardware will send the data with write permission to the requester.

Since the trap handler is part of the Alewife software system, many other implementations are possible.
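The two cases above (overflow on a read, and termination on a write) can be summarized in the following C sketch. The hash table, bit-vector helpers, and directory accessors are hypothetical; the real handler runs as SPARCLE trap code against the controller's memory-mapped directory.

    #include <stdint.h>

    #define NPROCS 64

    typedef uint64_t bitvec_t;                      /* one bit per processor (64-node machine) */

    /* Hypothetical helpers over local memory and the hardware directory. */
    extern bitvec_t *vector_lookup_or_alloc(uint32_t block);   /* hash table of full-map vectors */
    extern void     vector_free(uint32_t block);
    extern int      dir_num_pointers(uint32_t block);
    extern uint16_t dir_pop_pointer(uint32_t block);            /* empties one hardware pointer  */
    extern void     dir_set_mode_trap_on_write(uint32_t block);
    extern void     dir_set_normal_write_trans(uint32_t block, uint16_t requester, int ack_ctr);
    extern void     send_inv(uint16_t cache);

    /* Overflow trap: move the hardware pointers into the full-map bit vector. */
    void limitless_overflow_trap(uint32_t block)
    {
        bitvec_t *v = vector_lookup_or_alloc(block);
        while (dir_num_pointers(block) > 0)
            *v |= (bitvec_t)1 << dir_pop_pointer(block);
        dir_set_mode_trap_on_write(block);          /* reads keep being served in hardware */
    }

    /* Write trap: count the recorded readers, hand the block back to hardware in
     * the Normal mode, Write Transaction state, then invalidate every copy. */
    void limitless_write_trap(uint32_t block, uint16_t requester)
    {
        bitvec_t *v = vector_lookup_or_alloc(block);
        while (dir_num_pointers(block) > 0)         /* fold in any remaining hardware pointers */
            *v |= (bitvec_t)1 << dir_pop_pointer(block);

        int acks = 0;
        for (int p = 0; p < NPROCS; p++)
            if (*v & ((bitvec_t)1 << p))
                acks++;
        dir_set_normal_write_trans(block, requester, acks);

        for (int p = 0; p < NPROCS; p++)
            if (*v & ((bitvec_t)1 << p))
                send_inv((uint16_t)p);              /* invalidation to every cached copy */
        vector_free(block);                         /* hardware completes the transaction */
    }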
Mul-T program                                             of styles.
                                                                         The simulation overhead for large machines forces a
                                                                      trade-o between application size and simulated system
            Mul-T Compiler                Alewife Runtime
                                                                      size. Programs with enough parallelism to execute well on
                                                                      a large machine take an inordinate time to simulate. When
                                                                      ASIM is con gured with its full statistics-gathering capa-
   SPARCLE machine language program                                   bility, it runs at about 5000 processor cycles per second on
                                                                      an unloaded SPARCserver 330. At this rate, a 64 processor
          SPARCLE Simulator
                                          Parallel Traces
                                                                      machine runs approximately 80 cycles per second. Most of
                                                                      the simulations that we chose for this paper run for roughly
                                       Dynamic Post-Mortem            one million cycles (a fraction of a second on a real machine),
                                                                      which takes 3.5 hours to complete. This lack of simulation
   Memory requests/acknowledgements
                                            Scheduler

                                                                      speed is one of the primary reasons for implementing the
        Cache/Memory Simulator                                        Alewife machine in hardware | to enable a thorough eval-
                                                                      uation of our ideas.
          Network transactions                                           To evaluate the potential bene ts of the LimitLESS co-
                                                                      herence scheme, we implemented an approximation of the
           Network Simulator                                          new protocol in ASIM. The technique assumes that the over-
                                                                      head of the LimitLESS full-map emulation interrupt is ap-
limited, LimitLESS, and full-map directories. The protocols are evaluated in terms of the total number of cycles needed to execute an application on a 64-processor Alewife machine. Using execution cycles as a metric emphasizes the bottom line of multiprocessor design: how fast a system can run a program.

5.1 The Measurement Technique

The results presented below are derived from complete Alewife machine simulations and from dynamic post-mortem scheduler simulations. Figure 6 illustrates these two branches of ASIM, the Alewife Simulator.

Figure 6: Diagram of ASIM, the Alewife system simulator.

ASIM models each component of the Alewife machine, from the multiprocessor software to the switches in the interconnection network. The complete-machine simulator runs programs that are written in the Mul-T language [16], optimized by the Mul-T compiler, and linked with a run-time system that implements both static work distribution and dynamic task partitioning and scheduling. The code generated by this process runs on a simulator consisting of processor, cache/memory, and network modules.

Although the memory accesses in ASIM are usually derived from applications running on the SPARCLE processor, ASIM can alternatively derive its input from a dynamic post-mortem trace scheduler, shown on the right side of Figure 6. Post-mortem scheduling is a technique that generates a parallel trace from a uniprocessor execution trace that has embedded synchronization information [8]. The post-mortem scheduler is coupled with the memory system simulator and incorporates feedback from the network in issuing trace requests, as described in [17]. The use of this input source is important because it lets us expand the workload set to include large parallel applications written in a variety of programming styles.

The simulations assume that the LimitLESS software handling latency is approximately the same for all memory requests that overflow a directory entry's pointer array. This is the Ts parameter described in Section 3. During the simulations, ASIM simulates an ordinary full-map protocol, but when the simulator encounters a pointer array overflow, it stalls both the memory controller and the processor that would handle the LimitLESS interrupt for Ts cycles. While this evaluation technique only approximates the actual behavior of the fully operational LimitLESS scheme, it is a reasonable method for determining whether to expend the greater effort needed to implement the complete protocol.
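The stall model is simple enough to capture in a few lines. The sketch below shows how such a model might be embedded in a cycle-driven memory-controller simulator; it is only an illustration of the technique described above, and the identifiers (dir_entry_t, node_state_t, HW_POINTERS, T_S, record_read_request) are hypothetical rather than ASIM's actual interfaces.

/* Illustrative stall model for evaluating LimitLESS in a cycle-driven
 * simulator.  All identifiers are hypothetical; ASIM's real interfaces
 * are not described at this level of detail. */

#include <stdint.h>

#define NUM_NODES   64     /* simulated machine size                       */
#define HW_POINTERS 4      /* pointers kept in the hardware directory      */
#define T_S         50     /* assumed software-handler latency, in cycles  */

typedef struct {
    uint16_t sharers[NUM_NODES];  /* full-map bookkeeping kept by the simulator */
    int      num_sharers;
} dir_entry_t;

typedef struct {
    uint64_t mem_busy_until;   /* cycle when the memory controller is free  */
    uint64_t cpu_busy_until;   /* cycle when the local processor is free    */
} node_state_t;

/* Record a new reader of a block at its home node.  The simulator always
 * tracks the full sharer set (i.e., it runs a full-map protocol), but when
 * the set grows past the hardware pointer count it charges T_S cycles of
 * stall to both the memory controller and the processor that would run
 * the LimitLESS trap handler. */
void record_read_request(dir_entry_t *dir, node_state_t *home,
                         uint64_t now, uint16_t requester)
{
    dir->sharers[dir->num_sharers++] = requester;   /* assume not already a sharer */

    if (dir->num_sharers > HW_POINTERS) {
        /* Pointer-array overflow: both resources are unavailable for T_S cycles. */
        if (home->mem_busy_until < now + T_S)
            home->mem_busy_until = now + T_S;
        if (home->cpu_busy_until < now + T_S)
            home->cpu_busy_until = now + T_S;
    }
}

Charging the stall to both resources mirrors the description above: the memory controller cannot service other requests while the (emulated) trap handler occupies the local processor.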
5.2 Performance Results

Table 5 shows the simulated performance of four applications, using a four-pointer limited protocol (Dir4NB), a full-map protocol, and a LimitLESS (LimitLESS4) scheme with Ts = 50. The table presents the performance of each application/protocol combination in terms of the time needed to run the program, in millions of processor cycles. All of the runs simulate a 64-node Alewife machine with 64K byte caches and a two-dimensional mesh network.

    Application    Dir4NB    LimitLESS4    Full-Map
    Multigrid       0.729         0.704       0.665
    SIMPLE          3.579         2.902       2.553
    Matexpr         1.296         0.317       0.171
    Weather         1.356         0.654       0.621

Table 5: Performance for three coherence schemes, in terms of millions of cycles.

Multigrid is a statically scheduled relaxation program, Weather forecasts the state of the atmosphere given an initial state, SIMPLE simulates the hydrodynamic and thermal behavior of fluids, and Matexpr performs several multiplications and additions of various sized matrices. The computations in Matexpr are partitioned and scheduled by a compiler. The Weather and SIMPLE applications are measured using dynamic post-mortem scheduling of traces, while Multigrid and Matexpr are run on complete-machine simulations.

Since the LimitLESS scheme implements a full-fledged limited directory in hardware, applications that perform well using a limited scheme also perform well using LimitLESS. Multigrid is such an application. All of the protocols, including the four-pointer limited directory (Dir4NB), the full-map directory, and the LimitLESS scheme require approximately the same time to complete the computation phase. This confirms the assumption that for applications with small worker-sets, such as multigrid, the limited (and therefore the LimitLESS) directory protocols perform almost as well as the full-map protocol. See [7] for more evidence of the general success of limited directory protocols.

To measure the performance of LimitLESS under extreme conditions, we simulated a version of SIMPLE with barrier synchronization implemented using a single lock (rather than a software combining tree). Although the worker-sets in SIMPLE are small for the most part, the globally shared barrier structure causes the performance of the limited directory protocol to suffer. In contrast, the LimitLESS scheme performs almost as well as the full-map directory protocol, because LimitLESS is able to distribute the barrier structure to as many processors as necessary.
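The contrast between the two barrier styles is easy to see in code. The sketch below is a generic centralized, sense-reversing barrier guarded by a single lock; it is not the SIMPLE source, and acquire/release stand in for whatever lock primitives the run-time system provides. Its point is simply that every processor ends up caching, and spinning on, the same word, so that word's worker-set grows to the size of the machine.

/* Generic single-lock barrier, shown only to illustrate why one shared
 * structure acquires a machine-wide worker-set.  acquire()/release() are
 * assumed lock primitives; this is not code from SIMPLE or Alewife. */

typedef struct {
    int          lock;     /* single lock protecting the arrival count     */
    int          count;    /* processors that have arrived so far          */
    int          nprocs;   /* total number of processors                   */
    volatile int sense;    /* read -- and cached -- by every processor     */
} barrier_t;

extern void acquire(int *lock);
extern void release(int *lock);

void barrier_wait(barrier_t *b, int *local_sense)
{
    *local_sense = !*local_sense;

    acquire(&b->lock);
    if (++b->count == b->nprocs) {          /* last arriver releases everyone */
        b->count = 0;
        b->sense = *local_sense;
        release(&b->lock);
    } else {
        release(&b->lock);
        /* Everyone else spins on b->sense, so its worker-set is the whole
         * machine.  A limited directory thrashes on this location, while
         * the LimitLESS handler simply records the extra readers in
         * software.  A software combining tree avoids the problem by
         * splitting the barrier into many small-worker-set pieces. */
        while (b->sense != *local_sense)
            ;
    }
}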
The Matexpr application uses several variables that have worker-sets of up to 16 processors. Due to these large worker-sets, the processing time with the LimitLESS scheme is almost double that with the full-map protocol. The limited protocol, however, exhibits a much higher sensitivity to the large worker-sets.

Weather provides a case-study of an application that has not been completely optimized for limited directory protocols. Although the simulated application uses software combining trees to distribute its barrier synchronization variables, Weather has one variable initialized by one processor and then read by all of the other processors. Our simulations show that if this variable is flagged as read-only data, then a limited directory performs just as well for Weather as a full-map directory.
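The paper does not describe the mechanism used to flag the variable, so the following fragment is purely hypothetical; mark_read_only and compute_entry are invented names. It only illustrates the shape of the optimization: once a one-writer, many-reader structure is fully initialized, telling the memory system that it is read-only means its machine-wide worker-set never needs to be tracked or invalidated.

/* Hypothetical illustration of flagging initialize-once data as read-only.
 * mark_read_only() and compute_entry() are invented for this sketch; the
 * real annotation mechanism is not specified here. */

extern void   mark_read_only(void *addr, unsigned long bytes);
extern double compute_entry(int i);

#define TABLE_SIZE 1024

double shared_table[TABLE_SIZE];   /* written by one node, read by all */

void init_shared_table(void)
{
    for (int i = 0; i < TABLE_SIZE; i++)
        shared_table[i] = compute_entry(i);

    /* After this point the table is never written, so the coherence
     * protocol need not track its readers at all. */
    mark_read_only(shared_table, sizeof shared_table);
}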
However, it is easy for a programmer to forget to perform such optimizations, and there are some situations where it is very difficult to avoid this type of sharing. Figure 7 gives the execution times for Weather when this variable is not optimized. The vertical axis on the graph displays several coherence schemes, and the horizontal axis shows the program's total execution time (in millions of cycles). The results show that when the worker-set of a single location in memory is much larger than the size of a limited directory, the whole system suffers from hot-spot access to this location. So, limited directory protocols are extremely sensitive to the size of a heavily-shared data block's worker-set.

Figure 7: Limited and full-map directories. (Execution time in Mcycles for Weather under Dir1NB, Dir2NB, Dir4NB, and full-map directories.)

The effect of the unoptimized variable in Weather was not evident in previous evaluations of directory-based cache coherence [7], because the network model did not account for hot-spot behavior. Since the program can be optimized to eliminate the hot-spot, the new results do not contradict the conclusion of [7] that system-level enhancements make large-scale cache-coherent multiprocessors viable. Nevertheless, the experience with the Weather application reinforces the belief that complete-machine simulations are necessary to evaluate the implementation of cache coherence.

As shown in Figure 8, the LimitLESS protocol avoids the sensitivity displayed by limited directories. This figure compares the performance of a full-map directory, a four-pointer limited directory (Dir4NB), and the four-pointer LimitLESS (LimitLESS4) protocol with several values for the additional latency required by the LimitLESS protocol's software (Ts = 25, 50, 100, and 150). The execution times show that the LimitLESS protocol performs about as well as the full-map directory protocol, even in a situation where a limited directory protocol does not perform well. Furthermore, while the LimitLESS protocol's software should be as efficient as possible, the performance of the LimitLESS protocol is not strongly dependent on the latency of the full-map directory emulation. The current implementation of the LimitLESS software trap handlers in Alewife suggests Ts ≈ 50.

Figure 8: LimitLESS, 25 to 150 cycle emulation latencies. (Execution time in Mcycles for Weather under Dir4NB, LimitLESS with Ts = 150, 100, 50, and 25, and a full-map directory.)

It is interesting to note that the LimitLESS protocol, with a 25 cycle emulation latency, actually performs better than the full-map directory. This anomalous result is caused by the participation of the processor in the coherence scheme. By interrupting and slowing down certain processors, the LimitLESS protocol produces a slight back-off effect that reduces contention.

The number of pointers that a LimitLESS protocol implements in hardware interacts with the worker-set size of data structures. Figure 9 compares the performance of Weather with a full-map directory, a limited directory, and LimitLESS directories with 50 cycle emulation latency and one (LimitLESS1), two (LimitLESS2), and four (LimitLESS4) hardware pointers.
Figure 9: LimitLESS with 1, 2, and 4 hardware pointers. (Execution time in Mcycles for Weather under Dir4NB, LimitLESS1, LimitLESS2, LimitLESS4, and a full-map directory.)

The performance of the LimitLESS protocol degrades gracefully as the number of hardware pointers is reduced. The one-pointer LimitLESS protocol is especially bad, because some of Weather's variables have a worker-set that consists of two processors.

This behavior indicates that multiprocessor software running on a system with a LimitLESS protocol will require some of the optimizations that would be needed on a system with a limited directory protocol. However, the LimitLESS protocol is much less sensitive to programs that are not perfectly optimized. Moreover, the software optimizations used with a LimitLESS protocol should not be viewed as extra overhead caused by the protocol itself. Rather, these optimizations might be employed, regardless of the cache coherence mechanism, since they tend to reduce hot-spot contention and to increase communication locality.
6 Extensions to the Scheme

Using the interface described in Section 4, the LimitLESS protocol may be extended in several ways. The simplest extension uses the LimitLESS trap handler to gather statistics about shared memory locations. For example, the handler can record the worker-set of each variable that overflows its hardware directory. This information can be fed back to the programmer or compiler to help recognize and minimize the use of such variables. For studies of data sharing, locations can be placed in the Trap-Always directory mode and handled entirely in software. This scheme permits profiling of memory transactions to these locations without degrading performance of non-profiled locations.
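A profiling handler of this sort might look like the sketch below. Only the idea (log the requester, then fall back to the software full-map emulation) comes from the text above; the trap-frame layout and the helper routines (log_sharer, sw_dir_add_sharer, send_read_data) are invented for illustration.

/* Hypothetical profiling trap handler, invoked on a pointer-array overflow
 * or on any access to a Trap-Always location.  The helpers are assumed to
 * exist; they are not part of a published Alewife interface. */

#include <stdint.h>

typedef struct {
    uint64_t addr;        /* address of the block that trapped            */
    uint16_t requester;   /* node id of the requesting cache              */
} trap_frame_t;

extern void log_sharer(uint64_t addr, uint16_t node);        /* profiling log  */
extern void sw_dir_add_sharer(uint64_t addr, uint16_t node); /* software map   */
extern void send_read_data(uint64_t addr, uint16_t node);    /* reply to cache */

void limitless_profile_trap(const trap_frame_t *tf)
{
    /* Record the new member of this block's worker-set, so the trace can
     * later be fed back to the programmer or compiler. */
    log_sharer(tf->addr, tf->requester);

    /* Then behave like the ordinary LimitLESS handler: remember the sharer
     * in the software-extended directory and satisfy the request. */
    sw_dir_add_sharer(tf->addr, tf->requester);
    send_read_data(tf->addr, tf->requester);
}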
More interesting enhancements couple the LimitLESS protocol with the compiler and run-time systems to implement various special coherence, synchronization, and garbage-collection mechanisms. Coherence types specialized to various data types [5] can be synthesized using the Trap-Always and Trap-On-Write directory modes (defined in Section 4). For example, the LimitLESS trap handler can cause FIFO directory eviction for data structures known to migrate from processor to processor, or it can cause updates (rather than invalidates) when cached copies are modified. Synchronization objects, such as FIFO lock data types, can receive special handling; the trap handler can buffer write requests for a programmer-specified variable and grant the requests on a first-come, first-serve basis.
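A trap handler that synthesizes such specialized behavior could dispatch on a per-block coherence type, roughly as sketched below. The type tags and the messaging and queueing helpers are illustrative assumptions, not Alewife's actual software interface; the sketch only shows the dispatch structure implied by the examples above (update-on-write data and FIFO-granted lock variables).

/* Hypothetical write-trap dispatch that synthesizes specialized coherence
 * types from the Trap-On-Write directory mode.  All helper routines and
 * type tags are invented for this illustration. */

#include <stdint.h>

typedef enum {
    COHERENT_DEFAULT,   /* ordinary invalidate-based behavior            */
    COHERENT_UPDATE,    /* push new values to cached copies              */
    FIFO_LOCK           /* buffer writers and grant the lock in order    */
} block_type_t;

extern block_type_t block_type(uint64_t addr);
extern int  sharer_list(uint64_t addr, uint16_t *nodes);      /* returns count */
extern void send_update(uint64_t addr, uint16_t node, uint64_t value);
extern void send_invalidate(uint64_t addr, uint16_t node);
extern void enqueue_writer(uint64_t addr, uint16_t node);     /* FIFO buffer   */
extern void grant_next_writer(uint64_t addr);

void limitless_write_trap(uint64_t addr, uint16_t writer, uint64_t value)
{
    uint16_t nodes[64];
    int n = sharer_list(addr, nodes);

    switch (block_type(addr)) {
    case COHERENT_UPDATE:
        /* Update (rather than invalidate) every cached copy. */
        for (int i = 0; i < n; i++)
            send_update(addr, nodes[i], value);
        break;

    case FIFO_LOCK:
        /* Buffer write (lock-acquire) requests for this variable and
         * grant them on a first-come, first-serve basis. */
        enqueue_writer(addr, writer);
        grant_next_writer(addr);
        break;

    default:
        /* Ordinary behavior: invalidate all cached copies. */
        for (int i = 0; i < n; i++)
            send_invalidate(addr, nodes[i]);
        break;
    }
}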

The mechanisms implementing the LimitLESS directory protocol provide a set of generic interfaces that can be used for many different memory models. Judging by the number of synchronization and coherence mechanisms that have been defined by multiprocessor architects and programmers, it seems that there is no lack of uses for such a flexible coherence scheme.

7 Conclusion

This paper proposed a new scheme for cache coherence, called LimitLESS, which is being implemented in the Alewife machine. Hardware requirements include rapid trap handling and a flexible processor interface to the network. Preliminary simulation results indicate that the LimitLESS scheme approaches the performance of a full-map directory protocol with the memory efficiency of a limited directory protocol. Furthermore, the LimitLESS scheme provides a migration path toward a future in which cache coherence is handled entirely in software.

8 Acknowledgments

All of the members of the Alewife group at MIT helped develop and evaluate the ideas presented in this paper. Beng-Hong Lim wrote the prototype LimitLESS trap handlers. David Kranz wrote the Mul-T compiler. Beng-Hong Lim and Dan Nussbaum wrote the SPARCLE simulator, the run-time system, and the static multigrid application. Kirk Johnson supported the benchmarks. Kiyoshi Kurihara found the hot-spot variable in Weather. Gino Maa wrote the network simulator. G.N.S. Prasanna wrote the automatic matrix expression partitioner and analyzed Matexpr.

The notation for the transition state diagram borrows from the doctoral thesis of James Archibald at the University of Washington, Seattle, and from work done by Ingmar Vuong-Adlerberg at MIT.

Pat Teller and Allan Gottlieb of NYU provided the source code of the Weather and SIMPLE applications. Harold Stone and Kimming So helped us obtain the traces. The post-mortem scheduler was implemented by Mathews Cherian with Kimming So at IBM. It was extended by Kiyoshi Kurihara to include several other forms of barrier synchronization such as backoffs and software combining trees, and to incorporate feedback from the cache controller.

Machines used for simulations were donated by SUN Microsystems and Digital Equipment Corporation. The research reported in this paper is funded by DARPA contract # N00014-87-K-0825, and by grants from the Sloan Foundation and IBM.
References

[1] Sarita V. Adve and Mark D. Hill. Weak Ordering - A New Definition. In Proceedings 17th Annual International Symposium on Computer Architecture, June 1990.

[2] Anant Agarwal, Beng-Hong Lim, David Kranz, and John Kubiatowicz. APRIL: A Processor Architecture for Multiprocessing. In Proceedings 17th Annual International Symposium on Computer Architecture, June 1990.

[3] Anant Agarwal, Richard Simoni, John Hennessy, and Mark Horowitz. An Evaluation of Directory Schemes for Cache Coherence. In Proceedings of the 15th International Symposium on Computer Architecture, New York, June 1988. IEEE.

[4] James Archibald and Jean-Loup Baer. An Economical Solution to the Cache Coherence Problem. In Proceedings of the 12th International Symposium on Computer Architecture, pages 355-362, New York, June 1985. IEEE.

[5] John K. Bennett, John B. Carter, and Willy Zwaenepoel. Adaptive Software Cache Management for Distributed Shared Memory Architectures. In Proceedings 17th Annual International Symposium on Computer Architecture, June 1990.

[6] Lucien M. Censier and Paul Feautrier. A New Solution to Coherence Problems in Multicache Systems. IEEE Transactions on Computers, C-27(12):1112-1118, December 1978.

[7] David Chaiken, Craig Fields, Kiyoshi Kurihara, and Anant Agarwal. Directory-Based Cache-Coherence in Large-Scale Multiprocessors. IEEE Computer, June 1990.

[8] Mathews Cherian. A study of backoff barrier synchronization in shared-memory multiprocessors. Master's thesis, MIT, EECS Dept, May 1989.

[9] David R. Cheriton, Gert A. Slavenberg, and Patrick D. Boyle. Software-Controlled Caches in the VMP Multiprocessor. In Proceedings of the 13th Annual Symposium on Computer Architecture, pages 367-374, New York, June 1986. IEEE.

[10] William J. Dally. A VLSI Architecture for Concurrent Data Structures. Kluwer Academic Publishers, 1987.

[11] Michel Dubois, Christoph Scheurich, and Faye A. Briggs. Synchronization, Coherence, and Event Ordering in Multiprocessors. IEEE Computer, pages 9-21, February 1988.

[12] K. Gharachorloo, D. Lenoski, J. Laudon, P. Gibbons, A. Gupta, and J. Hennessy. Memory Consistency and Event Ordering in Scalable Shared-Memory Multiprocessors. In Proceedings 17th Annual International Symposium on Computer Architecture, June 1990.

[13] James R. Goodman. Using Cache Memory to Reduce Processor-Memory Traffic. In Proceedings of the 10th Annual Symposium on Computer Architecture, pages 124-131, New York, June 1983. IEEE.

[14] David V. James, Anthony T. Laundrie, Stein Gjessing, and Gurindar S. Sohi. Distributed-Directory Scheme: Scalable Coherent Interface. IEEE Computer, pages 74-77, June 1990.

[15] R. H. Katz, S. J. Eggers, D. A. Wood, C. L. Perkins, and R. G. Sheldon. Implementing a Cache Consistency Protocol. In Proceedings of the 12th International Symposium on Computer Architecture, pages 276-283, New York, June 1985. IEEE.

[16] D. Kranz, R. Halstead, and E. Mohr. Mul-T: A High-Performance Parallel Lisp. In Proceedings of SIGPLAN '89, Symposium on Programming Languages Design and Implementation, June 1989.

[17] Kiyoshi Kurihara, David Chaiken, and Anant Agarwal. Latency Tolerance in Large-Scale Multiprocessors. MIT VLSI Memo #90-626, October 1990. Submitted for publication.

[18] Leslie Lamport. How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs. IEEE Transactions on Computers, C-28(9), September 1979.

[19] Brian W. O'Krafka and A. Richard Newton. An Empirical Evaluation of Two Memory-Efficient Directory Methods. In Proceedings 17th Annual International Symposium on Computer Architecture, June 1990.

[20] Mark S. Papamarcos and Janak H. Patel. A Low-Overhead Coherence Solution for Multiprocessors with Private Cache Memories. In Proceedings of the 12th International Symposium on Computer Architecture, pages 348-354, New York, June 1985. IEEE.

[21] Charles L. Seitz. Concurrent VLSI Architectures. IEEE Transactions on Computers, C-33(12), December 1984.

[22] SPARC Architecture Manual, 1988. SUN Microsystems, Mountain View, California.

[23] C. K. Tang. Cache Design in the Tightly Coupled Multiprocessor System. In AFIPS Conference Proceedings, National Computer Conference, NY, NY, pages 749-753, June 1976.

[24] Charles P. Thacker and Lawrence C. Stewart. Firefly: a Multiprocessor Workstation. In Proceedings of ASPLOS II, pages 164-172, October 1987.

[25] Wolf-Dietrich Weber and Anoop Gupta. Analysis of Cache Invalidation Patterns in Multiprocessors. In Proceedings of ASPLOS III, pages 243-256, April 1989.