
CA1116756A - Cache memory command circuit - Google Patents

Cache memory command circuit

Info

Publication number
CA1116756A
CA1116756A CA000317779A
Authority
CA
Canada
Prior art keywords
memory
unit
data
cache
cache memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
CA000317779A
Other languages
French (fr)
Inventor
Charles P. Ryan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bull HN Information Systems Inc
Original Assignee
Honeywell Information Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honeywell Information Systems Inc filed Critical Honeywell Information Systems Inc
Application granted granted Critical
Publication of CA1116756A publication Critical patent/CA1116756A/en
Expired legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/16Handling requests for interconnection or transfer for access to memory bus
    • G06F13/1668Details of memory controller
    • G06F13/1673Details of memory controller using buffers

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Transfer Systems (AREA)
  • Image Input (AREA)

Abstract

Apparatus and method for providing a buffer stage, or cache memory command circuit, between a cache memory unit and a main memory unit. The transfer of data between a main memory unit and a cache memory unit can be complicated because the circuits utilized in the cache memory unit and/or the main memory unit in effectuating the data transfer can be pre-empted.
In addition, the data transfers must be executed in sequential order.
According to the present invention, the transfer of data is divided into two portions, a portion involving the cache memory unit and a portion involving the main memory unit along with associated interface units. The cache memory unit stores the data transfer commands and the associated data in sequential order. The cache memory unit and the main memory and interface units can execute their respective portions of the data transfer independently, permitting overlapped instruction execution. The cache command buffer ensures that the operations involving the two units of the data processing unit are executed in sequence. When a data transfer has been completed, the cache command circuit continues to the execution of the next data transfer in sequence.

Description

BACKGROUND OF THE INVENTION

Field of the Invention

This invention relates generally to a cache memory unit utilized by a data processing system and more particularly to a buffer stage between the cache memory and the main memory unit.

Description of the Prior Art

It is known in the prior art to utilize a cache memory unit to provide improved performance in a data processing unit. The performance of a data processing unit is determined, at least in part, by the time required to retrieve data from the system main memory unit. The period of time required to retrieve data from the main memory can be minimized by implementing these circuits in the technology currently providing the highest speed. Because of the increasing memory requirements of modern data processing systems, this partial solution can be unacceptably expensive. In addition, delays caused by the physical distance between the central processing unit and the main memory can be unacceptable.
Because of these and other considerations, it has been found that a cache memory unit, associated with the central processing unit, provides a satisfactory compromise for providing the central processing unit with the requisite data availability. The cache memory unit is a high speed memory of relatively modest proportions which is conveniently located in relation to the central processing unit. The contents of the cache memory are selected to be those for which there is a high probability that the central processing unit will have an immediate requirement. To the extent that the algorithms of the data processing system have transferred data required by the central processing unit from the main memory to the cache memory unit prior to the actual requirement by the central processing unit, the manipulation of data by the data processing system can be efficiently accomplished.
However, the transfer of the data from the main memory to the cache memory can be complicated. In the modern data processing system, an interface unit, which can be referred to as a system interface unit, can be interposed between the main memory and the central processing unit. The system interface unit is in effect a complex electronic switch controlling the interchange of data between the main memory (which may comprise several independent units), the central processing unit, and peripheral devices, which may be utilized in entering data into or retrieving data from the data processing unit. Thus the circuits in the system interface unit necessary to process the data transfer between the main memory and the cache memory may be unavailable, at least temporarily. Similarly, the central processing unit may have initiated activity in the cache memory unit which would render the cache memory temporarily incapable of participating in the data transfer.
In situations where two units or resources in a data processing system can be independently unavailable for data processing activity, such as a data transfer, it is known in the prior art to provide circuitry which interrupts the present activity of the required units, or which prohibits future activity of the two units according to predetermined priority considerations, thereby freeing the resources or units of the data processing system for execution of the data transfer. This type of resource reservation can impact the overall efficiency of the data processing system by delaying execution of certain data manipulations in favor of other types of manipulations.
It is also known in the prior art to provide circuitry to permit the partial execution of a data transfer, a storing of the data at an intermediate location, and then the completion of the execution at a later time, i.e., when the system resource becomes available. Thus, a buffering between the main memory unit and the cache memory unit can be accomplished, permitting the two units to operate in a generally independent manner. This type of data manipulation execution has the disadvantage that, after completion, the succeeding data transfers are again limited, prior to continuation of the sequence of data transfers, by the availability of each resource necessary to the completion of the data transfer.
It is therefore an object of the present invention to provide improved transfer of data between a main memory unit and a central processing unit of a data processing system.

It is a further object of the present invention to provide improved transfer of data between a main memory unit and a cache memory unit in a data processing system.
It is still a further object of the present invention to provide a buffer stage, associated with the cache memory unit, which controls the transfer of information between the main memory unit and the cache memory unit.
It is a more particular object of the present invention to provide a buffer stage between the cache memory and the system interface unit.
It is still another particular object of the present invention to provide a buffer stage associated with the cache memory which permits sequential execution of data transfer activity between the system interface unit and the central processing unit.
It is yet another object of the present invention to provide a buffer stage associated with the cache memory unit which permits sequential execution of data transfer instructions stored in the buffer stage while permitting execution of the activity involving the cache memory unit and the activity involving the system interface unit to be completed independently for the stored instructions.

SUMMARY OF THE INVENTION
The aforementioned and other objects are accomplished, according to the present invention, by a cache memory command buffer which includes a series of storage registers for storing read and write data transfer commands and associated data, apparatus for providing sequential execution of the portion of a stored instruction involving the system interface unit, apparatus for providing sequential execution of the portion of the stored instruction involving the cache memory unit, and apparatus for signaling the completion of a stored instruction.
The independent execution of the portion of the stored instruction involving the system interface unit and the portion of the instruction involving the cache memory permits overlapped instruction execution. In addition, the complete instruction will be executed in the sequential order received by the cache memory command buffer.
In accordance with the present invention there is provided, in association with a system interface unit and a cache memory unit of a data processing system, a cache memory command buffer unit for permitting overlapped data transfer of information signals comprising: a plurality of memory locations for storing said information signals being transferred to said system interface unit and to said cache memory; means coupled to said system interface unit, to said cache memory, and to said memory locations for storing said information signals into said memory locations; first means for extracting said information signals from said memory locations in a sequential order of storage in said memory locations for delivery to said cache memory unit of said data processing unit; and second means for extracting said information signals from said memory locations in said sequential storage order for delivery to said system interface unit of said data processing unit, wherein said first extracting means can operate independently of said second extracting means.
In accordance with the present invention there is also provided memory buffer apparatus for sequentially controlling transfers of data groups to a cache memory unit and to a main memory unit in a data processing unit, wherein the improvement comprises: cache data group storage apparatus for storing into a plurality of storage locations in response to first control signals from said data processing unit data groups to be entered in said cache memory unit received thereby; main memory data group storage apparatus for temporarily storing into said storage locations in response to second control signals from said data processing unit data groups to be entered in said main memory unit received thereby; apparatus coupled to said cache data group storage apparatus and to said main memory data group storage apparatus for storing said data groups in a sequential order; and apparatus coupled to said storage locations for transferring stored data groups in said sequential order to said cache memory unit and to said memory unit in response to third control signals from said data processing unit.
In accordance with the present invention there is also provided a cache memory command buffer for a data processing system temporarily storing data groups being transferred to a cache memory unit and to a main memory unit, comprising: a first plurality of memory locations coupled to said cache memory unit and to said main memory unit for storing said data groups received from said data processing system; a memory stack unit coupled to said first plurality of memory locations and to said data processing system; said memory stack unit including a second plurality of memory locations for storing first memory location addresses of said data groups stored in said first plurality of memory locations; said memory stack unit stores each of said first memory location addresses in one of said second memory locations, each memory location address being stored in a predetermined sequence; and apparatus coupled to said second memory location for sequentially addressing said second memory locations in response to control signals from said data processing unit, said apparatus addressing said one of said second memory locations to produce a data group transfer from said first memory location identified by said addressed second memory location, said data group transfer proceeding to said main memory and to said cache memory.
These and other features of the invention will be understood upon reading of the following description along with the drawings.


BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a schematic block diagram of a data processing system utilizing a cache memory unit.
Fig. 2 is a schematic diagram of the address format utilized by the data processing system as organized for use in the cache memory unit.
Fig. 3 is a schematic block diagram of the cache memory storage unit showing the general organizational structure.
Fig. 4 is a schematic diagram of the organization of the cache command circuit storage locations according to the preferred embodiment.
Fig. 5A is a schematic diagram of the apparatus controlling the operation of command circuit storage locations.
Fig. 5B is a schematic diagram of a possible stack memory configuration for the cache command buffer circuit according to the preferred embodiment.

DESCRIPTION OF THE PREFERRED EMBODIMENT

Detailed Description of the Figures

Referring now to Figure 1, the general organization of a data processing system utilizing a cache memory unit is shown. A central processing unit 50 is coupled to a cache memory unit 100 and to a system interface unit 60.
The system interface unit is coupled to memory unit 70. The central processing unit 50, the memory unit 70, and the system interface unit 60 can be comprised of a plurality of individual units, all appropriately coupled and controlled for accurate execution of signal manipulation.
Referring next to Figure 2, the format of a data address, comprised of 24 binary bits of data, utilized by a data processing system is shown. The first 15 most significant bits identify a page address of data. Each page address of data is comprised of 512 data words. In the present embodiment each word is composed of 40 binary data bits, this number being a matter of design choice. Of the 512 data words identified by the remaining 9 binary bits of each data page, the next 7 binary bits of data are associated with a location of groups of memory storage cells in the cache memory and are a location address in the cache memory. That is, there are 128 memory locations in the cache memory, and each location is identified with a combination of binary bits in the second most significant bit assemblage. The least significant bits of the address format, in the present embodiment, are not utilized in identifying a word address in the cache memory unit. For efficient exchange of data between the cache memory unit and the memory unit, a block of four data words is transferred with each data transfer operation.
Because the data transfer occurs in blocks, there is no need to utilize the least significant bits in identifying the transferred information to the main memory. The four words comprising the block will, in normal data transfer, always be present in any event. In the illustration in Fig. 2, the address format begins at bit position zero. However, this is a matter of design choice and other address formats can be utilized. Similarly, the address format can contain additional information, such as parity or status designations, when the address format is a larger (i.e., more than 24 bits) group of binary data bits.
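As a rough illustration of the 15/7/2 field split just described, the following Python sketch decomposes a 24-bit address into its page, location, and word fields; the helper name and the example value are illustrative assumptions and do not appear in the patent.

```python
# Hypothetical sketch of the 24-bit address split described above.
PAGE_BITS, LOCATION_BITS, WORD_BITS = 15, 7, 2          # 15 + 7 + 2 = 24

def split_address(addr24: int) -> tuple[int, int, int]:
    """Return the (page, location, word) fields of a 24-bit main memory address."""
    word = addr24 & ((1 << WORD_BITS) - 1)                          # word within the 4-word block
    location = (addr24 >> WORD_BITS) & ((1 << LOCATION_BITS) - 1)   # one of 128 cache locations
    page = addr24 >> (WORD_BITS + LOCATION_BITS)                    # 15-bit page address
    return page, location, word

# Block transfers move all four words of a block at once, so the word field is
# not needed when identifying the transfer to the main memory.
page, location, word = split_address(0b101010101010101_0110011_10)
assert 0 <= location < 128
```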


Referring next to Fig. 3, a schematic block diagram of the principal components of a cache memory unit of a data processing system is shown. The data signals in the cache memory unit are stored in cache memory storage unit 101. This memory is comprised of random access memory devices in which data signals can be both read or stored into addressed memory cells and extracted from addressed memory cells. The organization of the cache memory storage unit 101 is such that there are 128 locations, LOCATION 0 through LOCATION 127. For each location, there are four groups, or blocks, of memory cells labelled BLOCK 0 through BLOCK 3. Each of the four blocks can contain four memory words labelled WORD 0 through WORD 3. Four data words from a selected block of a selected location in the memory storage unit 101 can be applied to the instruction buffer circuit 300 for subsequent transfer to the data processing unit. Data signals are entered into the storage unit 101 by a data register 140, which is under the control of the cache memory control circuits 200. The cache memory control circuits 200 also control the address register 130. Address register 130 is coupled to the cache memory storage unit 101, the cache memory directory 102, and the cache memory directory control circuits 150. The cache memory directory 102 is divided into four blocks; each block contains 128 storage cells and is structured in a manner similar to the storage unit 101, without, however, the additional WORD structure. The cache memory directory is also comprised of random access memory circuits.
The contents of the blocks of an addressed location in the memory directory 102 are applied respectively to four comparison networks 111 through 114.
The output signals of the comparison networks are applied to the data status decision network 120. The output signals of the data status decision network 120 can be applied to the four blocks of storage cells in the cache memory storage unit and to the four blocks of storage cells located in the cache memory directory in order to activate the block receiving the appropriate signals. The output signals of data status decision network 120 are also applied to the cache memory directory control circuits 150. The address register 130 is also coupled to the four blocks of memory cells of the cache memory directory 102 and to the comparison networks 111 through 114. The cache memory directory control circuits 150 are divided into a directory control register and directory control circuits.
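For readers who find a concrete model helpful, here is a minimal Python sketch of the Fig. 3 organization, assuming the dimensions stated above (128 locations, four blocks per location, four words per block); the variable and function names are invented for illustration and are not part of the patent.

```python
# Hypothetical sketch of the cache storage and directory arrays of Fig. 3.
LOCATIONS, BLOCKS, WORDS_PER_BLOCK = 128, 4, 4

# cache memory storage unit 101: storage[location][block][word] -> one 40-bit data word
storage = [[[0] * WORDS_PER_BLOCK for _ in range(BLOCKS)]
           for _ in range(LOCATIONS)]

# cache memory directory 102: directory[location][block] -> 15-bit page tag (None if unused)
directory = [[None] * BLOCKS for _ in range(LOCATIONS)]

# Comparison networks 111 through 114 compare the page field of address
# register 130 against the four tags held at one location, in parallel.
def compare_tags(location: int, page: int):
    return [directory[location][block] == page for block in range(BLOCKS)]
```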
Referring to Fig. 4, the cache memory control circuits include two buffer register units, a four-register read buffer memory unit 220 and a four-register write buffer memory unit 230. The memory units can store data in an addressed location and can deliver signals to two sets of output terminals from two independently addressed memory locations.
The stack sequence control logic 210 is coupled to both memory unit 220 and memory unit 230. Each buffer memory receives from the central processing unit address/data and command signals in response to signals from the stack sequence control logic and stores these signals in address locations determined by the control logic. The output signals of either buffer memory unit, in response to other signals from the stack sequence control unit 210, can be applied to the cache circuits and/or to the system interface unit circuits, depending on how the memory units are addressed. The stack sequence control logic 210 receives signals from the system interface unit and signals from the cache memory unit. The stack sequence control logic issues status signals for utilization by the data processing unit.
Referring next to Figure 5A, the stack sequence control logic 210 is shown. The control logic includes an 8-address, 3-position memory stack 211, in which one group of data can be entered into an addressed location and two groups of memory stack signals can be simultaneously and independently extracted from addressed locations. One group of memory signals from stack 211 is coupled to first enable address apparatus for read buffer memory 220 and write buffer memory 230, while a second group of memory signals is coupled to second enable address apparatus associated with read buffer memory 220 and write buffer memory 230. The output signals of counter 213 enable a data write for stack 211 at the addressed location. Output signals of counter 214 enable a first group of memory signals from stack 211 and output signals of counter 215 enable a second group of memory signals from stack 211. Counter 214 has signals from the cache unit applied thereto, while counter 215 has signals from the system interface unit applied thereto.

Address decision network 212 receives signals from buffer storage memories 220 and 230 and applies address signals to stack memory 211 and status signals to portions of the data processing system. Address decision network 212 receives signals from counter 213, counter 214, counter 215 and counter 216. Counter 216 has signals applied thereto from address decision network 212, counter 214 and counter 215, and applies signals to write buffer storage memory 230.
Fig. 5B illustrates the format in which data is stored in stack 211 and further illustrates the use of pointers for the stack.
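A compact sketch of the Fig. 5A elements may help keep the reference numerals straight; the mapping of counters to Python variables below is an illustrative assumption, not the patent's circuitry.

```python
# Hypothetical sketch of the stack sequence control elements of Fig. 5A.
STACK_DEPTH = 8                 # 8-address, 3-position memory stack 211

stack = [None] * STACK_DEPTH    # each entry: ("write",) or ("read", read_buffer_address)
in_ptr = 0                      # counter 213: next stack location to be written
cache_ptr = 0                   # counter 214: next entry whose cache-unit portion runs
siu_ptr = 0                     # counter 215: next entry whose system-interface-unit portion runs
write_ptr = 0                   # counter 216: steps through multi-location write commands

def advance(pointer: int) -> int:
    """All pointers wrap around the eight-entry stack."""
    return (pointer + 1) % STACK_DEPTH
```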

Operation of the Preferred Embodiment

The basic use of a cache memory unit is to make available to the central processing unit data stored in the main memory unit without the wait normally associated with retrieval of the memory unit data. The cache memory is therefore a high speed memory which contains data required with some immediacy by the central processing unit for uninterrupted operation.
As shown in Fig. 1, the cache memory is electrically coupled to a central processing unit and to the system interface unit. Similarly, the central processing unit can be coupled directly to the system interface unit in certain data processing systems. The actual utilization of the electrical paths coupling the system components is dependent on the method of operation; for example, in some data processing systems data can be delivered directly to the central processing unit in certain circumstances. In other systems, the data required by the central processing unit must always be delivered to the cache memory unit before being transferred to the central processing unit. As will be clear to those skilled in the art, there are a variety of methods by which the data processing unit can utilize the cache memory for more effective operation.
In the preferred embodiment, an address format of the form shown in Fig. 2 is utilized for defining an address in the main memory unit. The most significant (15) bits indicate a page address, the second most significant (7) bits indicate a location address, while the 2 least significant bits in conjunction with the other 22 bits identify a specific word or group of data signals stored in main memory. In the preferred embodiment, the least significant bits are not used by the main memory unit in normal operation.
In the typical data transfer, four data groups or words are transferred with the issuance of one instruction. Thus, after the central processing unit has developed the main memory address, only the 22 most significant bits are utilized and all of the four words thereby identified are transferred.
After the central processing unit has developed the address of the required data in main memory, that main memory address is delivered to the cache memory control circuits 200 and entered in address register 130.
At this point the cache memory control circuits 200 begin a directory search cycle. The directory search cycle searches for the address of the data requested by the central processing unit in the cache memory unit.
The main memory address is entered in address register 130, and the most significant 15 bits, the page address portion of the address, are applied to the four comparison networks 111 - 114.
Simultaneously, the 7 bits of the location address portion of the main memory address are applied to the related one of the 128 locations in the cache memory storage unit, the cache memory directory 102 and the cache memory directory control register of the directory control circuits. The location address enables circuits containing four blocks of data in the cache directory and the directory contents are applied to comparison circuits 111 - 114. The contents of the 4 blocks of the cache directory are 15-bit page main memory addresses. Thus, when the page address portion of the main memory address in the address register is found in one of the four blocks of the cache directory, a "hit" signal is applied to the data status decision network 120. The "hit" signal indicates that the desired data is stored in the related block of the same location address in the memory storage unit.
The location address portion of address register 130, when applied to the directory control circuits 150, enables the register cell storing status signals and applies these status signals to the decision network 120. In the preferred embodiment, the types of status signals utilized are as follows: 1) a full/empty indicator which is a positive signal when valid data is stored in the corresponding cache memory storage unit; 2) a pending bit indicator which is positive when data is in the process of being transferred from main memory to the cache memory storage unit so that the page address has already been entered in the cache memory directory; and 3) a failing block indicator which is positive when the related one of the four blocks of memory storage cells has been identified as producing errors in data stored therein.
Assuming that the status signals are appropriate when a "hit" is determined by the data status decision network, then the valid data is in the cache memory storage unit. The location address of address register 130 has enabled four blocks of data (each containing 4 words), related to the location address, in the cache memory directory. The "hit" in the page address of one of the four blocks of the cache memory directory indicates that the four data words are located in the related block of the cache memory data storage unit. The data status decision network applies a signal to the appropriate block of the storage unit. The four required data words are deposited in the instruction buffer and are retrieved by the central processing unit.
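The directory search cycle just described can be summarized in a short, hedged sketch; the function below uses the same array layout as the sketch given after Fig. 3 (passed in as arguments) and the three status indicators listed above, with names chosen for illustration only.

```python
# Hypothetical sketch of the directory search cycle.
from dataclasses import dataclass

@dataclass
class BlockStatus:
    full: bool = False      # full/empty indicator: valid data present in the storage block
    pending: bool = False   # pending bit: transfer from main memory still in progress
    failing: bool = False   # failing block: the block has produced data errors

def directory_search(page, location, directory, status, storage):
    """Return the four words of the hit block, or None on a miss.

    directory[location][block] holds 15-bit page tags, status[location][block]
    holds a BlockStatus, and storage[location][block] holds four data words.
    """
    for block in range(4):                                       # comparison networks 111 - 114
        if directory[location][block] == page:                   # a "hit" on the page address
            st = status[location][block]
            if st.full and not st.pending and not st.failing:    # data status decision network 120
                return storage[location][block]                  # four words to the instruction buffer
            return None                                          # hit, but the data cannot be used yet
    return None                                                  # miss: fetch via the system interface unit
```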
The operation of the cache memory command buffer circuit can be understood as follows. In response to signals from the central processing unit, the stack sequence control logic 210 determines an address in the buffer memory unit 220 or in the buffer memory unit 230. The stack sequence control logic then enables the storing, at the determined address, of address/data signals and command signals from the central processing unit.
When the central processing unit signals a read operation, then the signals are stored in read buffer 220, and when a write operation is signaled by the central processing unit, then the signals are stored in write buffer 230.
In the preferred embodiment, the read buffer has four possible locations and the write buffer has four locations, but only three are utilized. It can be necessary to execute certain classes of write commands in the preferred embodiment which require three data group locations for complete specification. Therefore, in the cache memory command buffer locations there are a total of five possible operations which can be identified in the locations at one time: four read operations and a write operation.

It will be clear to those skilled in the art that for each operation identified in the cache command buffer memory locations, manipulations involving four sets of apparatus are understood in each case. For example, the data requested by the central processing unit can be in main memory and the cache memory or in main memory alone. A command can involve the search in cache memory for a given set of data and/or the extraction from main memory via the system interface unit of that data if unavailable.
Because the system interface unit and/or the cache memory can be busy with operations involving a higher priority, it is advantageous for the operation in the system interface unit or in the cache to proceed independently of the availability of the other component involved in the transfer. For example, a write operation involves both the cache unit and the system interface unit portions of the data processing system. It is necessary that the commands be executed in sequence in order to avoid generation of erroneous data, and in addition that the portions of the command involving the cache unit or the system interface unit be individually performed in sequence.
Therefore, the stack sequence control logic provides pointer signals controlling the sequential operation of a series of commands, pointer signals controlling the sequential execution of the portion of the command involving the cache unit, and pointer signals controlling the sequential execution of the portion of the command involving the system interface unit. The pointer signals, in each case, are applied to the memory stack by counters.
To store data in the command buffer memories, the address decision network, in response to signals from the read and write buffers, determines the address of the next available location in the buffer. This apparatus signals the availability of a command buffer memory location to the central processing unit. When the address decision network signals to the central processing unit that a command buffer memory location is free, i.e., there is no write operation present and/or there are less than four read operations stored in the command buffer memory, the counter 213 will provide the in pointer signals which enable signals to be entered in the stack memory in the next sequential location addressed by the counter. Upon receipt of address/data and command signals from the central processing unit, the address decision network will enter into stack memory 211 the command buffer memory address into which the signals are to be stored. If a write operation is to be entered, a positive signal is entered in the first (of three) position of the stack memory. If a read operation is to be entered, the logical address of the next empty location in the command read buffer is entered in the last two positions of the stack memory location. The address entered in the stack memory activates the corresponding buffer memory locations so that address/data signals and command signals are entered in the location identified by the stack memory. After the signals are entered in the buffer memory, and if the stack memory is not filled, the counter 213 is incremented and the in pointer identifies and can enable the next location in the stack memory.
The cache pointer signals are generated by counter 214 and the system interface unit pointers are generated by counter 215. When the counter 214 receives a cache signal indicating that the cache unit is ready to execute a command, the output signals from counter 214 are activated and the location addressed in the stack memory is enabled. When the location in the stack memory is enabled, the output signals of the stack memory associated with the cache operation activate the associated address in the command buffer memory units. The address/data and the command signals are thereby activated, and these signals are applied to appropriate portions of the cache unit and the operation is executed. At the completion of the execution, the counter 214 increments to a value indicating the next sequential location and waits until enabled by an appropriate signal from the cache unit. However, the address decision network includes the logical apparatus for preventing the cache pointer (counter 214) from advancing beyond the position in the stack memory indicated by counter 213.
The system interface unit pointer from counter 215 operates in an analogous manner to execute sequentially the commands delivered from the command memory units which control operation of the system interface unit.
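The interplay of the in pointer, the cache pointer, and the system interface unit pointer can be illustrated with the following hedged Python sketch; the class name, the slot-allocation policy, and the omission of entry retirement are simplifying assumptions rather than details of the patent.

```python
# Hypothetical behavioral sketch of the cache memory command buffer.
class CacheCommandBuffer:
    READ_SLOTS = 4                          # read buffer memory 220

    def __init__(self):
        self.stack = []                     # memory stack 211: commands in issue order
        self.read_buffer = {}               # slot -> (address, command)
        self.write_buffer = None            # write buffer memory 230 (one write at a time)
        self.cache_done = 0                 # entries whose cache-unit portion has run
        self.siu_done = 0                   # entries whose system-interface-unit portion has run

    def enqueue_read(self, address, command):
        free = [s for s in range(self.READ_SLOTS) if s not in self.read_buffer]
        if not free:
            raise RuntimeError("no free read buffer location")    # CPU must wait
        slot = free[0]                      # stands in for the address decision network's choice
        self.read_buffer[slot] = (address, command)
        self.stack.append(("read", slot))   # in pointer (counter 213) records issue order

    def enqueue_write(self, address, command):
        if self.write_buffer is not None:
            raise RuntimeError("write buffer occupied")           # only one write may be pending
        self.write_buffer = (address, command)
        self.stack.append(("write",))

    def step_cache_side(self):
        """Run the cache-unit portion of the next command, in issue order."""
        if self.cache_done >= len(self.stack):   # never pass the in pointer
            return None
        entry = self.stack[self.cache_done]      # cache pointer (counter 214)
        self.cache_done += 1
        return entry

    def step_siu_side(self):
        """Run the system-interface-unit portion independently, in the same order."""
        if self.siu_done >= len(self.stack):
            return None
        entry = self.stack[self.siu_done]        # system interface unit pointer (counter 215)
        self.siu_done += 1
        return entry
```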


The write buffer memory 230 has a write buffer pointer provided by counter 216 which controls the sequential operation of the contents of the write buffer memory. When the write command stored in the write buffer memory has more than one location associated therewith, the write buffer pointer activates the locations in correct sequential order.
Fig. 5B illustrates schematically a potential configuration of the stack memory. The first location is empty; the second location has a read operation for read buffer memory location 00. The cache pointer is shown addressing that memory location. The next stored memory location contains a read operation located at address 01 in the read buffer memory. The cache pointer will increment to this address when the current operation involving the cache unit is complete. The fourth stack memory location indicates a read operation at address 10 in the read buffer memory and the system interface unit pointer is enabling this stack memory location. The fifth stack memory address contains a write operation. Because only one write operation can be stored in the buffer memory in the preferred embodiment, and one group of locations is always utilized for the write operation, no further address is necessary. The system interface unit pointer will enable this stack memory location next. The sixth stack memory location identifies a read operation of read buffer memory address 11. The in pointer remains at this location in the stack memory until the operation identified in the second stack location is complete. Then the in pointer will increment to the seventh stack memory location, enabling a writing of address/data and command signals in this address. This illustration suggests that the utilization of the read buffer memory locations is controlled by a sequential or round-robin algorithm in the address decision network. It will be clear, however, that another algorithm could be utilized.
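Using the CacheCommandBuffer sketch above, the Fig. 5B snapshot can be approximated as follows; the addresses are hypothetical, and the already-completed first stack entry and the recycling of freed slots are not modelled.

```python
# Approximate the Fig. 5B snapshot: four reads and one write queued, with the
# system interface unit pointer two entries ahead of the cache pointer.
buf = CacheCommandBuffer()
for addr in (0x0100, 0x0104, 0x0108):       # reads for read buffer slots 00, 01, 10
    buf.enqueue_read(addr, "read-block")
buf.enqueue_write(0x0200, "write-block")    # the single pending write
buf.enqueue_read(0x010C, "read-block")      # read for read buffer slot 11

buf.step_siu_side(); buf.step_siu_side()    # SIU pointer now at the read for slot 10
# The cache pointer still addresses the read for slot 00, as in Fig. 5B.
```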
Utilizing the apparatus of the preferred embodiment, it is possible to provide sequential and overlapped execution of a plurality of operations involving both the cache unit and the system interface unit. In addition, the cache unit portions of the command execution can be operated in sequence independent of the sequential execution of the command in the system interface unit of the data processing system. In a normal read operation, the apparatus of the preferred embodiment would not permit extraction by the system interface unit of data from the main memory until a determination had been made that the data was not available in the cache storage units.
Similarly, when the data is available in the cache storage units, the operation involving the system interface unit is aborted. However, the write command can be executed independently in the system interface unit and the cache memory unit, and certain read commands, such as a read command which invalidates data in the cache storage unit while obtaining data from main memory via the system interface unit, can be executed independently.
The above description is included to illustrate the operation of the preferred embodiment and is not meant to limit the scope of the invention.
The scope of the invention is to be limited only by the following claims.
From the above discussion, many variations will be apparent to one skilled in the art that would yet be encompassed by the spirit and scope of the invention.
What is claimed is:

Claims (4)

THE EMBODIMENTS OF THE INVENTION IN WHICH AN EXCLUSIVE
PROPERTY OR PRIVILEGE IS CLAIMED ARE DEFINED AS FOLLOWS:
1. In association with a system interface unit and a cache memory unit of a data processing system, a cache memory command buffer unit for permitting overlapped data transfer of information signals comprising: a plurality of memory locations for storing said information signals being transferred to said system interface unit and to said cache memory; means coupled to said system interface unit, to said cache memory, and to said memory locations for storing said information signals into said memory locations; first means for extracting said information signals from said memory locations in a sequential order of storage in said memory locations for delivery to said cache memory unit of said data processing unit; and second means for extracting said information signals from said memory locations in said sequential storage order for delivery to said system interface unit of said data processing unit, wherein said first extracting means can operate independently of said second extracting means.
2. The cache memory command buffer unit of claim 1, wherein said information signals stored include memory read signals and memory write signals.
3. Memory buffer apparatus for sequentially controlling transfers of data groups to a cache memory unit and to a main memory unit in a data processing unit, wherein the improvement comprises: cache data group storage apparatus for storing into a plurality of storage locations in response to first control signals from said data processing unit data groups to be entered in said cache memory unit received thereby; main memory data group storage apparatus for temporarily storing into said storage locations in response to second control signals from said data processing unit data groups to be entered in said main memory unit received thereby; apparatus coupled to said cache data group storage apparatus and to said main memory data group storage apparatus for storing said data groups in a sequential order; and apparatus coupled to said storage locations for transferring stored data groups in said sequential order to said cache memory unit and to said memory unit in response to third control signals from said data processing unit.
4. A cache memory command buffer for a data processing system temporarily storing data groups being transferred to a cache memory unit and to a main memory unit, comprising: a first plurality of memory locations coupled to said cache memory unit and to said main memory unit for storing said data groups received from said data processing system; a memory stack unit coupled to said first plurality of memory locations and to said data processing system; said memory stack unit including a second plurality of memory locations for storing first memory location addresses of said data groups stored in said first plurality of memory locations; said memory stack unit stores each of said first memory location addresses in one of said second memory locations, each memory location address being stored in a predetermined sequence; and apparatus coupled to said second memory location for sequentially addressing said second memory locations in response to control signals from said data processing unit, said apparatus addressing said one of said second memory locations to produce a data group transfer from said first memory location identified by said addressed second memory location, said data group transfer proceeding to said main memory and to said cache memory.
CA000317779A 1977-12-16 1978-12-12 Cache memory command circuit Expired CA1116756A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US86122877A 1977-12-16 1977-12-16
US861,228 1977-12-16

Publications (1)

Publication Number Publication Date
CA1116756A true CA1116756A (en) 1982-01-19

Family

ID=25335228

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000317779A Expired CA1116756A (en) 1977-12-16 1978-12-12 Cache memory command circuit

Country Status (6)

Country Link
JP (1) JPS5489532A (en)
AU (1) AU521383B2 (en)
CA (1) CA1116756A (en)
DE (1) DE2854286A1 (en)
FR (1) FR2412139B1 (en)
GB (1) GB2010547B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU529675B2 (en) * 1977-12-07 1983-06-16 Honeywell Information Systems Incorp. Cache memory unit
US4225922A (en) * 1978-12-11 1980-09-30 Honeywell Information Systems Inc. Command queue apparatus included within a cache unit for facilitating command sequencing
US4345309A (en) * 1980-01-28 1982-08-17 Digital Equipment Corporation Relating to cached multiprocessor system with pipeline timing
US4370710A (en) * 1980-08-26 1983-01-25 Control Data Corporation Cache memory organization utilizing miss information holding registers to prevent lockup from cache misses
JPS59136859A (en) 1983-01-27 1984-08-06 Nec Corp Buffer controller
JPH0337955U (en) * 1989-08-24 1991-04-12

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR111566A (en) * 1974-10-04

Also Published As

Publication number Publication date
GB2010547B (en) 1982-05-19
FR2412139A1 (en) 1979-07-13
DE2854286C2 (en) 1989-10-12
GB2010547A (en) 1979-06-27
AU4242878A (en) 1979-06-21
AU521383B2 (en) 1982-04-01
JPS5489532A (en) 1979-07-16
JPS6148745B2 (en) 1986-10-25
FR2412139B1 (en) 1986-05-09
DE2854286A1 (en) 1979-06-28

Similar Documents

Publication Publication Date Title
US4354232A (en) Cache memory command buffer circuit
US4521850A (en) Instruction buffer associated with a cache memory unit
AU598857B2 (en) Move-out queue buffer
US5530829A (en) Track and record mode caching scheme for a storage system employing a scatter index table with pointer and a track directory
US5530897A (en) System for dynamic association of a variable number of device addresses with input/output devices to allow increased concurrent requests for access to the input/output devices
US4530052A (en) Apparatus and method for a data processing unit sharing a plurality of operating systems
US5301279A (en) Apparatus for conditioning priority arbitration
EP0095033B1 (en) Set associative sector cache
KR920005852B1 (en) Apparatus and method for providing a synthetic descriptor in a data processing system
US4525777A (en) Split-cycle cache system with SCU controlled cache clearing during cache store access period
US3670307A (en) Interstorage transfer mechanism
US4167782A (en) Continuous updating of cache store
EP0071719A2 (en) Data processing apparatus including a paging storage subsystem
EP0292501B1 (en) Apparatus and method for providing a cache memory unit with a write operation utilizing two system clock cycles
US4138720A (en) Time-shared, multi-phase memory accessing system
US3911401A (en) Hierarchial memory/storage system for an electronic computer
US4174537A (en) Time-shared, multi-phase memory accessing system having automatically updatable error logging means
US4371949A (en) Time-shared, multi-phase memory accessing system having automatically updatable error logging means
JPH0457026B2 (en)
CA1116756A (en) Cache memory command circuit
US6973557B2 (en) Apparatus and method for dual access to a banked and pipelined data cache memory unit
EP0386719A2 (en) Partial store control circuit
US4388687A (en) Memory unit
EP0107448A2 (en) Computer with multiple operating systems
EP0473804A1 (en) Alignment of line elements for memory to cache data transfer

Legal Events

Date Code Title Description
MKEX Expiry