CN1119749C - Cache enabling architecture - Google Patents
- Publication number
- CN1119749C, CN98118871A
- Authority
- CN
- China
- Prior art keywords
- read
- write
- data bus
- speed buffer
- buffer processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/42—Bus transfer protocol, e.g. handshake; Synchronisation
- G06F13/4282—Bus transfer protocol, e.g. handshake; Synchronisation on a serial bus, e.g. I2C bus, SPI bus
- G06F13/4286—Bus transfer protocol, e.g. handshake; Synchronisation on a serial bus, e.g. I2C bus, SPI bus using a handshaking protocol, e.g. RS232C link
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0873—Mapping of cache memory to specific storage devices or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
- G06F3/0674—Disk device
- G06F3/0676—Magnetic disk device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2213/00—Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F2213/0012—High speed serial bus, e.g. IEEE P1394
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
A cache enabling architecture in which an optical storage reading and/or writing device, a caching processor and a mass writing and reading device are each connected to a data bus. The optical storage reading and/or writing device exchanges information directly with the caching processor over the data bus. The caching processor uses the mass writing and reading device as cache memory.
Description
Technical field
The present invention relates to a cache enabling architecture in which information output and/or input by a memory reading and/or writing device can be cached. Such a cache enabling architecture can be realised, for example, in a computer system to which the memory reading and/or writing device is connected. In general, the connection is made over a data bus.
Background art
Caching information from a storage device is a known technique. In particular, many solutions are known for caching random access memory (RAM), hard disk drives and other mass storage devices. Such storage devices are commonly used in computers or together with computers. Caching a storage device essentially means providing a faster memory in which information can be accessed more efficiently than in the storage device itself, and copying selected information from the storage device to this faster memory, or vice versa. The selected information may, for example, be the information most likely to be needed or needed most frequently. The selection of the information to be copied, and its identification within the information contained in the storage device (or in the faster memory), is carried out by a caching processor. The caching processor may, for example, be a software program running on a computer. Caching thus improves the overall performance of an information handling system, for example a microprocessor processing information stored in RAM, or a computer processing information stored in a mass storage peripheral.
Computers are generally used with storage peripherals such as magnetic and/or optical storage devices. These storage devices are connected, directly or indirectly, to a data bus. A microprocessor handles the exchange of information over the data bus between the devices connected to it. The performance of each storage device, expressed for example as the access time to the information stored in it, varies with the nature of the device. The performance of a magnetic hard disk drive, for instance, is significantly better than that of an optical disc device. It is known to cache an optical disc device by using a magnetic disk drive as the faster memory.
In one known implementation of such caching, the caching processor caches by using a direct connection to exchange information between the optical disc device and the hard disk drive. This direct connection is necessary because, short of involving the microprocessor, there is no other way to exchange information between the optical disc device and the magnetic hard disk device, and involving the microprocessor significantly reduces the speed of the computer. On the other hand, the direct connection is a piece of hardware that is not part of a standard computer and may therefore increase the production cost of a computer equipped with such storage peripherals.
Recent computer hardware includes data buses on which two peripherals can exchange data without significantly disturbing the other peripherals connected to the same bus. This means that the microprocessor, also called the central processing unit, can carry out other tasks while the two peripherals exchange information; for example, it can process data stored in RAM. Such a data bus may, for example, be based on the IEEE 1394 bus.
Summary of the invention
An object of the present invention is to provide a solution that allows an optical storage peripheral to be cached by another storage peripheral without requiring a dedicated direct connection between the two peripherals. The solution should make use of existing computer hardware as far as possible.
According to the present invention, a solution to the above problem is a cache enabling device for caching information output and/or input by an optical memory reading and/or writing device. The cache enabling device comprises at least one mass writing and reading device, a data bus and a caching processor, the mass writing and reading device being based on a magnetic hard disk drive. The mass writing and reading device and the caching processor are connected in parallel to the data bus, and instructions from devices other than the optical memory reading and/or writing device also reach the mass writing and reading device over the data bus. The caching processor caches the information by using the mass writing and reading device, and is directly connected to the mass writing and reading device. The output and/or input of the optical memory reading and/or writing device and the caching processor are connected by the data bus, so that information is exchanged directly between said output and/or input and the caching processor.
According to the present invention, another solution to the above problem is a magnetic hard disk drive for use in a computer system. The computer system comprises at least one central processing unit, an optical memory reading and/or writing device and a data bus, the central processing unit and the optical memory reading and/or writing device being connected, directly or indirectly, to the data bus. The magnetic hard disk drive further comprises a connecting circuit and a caching processor. The connecting circuit connects the magnetic hard disk drive, in parallel with the caching processor, to the data bus. The caching processor receives from the data bus requests to read and/or write information intended for the optical memory reading and/or writing device, and exchanges information between the magnetic hard disk drive and the optical memory reading and/or writing device over the data bus, thereby caching the optical memory reading and/or writing device.
Brief Description Of Drawings
Other objects and features of the invention will become apparent from the following description of embodiments, given with reference to Fig. 1 of the accompanying drawing.
Fig. 1 is a schematic diagram of the cache enabling architecture.
Embodiments
The embodiments described are not restrictive, and those skilled in the art may devise other embodiments within the scope of the present invention.
Fig. 1 shows a data bus 1 which may be part of a computer (not shown). The data bus 1 may, for example, be a bus based on IEEE 1394. The IEEE 1394 bus is a high-speed serial bus that allows digital data to be transferred. Moreover, IEEE 1394 allows the devices connected to the bus to communicate directly with one another and to exchange data among themselves.
An optical memory reading and/or writing device 2 is connected to the data bus 1 by an output and/or input connecting circuit 22. The optical memory reading and/or writing device 2 may, for example, be a CD-ROM, DVD-ROM/RAM or CD-RW (rewritable) drive, i.e. a drive in which data are read and/or written optically or magneto-optically. Optical disc drives provide a relatively cheap way of accessing and/or storing large amounts of information.
A mass writing and reading device 3 is connected to the data bus 1 by a connection 4. The mass writing and reading device 3 may, for example, be a magnetic hard disk drive. Magnetic hard disk drives offer a favourable price/performance ratio and are therefore used in most computers.
A caching processor 5 is connected to the mass writing and reading device 3 by a connection 6, and to the data bus 1 by the connection 4.
The performance of the mass writing and reading device 3, generally expressed in terms of access time and transfer rate, is better than the performance of the optical memory reading and/or writing device 2. The caching processor 5 exchanges information directly with the optical memory reading and/or writing device 2 over the data bus 1. The caching processor 5 may, for example, send a request for information to the optical memory reading and/or writing device 2; after receiving the request, the optical memory reading and/or writing device 2 sends the requested information to the caching processor 5. The caching processor 5 then sends the received information to the mass writing and reading device 3, which stores it.
A dedicated direct connection between the optical memory reading and/or writing device and the mass writing and reading device is therefore not needed. The cache enabling architecture exploits the ability of the two devices to exchange information with each other over the data bus.
In the general case, another device 7 is connected to the data bus 1. This other device 7 may, for example, be a microprocessor. The other device 7 sends requests for information to the mass writing and reading device 3, or to the caching processor 5 acting on behalf of the optical memory reading and/or writing device 2. The caching processor 5 handles these requests: if the requested information is already stored in the mass writing and reading device 3, it retrieves the information from there; otherwise it obtains the requested information from the optical memory reading and/or writing device 2, and finally delivers the information to the other device 7.
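The read path just described can be summarised in a short Python sketch. The block-level interface and the names optical_device, disk_cache and block_id are illustrative assumptions, not features disclosed here; the sketch only restates the hit/miss behaviour of the caching processor 5.

```python
class CachingProcessor:
    """Sketch of the caching processor 5: it serves read requests arriving
    on the data bus and uses the mass writing and reading device 3 as a
    cache for the optical memory reading and/or writing device 2."""

    def __init__(self, optical_device, disk_cache):
        self.optical = optical_device   # device 2: slow, read over the data bus
        self.disk = disk_cache          # device 3: fast magnetic storage
        self.index = {}                 # block id -> location on the disk cache

    def handle_read(self, block_id):
        # Cache hit: the block was copied to the hard disk earlier.
        if block_id in self.index:
            return self.disk.read(self.index[block_id])
        # Cache miss: request the block from the optical device over the bus,
        # keep a copy on the hard disk, then deliver it to the requester (device 7).
        data = self.optical.read(block_id)
        self.index[block_id] = self.disk.write(data)
        return data
```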
The caching processor 5 can also analyse the requests for information over a period of time according to a caching policy. Caching policies are well known to the person skilled in the art. As a result of the analysis, the caching processor 5 can determine which information the other device 7 requests more frequently than other information. As long as this information is requested often, the caching processor 5 keeps it stored in the mass writing and reading device 3. The caching processor 5 can also implement a caching policy known as read-ahead, anticipating the requests for information from the other device 7.
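One possible form of the request analysis and read-ahead mentioned above is sketched below; the frequency threshold and the prefetch_depth parameter are illustrative assumptions, and other caching policies known to the person skilled in the art may equally be used.

```python
from collections import Counter

class CachePolicy:
    """Illustrative request analysis for the caching processor 5: it counts
    how often each block is requested and names the blocks to prefetch."""

    def __init__(self, threshold=3, prefetch_depth=4):
        self.request_counts = Counter()
        self.threshold = threshold            # requests needed to pin a block
        self.prefetch_depth = prefetch_depth

    def record_request(self, block_id):
        self.request_counts[block_id] += 1

    def frequently_requested(self):
        # Blocks requested at least `threshold` times are kept on the hard disk
        # for as long as they remain frequently requested.
        return {b for b, n in self.request_counts.items() if n >= self.threshold}

    def read_ahead(self, block_id):
        # Anticipate sequential access: blocks following the one just read are
        # copied from the optical device to the disk cache before being asked for.
        return [block_id + i for i in range(1, self.prefetch_depth + 1)]
```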
In another embodiment, the caching processor 5 can also be used to receive, over the data bus 1, information sent by the other device 7 and intended to be stored in the optical memory reading and/or writing device 2. The caching processor 5 first sends the received information to the mass writing and reading device 3, which stores it; the information is then copied from the mass writing and reading device 3 to the optical memory reading and/or writing device 2. By exploiting the write performance of the mass writing and reading device 3, the apparent write performance of the optical memory reading and/or writing device 2 is substantially improved.
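The write path of this embodiment can be sketched in the same style; treating the copy to the optical device as a deferred flush step is an assumption about one possible timing, not a requirement of the architecture.

```python
class WriteBehindCache:
    """Sketch of the write path: data intended for the optical memory reading
    and/or writing device 2 are first stored on the mass writing and reading
    device 3, and copied to the optical device later."""

    def __init__(self, optical_device, disk_cache):
        self.optical = optical_device
        self.disk = disk_cache
        self.pending = []               # (block id, disk location) awaiting copy

    def handle_write(self, block_id, data):
        location = self.disk.write(data)        # fast write to the hard disk
        self.pending.append((block_id, location))
        return True                             # the writer is released at disk speed

    def flush(self):
        # Later, for example when the data bus is idle, the pending data are
        # copied from the hard disk to the optical device.
        for block_id, location in self.pending:
            self.optical.write(block_id, self.disk.read(location))
        self.pending.clear()
```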
The devices connected to the data bus 1 use a communication protocol to exchange information. In a preferred embodiment, the communication protocol between the optical memory reading and/or writing device 2 and the caching processor 5 may be an optimised version of the communication protocol between the other device 7 and the caching processor 5, in order to enhance simplicity and performance.
In general, a mass writing and reading device 3 may include its own dedicated caching processor for caching its own data. In a preferred embodiment, the functionality of the caching processor 5 includes the functionality of this dedicated caching processor, which removes the need for two physically separate caching processors and further reduces cost.
Claims (5)
1. A cache enabling device for caching information output and/or input by an optical memory reading and/or writing device (2), comprising:
at least one mass writing and reading device (3) based on a magnetic hard disk drive,
a data bus (1) to which said mass writing and reading device (3) and a caching processor (5) are connected in parallel, instructions from a device (7) other than said optical memory reading and/or writing device (2) also reaching said mass writing and reading device (3) over said data bus (1), and
a caching processor (5) which caches said information by using said mass writing and reading device (3), said caching processor (5) being directly connected to said mass writing and reading device (3),
wherein said output and/or input of said optical memory reading and/or writing device (2) and said caching processor (5) are connected by said data bus (1), so that said information is exchanged directly between said output and/or input and said caching processor (5).
2. A cache enabling device according to claim 1, characterised in that said caching processor (5) is an integral part of said mass writing and reading device (3).
3. A cache enabling device according to claim 1, characterised in that said data bus (1) is based on an IEEE 1394 bus.
4. A cache enabling device according to claim 2, characterised in that said data bus (1) is based on an IEEE 1394 bus.
5. A magnetic hard disk drive for use in a computer system, said computer system comprising at least one central processing unit, an optical memory reading and/or writing device (2) and a data bus (1), said central processing unit and said optical memory reading and/or writing device (2) being connected directly or indirectly to said data bus (1), said magnetic hard disk drive further comprising:
a caching processor (5) which receives from said data bus (1) requests to read and/or write information intended for said optical memory reading and/or writing device (2), said caching processor (5) also exchanging information between said magnetic hard disk drive and said optical memory reading and/or writing device (2) over said data bus (1) so as to cache said optical memory reading and/or writing device (2); and
a connecting circuit for connecting said magnetic hard disk drive, in parallel with said caching processor (5), to said data bus (1).
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US058,452 | 1987-06-05 | ||
US5845297P | 1997-09-08 | 1997-09-08 | |
EP97115527.0 | 1997-09-08 | ||
EP97115527A EP0901077A1 (en) | 1997-09-08 | 1997-09-08 | Cache enabling architecture |
US058452 | 1997-09-08 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1211008A (en) | 1999-03-17 |
CN1119749C (en) | 2003-08-27 |
Family
ID=26145768
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN98118871A Expired - Fee Related CN1119749C (en) | 1997-09-08 | 1998-09-04 | Cache enabling architecture |
Country Status (7)
Country | Link |
---|---|
JP (1) | JPH11167469A (en) |
KR (1) | KR100580933B1 (en) |
CN (1) | CN1119749C (en) |
HK (1) | HK1017115A1 (en) |
ID (1) | ID20659A (en) |
MY (1) | MY118599A (en) |
SG (1) | SG70114A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101403982B (en) * | 2008-11-03 | 2011-07-20 | Huawei Technologies Co., Ltd. | Task distribution method, system for multi-core processor |
-
1998
- 1998-08-19 JP JP10247716A patent/JPH11167469A/en active Pending
- 1998-08-20 SG SG1998003160A patent/SG70114A1/en unknown
- 1998-09-02 KR KR1019980036131A patent/KR100580933B1/en not_active IP Right Cessation
- 1998-09-04 CN CN98118871A patent/CN1119749C/en not_active Expired - Fee Related
- 1998-09-07 MY MYPI98004072A patent/MY118599A/en unknown
- 1998-09-08 ID IDP981206A patent/ID20659A/en unknown
-
1999
- 1999-05-17 HK HK99102170A patent/HK1017115A1/en not_active IP Right Cessation
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101403982B (en) * | 2008-11-03 | 2011-07-20 | Huawei Technologies Co., Ltd. | Task distribution method, system for multi-core processor |
US8763002B2 (en) | 2008-11-03 | 2014-06-24 | Huawei Technologies Co., Ltd. | Method, system, and apparatus for task allocation of multi-core processor |
Also Published As
Publication number | Publication date |
---|---|
MY118599A (en) | 2004-12-31 |
SG70114A1 (en) | 2000-01-25 |
KR100580933B1 (en) | 2006-10-24 |
CN1211008A (en) | 1999-03-17 |
JPH11167469A (en) | 1999-06-22 |
KR19990029463A (en) | 1999-04-26 |
ID20659A (en) | 1999-02-11 |
HK1017115A1 (en) | 1999-11-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20030188045A1 (en) | System and method for distributing storage controller tasks | |
CN102541468B (en) | Dirty data write-back system in virtual environment | |
JPH02103649A (en) | Control equipment and information processing systems | |
CN111309266A (en) | Distributed storage metadata system log optimization system and method based on ceph | |
CN109857545A (en) | A kind of data transmission method and device | |
US7725654B2 (en) | Affecting a caching algorithm used by a cache of storage system | |
JPH06175786A (en) | Disk array device | |
RU2183850C2 (en) | Method of performance of reading operation in multiprocessor computer system | |
CN100351767C (en) | Method, system for an adaptor to read and write to system memory | |
CN1119749C (en) | Cache enabling architecture | |
CN1299098A (en) | Equity elevator scheduling calculating method used for direct access storage device | |
JPH08212178A (en) | Parallel computer | |
CN1258714C (en) | Network optical disc database | |
US6434592B1 (en) | Method for accessing a network using programmed I/O in a paged, multi-tasking computer | |
US6175895B1 (en) | Cache enabling architecture | |
CN119003410B (en) | A method, device, equipment and storage medium for optimizing communication between storage controllers | |
EP0901078A1 (en) | Cache enabling architecture | |
CN114816260B (en) | A parallel file storage system and method | |
US20020087752A1 (en) | Method of real time statistical collection for I/O controllers | |
JP2994917B2 (en) | Storage system | |
JPH02310649A (en) | Received frame transfer method and communication control device | |
CN118427135A (en) | PCIE DMA data transmission method and system based on FPGA | |
JPH0351912A (en) | Spool area return system for each data set | |
JP2000330949A (en) | Method and device for paging control over virtual storage system | |
KR100825724B1 (en) | Object-based storage system using PMEM useful for high speed transmission with DMA and method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20030827 Termination date: 20160904 |