AU605598B2 - Fault-tolerant, error-correcting storage system - Google Patents
Fault-tolerant, error-correcting storage system
- Publication number
- AU605598B2 (application AU 18033/88 A)
- Authority
- AU
- Australia
- Prior art keywords
- disk drive
- data
- disk
- error
- read
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1076—Parity data used in redundant arrays of independent storages, e.g. in RAID systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1008—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/18—Error detection or correction; Testing, e.g. of drop-outs
- G11B20/1833—Error detection or correction; Testing, e.g. of drop-outs by adding special lists or symbols to the coded information
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Signal Processing For Digital Recording And Reproducing (AREA)
Description
AU-AI-18033/88
INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT)
(51) International Patent Classification 4: G06F 11/10, 13/12; G11B 20/18
(11) International Publication Number: WO 88/09968 A1
(43) International Publication Date: 15 December 1988 (15.12.88)
(21) International Application Number: PCT/US88/01826
(22) International Filing Date: 31 May 1988 (31.05.88)
(31) Priority Application Number: 057,347
(32) Priority Date: 2 June 1987 (02.06.87)
(33) Priority Country: US
(71) Applicant: CAB-TEK, INC., 11 Riverside Street, Nashua, NH 03062 (US)
(72) Inventor: GOODLANDER, Theodore J., 4 Victoria Drive, Nashua, NH 03063 (US)
(74) Agents: SOLOWAY, Norman P. et al., Hayes, Davis & Soloway, 175 Canal Street, Manchester, NH 03101 (US)
(81) Designated States: AU, BR, JP, KR, SU
Published with international search report.
(54) Title: FAULT-TOLERANT, ERROR-CORRECTING STORAGE SYSTEM

(57) Abstract: A storage device system (40) for computers, capable of dynamically and transparently reconstructing lost data, includes first individual storage devices (16) for storing digital information and a second individual storage device for storing error/recovery code bits. There is logic for generating and storing error/recovery code bits in the second individual storage device according to a parity checking algorithm, and logic for using the error/recovery code bits to reconstruct a changed bit in error. An interface (60, 56, 46, 18, 14) receives read and write requests from a user CPU (10, 54). Storage device controllers are connected between the interface and respective storage devices (16) for operating them concurrently. The CPU (44) includes logic (48) for immediately acknowledging a write to an associated storage device (16) upon the data being placed in a buffer memory, and for checking data in the buffer memory (64) and indicating it as having been read from an associated storage device (16) without an actual read thereof when a read request therefor is received from a user (10, 54), whereby the buffer memory (64) acts as a cache memory.

FAULT-TOLERANT, ERROR-CORRECTING STORAGE SYSTEM

The present invention relates to mass storage devices for use with computers, such as disk drives and the like. The present invention has particular utility with disk drives used for mass storage with computers and will be described in connection with such utility.
However, the benefits thereof can also be used to advantage with other mass storage devices (such as optical disks, high density RAM arrays, bubble memories, and the like).
In the present state of computer technology, disk drives of the so-called "Winchester" variety, and the like, are the primary devices employed for mass storage of programs and data. Because of their low cost, they will probably remain in wide use in the future even as more exotic devices become commercially available.
Prior art disk drives generally operate in the manner shown in Figures 1-4. As shown in Figure 1, the user CPU 10 is typically connected to a BUS 12 which, in turn, is connected to, among other things, a non-intelligent system disk controller 14 for inputting to and outputting from an equally non-intelligent disk drive generally indicated as 16. The controller 14 and disk drive 16 are said to be non-intelligent in that, generally, they only do what they are asked by the user CPU 10. The disk drive 16 is connected to the controller 14 by I/O cable 18. Within the disk drive 16, there is a mechanical/electronic drive assembly 20 which positions the heads of the disk drive, does analog-to-digital conversion, digital-to-analog conversion, etc., as necessary to read and write to the storage disk 22 itself. This process is shown in more detail in Figures 2 and 3.
The storage disk 22 comprises one or more physical disks 24 which rotate about a central hub 26 as indicated by the arrow 28. Typically, for addressing purposes, the disks 24 are divided into concentric tracks 30 which, in turn, are divided into sectors 32.
Any number of vertically aligned tracks 30 form a "cylinder", which is the maximum amount of data that can be read without repositioning the heads 34. The disks 24 have a sensible peripheral indicator (not shown) by which the addressing logic contained within the drive assembly 20 can determine the rotational position of the disks 24. Read/write heads 34 are positioned on the ends of arms 36 connected to head positioning mechanisms 38 by which the heads 34 can be moved in and out, as indicated by the arrows 39, under the control of the drive assembly 20. To read from or write to a specific location on the disks 24, the correct head 34 is electronically selected and the arms 36 moved in unison to position all the heads 34 radially at the proper cylinder 30. The rotational position of the disks 24 is then monitored until the desired sector 32 for the read or write is under the selected head 34. At that time, the read/write takes place at a speed determined by the rotational speed of the disks 24.
Such disk drives have numerous problems that have been tolerated to date for lack of any improvement being available. For one example, head and magnetic surfacing materials technology has developed such that higher packing densities on the disks 24 are possible. That has permitted more sectors per cylinder and more cylinders per disk. This has provided higher capacities and higher speeds (relatively speaking). In this latter regard, while the electronics and other areas of disk drive technology have grown so as to permit vastly higher transfer rates, the physical rotational aspects have remained fixed so as to create a bottleneck to any meaningful increase in transfer rates. The earliest computers employed rotating drum memories as the main memory of the computer. The outer surface of the drum was coated with magnetic material and the read/write heads were permanently attached adjacent the magnetic surface. Each head represented one track of the drum, with each track being divided into sectors. Addressing was by selection of a head (i.e. track) and rotational position. Those early drum memories rotated at 3,600 rpm. Today's "high technology" disk drives still rotate at 3,600 rpm because of physical limitations which are not important to the discussion herein. Since the speed of rotation determines how fast the data can be transferred into or out of the read/write heads 34, it can be appreciated that if the rotational speed cannot increase above 3,600 rpm and bit densities are substantially maximized at their present level, there is not much potential for increasing disk drive transfer rates.

Another limitation relative to prior art disk drives such as represented by the simplified drawings of Figures 1-3 is the "seek time" associated with physically moving the arms 36 and heads 34 in and out between selected cylinders. Particularly where movements are between radial extremes (i.e. between locations close to the rotating center and the periphery of the disk), the seek time for movement can be substantial; and, such time is lost time when the disks 24 are rotating beneath the heads 34 but no reading or writing can take place. In the presence of repeated read and write requests between radial extremes, there is also the problem of "thrashing"; that is, the arms and heads must be accelerated in one radial direction and then braked, only to be accelerated back in the opposite direction and then braked once again. Where the radial distances are great, the repeated starting and stopping creates high detrimental forces on the components accomplishing the moves. This, of course, can lead to shortened life and/or failure of the drive and its components. To the system disk controller 14, BUS 12, and CPU 10, "seek time" appears as a wait state where no other useful work can be performed until the disk request is completed. Seek time averages the majority of the entire disk request cycle time, directly degrading the performance of CPU 10. The greater the number of I/O disk requests, the greater the degradation of system performance until an I/O-bound or "disk bound" condition is reached, at which point no greater system performance can be achieved.
Yet another detrimental aspect of prior art disk drive technology, which can best be appreciated with respect to Figure 4, is reliability, with a corollary consideration of reconstructability; that is, how do we protect against lost data, and can we reconstruct lost data? With respect to the prior art, the answers are "poorly" and "no". Figure 4 represents four consecutive eight-bit "bytes" in storage on a typical prior art disk 24. The bytes were written and are read sequentially in the form of sectors (i.e. blocks of data commonly 256, 512, 1024 or 2048 bytes long) from the top to the bottom in the direction of the arrow as the figure is viewed.

Thus, the first byte is the binary number 10101010 while the last byte depicted is 11111111. To "protect" against error from a dropped or added bit during read or write, the prior art developed and has continued to employ a so-called "parity" bit (designated as the parity bit position in the figure) with each data entity (i.e. byte, nibble, etc.) in storage. Parity schemes can be either "even" or "odd". The scheme depicted is an even parity system, where the sum of the bits comprising the byte plus the parity bit must always be even in number. In the first byte (10101010) the number of "1"s is four, i.e. an even number. Thus, the parity bit is "0". When the first byte is read, the hardware sums the bits (including the parity bit) and, if the sum is even, there is no error. If a "1" bit is lost or added, the sum will be odd and a "parity error" condition will exist. Since the bit position of the bit in error is not known, however, there is insufficient information to accomplish any corrective action.
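For illustration only (this sketch is not part of the patent), the even-parity scheme described above can be expressed in a few lines of Python; note that a single flipped bit is detected but cannot be located:

```python
def even_parity_bit(byte: int) -> int:
    """Parity bit that makes the total count of 1s (data + parity) even."""
    return bin(byte & 0xFF).count("1") % 2

def parity_ok(byte: int, parity: int) -> bool:
    """True when the stored parity still matches the data read back."""
    return even_parity_bit(byte) == parity

data = 0b10101010            # first byte of Figure 4; four 1s, so parity = 0
p = even_parity_bit(data)    # -> 0

corrupted = data & ~0b00000010       # a "1" bit is dropped during a read
assert parity_ok(data, p)
assert not parity_ok(corrupted, p)   # error detected...
# ...but nothing identifies WHICH bit flipped, so no correction is possible.
```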
Additionally, as data is transferred, there is a cyclic redundancy code (CRC) associated with each serially transferred sector of data. The CRC for each sector of data is checked, and a sector integrity error condition exists if the CRC test fails. With the above-described parity error within the sector, the CRC test of sector integrity will fail. Typically in such instances, the only "corrective" action taken is to repeat the read or write a pre-established number of times (a value set in the system) to see if the CRC error was a transient. If the CRC error persists, the only action possible is to print an error message to the human operator asking for instructions as to how to proceed, such as (DISK READ ERROR, RETRY-CONTINUE-ABORT?).

Where it is desired and/or necessary to be able to reconstruct lost data, the prior art has relied upon costly and time-consuming approaches like redundant disks and "backing up", i.e. copying the data and programs on the disk to another disk, tape, or the like. In a redundant disk system, everything is duplicated dynamically with the intention that if one disk has an error, the data will still be available on the "duplicate" disk. Disregarding the cost factor, that philosophy is all well and good until a transient voltage spike (a common source of disk errors) causes the same erroneous data to be written on both disks simultaneously. Backup systems have been used from the very beginning of computer usage. Early systems did their backing up by punching out the data in memory on paper tape on a Teletype machine (a very time-consuming project). More contemporary backup systems typically employ some sort of magnetic tape or disk technology for the storage of the data being backed up. Even so, the process is still costly and time consuming, and loses any data written between the time of the last backup and the time of the failure.
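By way of illustration only — the patent does not specify a CRC polynomial, so the common CRC-16/CCITT is assumed here, and the function names and retry count are hypothetical — the per-sector check-and-retry behavior described above might look like:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT (polynomial 0x1021), assumed per-sector code."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def read_sector_with_retries(read_fn, sector: int, retries: int = 3) -> bytes:
    """Retry policy from the text: re-read a fixed number of times, then
    give up and report the error to the operator (read_fn is assumed to
    return the sector payload and its stored CRC)."""
    for _ in range(retries):
        payload, stored_crc = read_fn(sector)
        if crc16_ccitt(payload) == stored_crc:
            return payload                 # CRC agrees: sector is intact
    raise IOError(f"DISK READ ERROR on sector {sector}: RETRY-CONTINUE-ABORT?")
```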
With respect to the prior art of controllers and storage devices, it should also be noted that all controllers are hardwired with respect to an associated storage device. If the size of the storage device is fixed, the controller associated with it has that size fixed in its internal logic. If the size of the storage device can vary within fixed limits and size increments, at best the controller is able to query the storage device as to which model it is and select from pre-established sizes in its internal logic for the various models. There is no ability to automatically adapt to another size or kind of storage device other than that for which the controller was designed and constructed. If the user wants to get a new kind and/or size of device, a new controller must be obtained as well. Likewise, on the user interface side, if a new interface convention is adopted, the controller must be replaced by one having the proper interface. The same thing takes place on the storage device side: a new interface convention means a totally new controller.
With respect to the seek time problem, there has been only minor recognition of seek time as a degrader of system performance, and even less of an attempt to provide some sort of correction to the problem. This is because the attempts have been made within the prior art controller/storage device manner of construction and operation as described above. Thus, the only commercially viable attempt at such seek time reduction has been the interposing of "round robin"-based optimization hardware between the user CPU and a plurality of controllers connected to individual disk drives. Upon issuing read and write requests to the various controllers, the optimizing hardware thereafter sequentially queries the controllers to see if they are done yet. If not, the hardware moves on to the next and the next until it finds one that is complete and handles that request. This is better than handling the requests on a first in, first out (FIFO) basis as in the balance of the prior art, but far from optimum. Within the confines of the mode of operation of prior art controllers and storage devices, however, it is probably the best that can be hoped for.
Within the past few years, solely in recognition of the transfer rate bottleneck of serial disk drives (i.e. actually discounting the drawbacks to performance of seek time), some initial work has been done with parallel transfer drives (PTDs). This work, contrary to the findings of the applicant herein, assumes that seek time is irrelevant to the data transfer rate problem. The present state of PTD development is reported in an article entitled "The bottleneck in many applications created by serial channel disk drives is overcome with PTDs, but the price/Mbyte is high and the technology is still being refined" by Michael Gamerl of Fujitsu America, Inc., which appears beginning at page 41 of the February 1987 issue of HARDCOPY magazine. Generally, according to that article, the approach employed with PTDs as developed to date is the employing of multiple read/write heads moved in unison on arms, with the data written in parallel to multiple magnetic disks which are mechanically or electronically linked to spin actually or virtually in unison. As with so-called "dumb terminals", which include little or no decision-making capability, prior art PTDs could be classified as "dumb disks" in that the only logic provided generally is in the form of a FIFO buffer with associated logic (i.e. "de-skewing circuitry") employed in the path for the transfer of the data to compensate for slight differences in parts alignment and, therefore, latency of data transfer bit positions in the time domain.

While some PTD developers advocate providing "intelligence", it appears that what they consider intelligence is only part of the user interface and actually degrades performance potential. As stated in the article, "To support each PTD arm separately, drive hardware is duplicated for each. Otherwise, the structure of a PTD is similar to high performance serial drives." No mention is made of providing for self-checking and correction of transferred data, or the like. No mention is made of providing for interface independence on either the user or storage device side. Optimization of seek time is not only not mentioned, but actually discounted.
Finally, the concept of "fault tolerance" and the inability of prior art storage device systems to achieve that goal should be addressed. A recent article on fault tolerant computer systems described a fault tolerant system as "a system in which no single failure will be functionally apparent to the user. In other words, fault tolerance means that a system will continue to process even when a component has failed." There are five characteristics required for fault tolerance: Redundancy, Detection, Isolation, Reconfiguration, and Repair. First, every element of the system must have a backup, so that if a component fails, there is another to assume its responsibilities. Second, a fault must be detectable by the system so that the fault can be identified and then repaired. Third, the failed component must be isolated from the rest of the system so the failure of one component will not adversely affect any other component. Fourth, the system must be able to reconfigure itself to eliminate effects from the failed component and to continue operation despite the failure. Finally, when repaired, the failed component must be brought back into service without causing any interruption in processing. With regard to present storage systems, the concept of fault tolerance simply does not exist. None of the five above-enumerated characteristics are met. As described above, in a typical prior art disk storage system, a CRC error which is not a transient (and therefore correctable by a reperformance of the operation) results in a very apparent inability of the system to continue.
Wherefore, it is the principal object of the preferred embodiment of the present invention to provide a new approach to controllers and associated storage devices such as disk drives, and the like, which provides the benefits of parallel operation employing a plurality of individual devices operating in an intelligent environment making optimum use of their capabilities through the reduction of seek time, and the like.

It is another object of the preferred embodiment of the present invention to provide high capacity without the need to employ more exotic and high priced storage technologies.

It is yet another object of the preferred embodiment of the present invention to provide fault tolerance, high reliability, and the ability to reconstruct lost data simply and easily.

It is still another object of the preferred embodiment of the present invention to provide a new approach to storage system technology which dramatically reduces, and in some cases eliminates, the necessity for backing up the mass data storage system.

It is yet a further object of the preferred embodiment of the present invention to permit vast increases in the transfer rates for data to and from a storage device beyond the limits normally imposed by speeds of rotation and seek times.

It is another object of the preferred embodiment of the present invention to provide a heretofore non-existent device to be interposed between conventional computer storage device controllers and conventional storage devices which provides interface transparency on both sides and a communications and operation intelligence between the conventional devices.
According to the present invention there is provided a high-capacity, error-correcting virtual disk drive storage device for a computer comprising: a single interfacing logic portion for interfacing with a user CPU, the interfacing logic portion including disk drive controller interface means for interfacing with a standard disk drive controller through which the user CPU is connected to the virtual disk drive storage device; a plurality of standard disk drives adapted to interface with a standard disk drive controller at an input interface thereof; a plurality of disk drive controllers connected between the interfacing logic portion and respective ones of the plurality of disk drives, each of the disk drive controllers including controller interface means for presenting a standard system disk drive controller interface to the input interface of its respective disk drive, a data buffer for transferring data to and from its respective disk drive, and parity generator means for generating a parity bit for standard length groups of bits being transferred to its respective disk drive; virtual disk drive parity generation means for generating parity bits associated as part of data groupings of bits being transferred to the disk drives; and, reconstruct logic means for sensing a parity error condition from one of the disk drives in combination with a parity error in one of the data groupings of bits as indicated by a parity bit generated by the virtual disk drive parity generation means and for using the knowledge of which of the disk drives' data is in error in combination with the parity bit generated by the virtual disk drive parity generation means to logically reconstruct an erroneous bit creating the joint parity error condition, wherein for assuring high-speed operation of the virtual disk drive storage device: the data buffers in combination comprise FIFO buffer means for holding a plurality of sequential data groupings; and, the disk drive controllers each include data transfer logic for transferring respective ones of a sequence of the data groupings from their respective disk drives into said FIFO data buffer means asynchronously and for informing the standard disk drive controller through which the user CPU is connected to the virtual disk drive storage device that a requested read operation is complete only when there are sufficient ones of the sequence of the data groupings being read from all the disk drives to assure that the user CPU cannot remove the sequence of the data groupings being read from all the disk drives from said FIFO data buffer means faster than remaining ones of the sequence of the data groupings being read from all the disk drives are read asynchronously into said FIFO data buffer means.
Preferred embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:

Fig. 1 is a simplified drawing of a prior art disk drive system and its manner of interfacing with a user CPU.

Fig. 2 is a simplified plan view drawing of the disk and head positioning system employed in the prior art disk drive system of Fig. 1.

Fig. 3 is a simplified side view drawing of the disk and head positioning system employed in the prior art disk drive system of Fig. 1.

Fig. 4 is a drawing showing the prior art method of parity checking employed in apparatus such as that of Figs. 1-3.

Fig. 5 is a simplified block diagram of the intelligent disk drive system of the present invention.

Fig. 6 is a simplified drawing showing the basic structure of the virtual disk controller employed in the present invention.
Fig. 7 is a simplified drawing showing data storage across a plurality of disk drive units according to one embodiment of the present invention and corresponding to the data storage in Fig. 4 for comparison purposes.
Fig. 8 is a simplified drawing showing how the individual disk drives transfer the data of Fig. 7 into a buffer for subsequent access by the user as contiguous bits comprising sequential bytes.
Fig. 9 is a simplified drawing showing how the individual disk drives asynchronously transfer the data of Fig. 10 into separate buffers for subsequent access by the user.
Fig. 10 is a simplified drawing showing data storage across a plurality of disk drive units according to a second embodiment of the present invention and again corresponding to the data stored in Fig. 4 for comparison purposes.
Fig. 11 is a more detailed block diagram of a virtual disk drive system according to the present invention.
Fig. 12 is a simplified drawing showing the prior art effect of a sequence of commands requiring movement to different cylinders on a single disk.
Fig. 13 is a simplified drawing showing the present invention as employed in an embodiment where the disks comprising the virtual disk are cylinder oriented and the beneficial effect to users realized thereby.
Fig. 14 is a block diagram showing how the battery backed up asynchronous queue memory of the present invention, through which data is transferred to the actual disk drives, acts as a cache memory to improve speed of operation in many applications.
The present invention is based on replacing the single prior art disk drive with a virtual disk drive comprised of a plurality of individual and separate conventional prior art disk drives for the data and one additional disk dedicated to containing the error recovery code (ERC) bits associated with the data, wherein the plurality of disk drives operate concurrently and intelligently in parallel. As will be developed in detail hereinafter, such a hitherto-untried approach provides the basis for all the above-enumerated objects of the present invention.
The capacity of the virtual disk is eight times that of its individual disks, thus achieving the object of increased capacity. Even employing the same rotational speeds within the individual disk drives at their fixed transfer rates, the virtual disk, operating in parallel instead of serially, should be able to achieve transfer rates at least seven or eight times that realizable by the individual, serial, and rotationally limited disk drives. With respect to fault tolerance, reliability, and reconstructability, as will be appreciated from the description hereinafter, the intelligent virtual disk approach of the present invention maximizes reliability and provides reconstructability such that frequent backing-up truly can be eliminated as a redundant non-necessity. Moreover, it meets all five of the required characteristics for basic fault tolerance.
The intelligent virtual disk drive of the present invention is generally indicated as 40 in Fig. 5. To the user CPU 10 of Fig. 1, the virtual disk drive 40 of the present invention is "transparent"; that is, it appears to the CPU 10 as any other disk drive, such as prior art disk drive 16. Connection is a simple matter. The I/O cable 18 is simply unplugged from the prior art disk drive 16 of Fig. 1 and plugged into the virtual disk drive 40 of the present invention. That is all that is required. This is an important consideration since, to be truly useful, any device must be "plug compatible" with existing hardware and software if it is to be commercially acceptable. Thus, with the present invention, the CPU 10 still sends its read/write requests to its associated disk controller 14 for interfacing with "disk drive" 40 over I/O cable 18 in its normal and expected manner. Within the virtual disk drive 40, however, I/O cable 18 actually interfaces with a CPU-based intelligent interface and control portion 43 connected to the virtual disk controller 42, which is the heart of the present invention. Virtual disk controller 42 is, in turn, connected to a plurality of individual conventional prior art disk drives 16, 16' such as that of Figure 1. The outputs from and inputs to the virtual disk controller 42 to and from the disk drives 16, 16' must be according to whatever format is presently being employed, which they are, as will be described shortly. Since the disk drives 16 are conventional in nature and the interfaces thereto are also conventional, the disk drives 16, 16' could easily be replaced by bubble memories, or the like, to achieve the non-rotationally based benefits of the present invention such as increased speed, capacity, and error protection and reconstructability in conjunction therewith. Moreover, as will be described shortly, the interface definition is separately replaceable so that reconfiguration for changes in the "conventional" interface on either side can be made by merely changing that interface definition. This internal reconfigurable interface means that new storage devices can be employed at any time with the user interface remaining constant and unchanged. Thus, the "disk drive" being addressed by the user could, in fact, be a bubble memory or some other storage device. The present invention, therefore, embodies a previously non-existent device interposed between the user and the storage device, providing transparency in both directions.
Turning briefly to Figure 6, the physical structure of the present invention as built by the applicant herein for maximum flexibility and reconfigurability is shown in simplified form. In tested embodiments, the CPU 44 employed to provide the "intelligence" which distinguishes the present invention over the prior art is connected to computer BUS 46. A plurality of cards 48 plugged into the computer BUS 46 contain the logic for interfacing with the CPU 44 and the disk drives 16.
Card 48' controls and detects failure of error/recovery disk 16'. These together contain logic and data to correct any failure from drives 16, 16'. The system disk control interface is contained on a separate card 62 so that if conventions relative to the devices employed change, the virtual disk control and interface can be quickly and easily adapted to the new convention. The cards 48, 48' are also separately interconnected by a private BUS 50 so as to be independent of the computer BUS 46.
According to the present invention, data (where the term "data" includes computer programs which, too, are nothing more than binary numbers to the disk drive) can be allocated to the parallel disk drives 16, 16' comprising the virtual disk drive 40 in several ways.
As with most aspects of computer technology, there are tradeoffs in the present invention which occur relative to time, space and cost. Each manner of allocation is a separate embodiment of the present invention and provides certain advantages and disadvantages in this regard with respect to the others. Certain applications will best be served by one embodiment while others will operate best with another. Thus, the choice is which will best serve the end application. Several typical embodiments possible and the characteristics of each will now be described. Those skilled in the art will recognize that other possible configurations for the data beyond those to be described are possible within the scope and spirit of the present invention and, therefore, the specific examples to be described are not intended to be limiting in their effect.
In an actual prior art disk, it is typical that each of the cylinders is divided into a plurality of tracks of a fixed number of sectors. Consider the following data on two commercial disks of 75 and 300 Mbyte capacity designations, respectively:

                              75-Mbyte disk   300-Mbyte disk
    Tracks per cylinder                   5               19
    Cylinders                           823              823
    Sector size                         512              512
    Sectors per track                    36               36
    Total storage (Mbytes)            75.85           288.22
    Seek average (msec)                  30               30
    Seek max (msec)                      55               55
    Seek min (msec)                       6                6
    Full rotation time (msec)         16.66            16.66
    Data transfer rate                 1.2M             1.2M

As can be seen, the two disks are virtually identical except for the number of tracks per cylinder. This, however, has important ramifications relative to performance. While it would appear that both disks should operate the same, with the only difference being in the total capacity of the larger disk, in actuality the larger disk performs better than the smaller. The reason is simple: to change cylinders, there must be movement of the arms and read/write heads as previously described. That takes lost time, and lost time reduces performance. In the smaller disk, 45 consecutive sectors (5 tracks x 9 sectors/track) can be accessed without access-arm movement. By contrast, in the larger disk it is 171 consecutive sectors. Thus, the number of consecutive sectors addressable without access-arm movement (i.e. associated seek time) is a significant measure of performance.
The first embodiment of the present invention relative to data storage is depicted in Figures 7 and 8. According to this approach, the eight individual bits comprising each byte are spread across eight individual disks. For ease of comparison and understanding, the data depicted in Figures 7 and 8 corresponds to the data used for the example of Figure 4. As shown in Figure 7, for example, there are nine disks for the eight data bits and a single error/recovery bit. As with the example of Figure 4, the data of Figures 7 and 8 is written and read from the top down as the figures are viewed. Thus, it can be seen that the error/recovery bit (in this case equal to the parity bit for the same data) and data of the first byte (010101010) in Figure 4 is the same in this embodiment of the present invention, only spread out across a particular identifiable bit position in the nine disk drives 16, 16' of the virtual disk drive 40 as 0-1-0-1-0-1-0-1-0. As represented by the drawing of Figure 8, in this embodiment, the data from each of the disk drives 16, 16' is asynchronously written into and read out of a buffer 52 as individual bit streams into and out of the bit positions of the sequential bytes.
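As an illustrative sketch (not the patent's hardware, and the bit ordering is an assumption), the bit-striping of Figures 7 and 8 amounts to the following:

```python
def stripe_byte(byte: int) -> list[int]:
    """Split one byte into 8 per-drive bits plus an E/R (even parity) bit."""
    bits = [(byte >> i) & 1 for i in range(8)]   # one bit per data disk
    er = 0
    for b in bits:
        er ^= b                                  # XOR == even parity bit
    return bits + [er]                           # nine drives' worth of bits

def unstripe(bits9: list[int]) -> int:
    """Reassemble the byte from the eight data drives (ignoring Disk E/R)."""
    return sum(b << i for i, b in enumerate(bits9[:8]))

assert unstripe(stripe_byte(0b10101010)) == 0b10101010
assert stripe_byte(0b10101010)[8] == 0           # parity of four 1s is 0
```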
The user 54 interfaces with the buffer 52 and is completely unaware of the asynchronous nature of the data transfer which takes place with respect to the buffer 52 and the disk drives 16, 16' comprising the virtual disk drive 40. In this embodiment, maximum speed is sacrificed for simplicity of control logic and lower cost. This embodiment also provides the ability to reconstruct erroneous data "on-the-fly"; that is, while data is being transferred, a lost or added bit within a byte can be found and corrected. Not only that, such dynamic failure reconstruction can be continuous in nature from byte to consecutive byte. What that means is that one disk drive 16, 16' can completely fail (or be removed for maintenance) and the virtual disk drive system of the present invention will continue in operation uninterrupted with virtually no loss of performance. Once the removed disk drive 16 is re-established in the system, the data contained on it will be automatically restored dynamically during use according to the same process. The way this works, employing prior art disk drives which could not individually accomplish the same thing, can be understood by comparing Figures 3 and 4 to Figure 5. In a prior art disk drive 16 containing the data of Figure 4, if the first byte (010101010) drops a bit and now contains, for example, 010101000, the number of "1" bits (three) is odd, and a parity error within the first byte will cause a CRC error in the sector integrity. The logic, however, does not know which bit position is involved and cannot take corrective action.

Consider the same failure in the virtual disk drive 40 as depicted in Figure 7. The data within "Disk 2" representing the bit stream of bit 2 is still maintained in eight-bit bytes with an associated parity bit, since it is a "standard" prior art disk drive. Thus, the reconstruction logic of the present invention is informed of two facts. First, that Disk 2 had a CRC error in reading the sector which contained the bit 2 bit for the first byte, i.e. that it is the Disk 2 bit position (bit 2) which is in error. Second, that the error/recovery bit test across the first byte (010101010) is in error (since 010101000 was read). Since bit 2 of the first byte is reading as a "0" and is in error, in a binary system it can only correctly be a "1". By making that correction, the erroneous first byte is dynamically corrected from 010101000 to 010101010. In actual practice, this is accomplished by simply logically XORing the contents of the bit positions and the corresponding error/recovery bit together in a manner well known in the art. Note that if it is the error/recovery bit drive, i.e. Disk E/R, which fails, the correction takes place in the same manner.
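A minimal sketch of this across-the-drives correction (illustrative only; the patent implements it in hardware logic): once the CRC identifies which drive is bad, the failed drive's bit stream is simply the XOR of the surviving bit streams with the error/recovery bits.

```python
def reconstruct_failed_drive(surviving: list[int], er_bits: int) -> int:
    """Rebuild the bit stream of the one known-bad drive.

    surviving : the same bit position from every good data drive
    er_bits   : the error/recovery (parity) drive's bits for that position

    Because E/R = XOR of all data drives, XORing the survivors with E/R
    yields exactly the failed drive's contents.
    """
    acc = er_bits
    for bits in surviving:
        acc ^= bits
    return acc

# Figure 7's first byte spread across Disks 1-8, with Disk E/R = XOR of all.
data_disks = [0, 1, 0, 1, 0, 1, 0, 1]           # bit 1 ... bit 8 of the byte
er = 0
for b in data_disks:
    er ^= b                                      # E/R bit = 0 here

bad = 2                                          # CRC says Disk 2 failed
survivors = [b for i, b in enumerate(data_disks, start=1) if i != bad]
assert reconstruct_failed_drive(survivors, er) == data_disks[bad - 1]
```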
The second embodiment of the present invention to be described is based on the principle of performance maximization, i.e. reduction of seek time, etc. What is sacrificed is simplicity of control logic. The trade-off is not hard to justify, however. Complexity of the logic within the virtual disk controller 42 of the present invention is a small consideration which is of no importance to the end user whatsoever and of manageable impact on the overall cost of producing a commercial system. On the other hand, day-to-day performance is a continuous thing of direct relevance to the end user.
The second embodiment and its manner of operation are shown in Figures 9 and 10. As shown in Figure 10, the data in this embodiment is distributed on the disks on a designated sector-per-disk-drive basis. For purposes of example, the first bytes of the contents of the three sectors 2, 6 and 8 are shown in Figure 10 as being contained on Disks 2, 6 and 8. To prevent confusion, the data contents of the remaining disks are undesignated. As the figure is viewed, the first byte of sector 2 is 11111111, the first byte of sector 6 is 00000010, and the first byte of sector 8 is 10101010.
As shown in Figure 9, the advantage of this embodiment is that the nine disk drives 16, 16', corresponding to sectors 1-8 plus error/recovery across the drives 16, 16', can transfer data into and out of separate buffers 52 asynchronously and simultaneously.
As can be appreciated, the contents and manipulation of the error/recovery disk (disk E/R) 16' are somewhat more complex in this embodiment, particularly when an error is discovered. Error/recovery across the disk drives 16, 16' comprising the virtual disk drive 40 in this embodiment is on a sector-by-sector basis. The only time that the error/recovery disk 16' may need to be updated is when a write to one of the disks takes place. To this end, as will be described in greater detail shortly, the logic of the present invention accomplishing the transfer of data to an individual disk drive 16 checks the status of the bits, looking for bits that have changed (i.e. from "0" to "1" and vice versa). Only when a bit position has changed does the corresponding bit on the error/recovery disk 16' have to be changed. That change is made as a separate write to the error/recovery disk 16' which occurs before any other changes to the data disks 16 can take place which would affect the error/recovery disk 16' in that area.
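For illustration (an assumption-laden sketch, not the patent's circuitry): the update rule described above is the classic read-modify-write parity delta, in which the bits that changed in the data sector are exactly the bits to toggle in the corresponding error/recovery sector.

```python
def update_parity_on_write(old_data: bytes, new_data: bytes,
                           old_parity: bytes) -> bytes:
    """Compute the new E/R sector for a data-sector overwrite.

    XORing old and new data yields a 1 exactly where a bit changed;
    toggling those positions in the parity sector preserves the
    across-the-drives XOR invariant without re-reading the other drives.
    """
    changed = bytes(o ^ n for o, n in zip(old_data, new_data))
    return bytes(p ^ c for p, c in zip(old_parity, changed))

# One-byte illustration: a sector byte changes 0b10101010 -> 0b10111010.
old, new, par = b"\xAA", b"\xBA", b"\x0F"
assert update_parity_on_write(old, new, par) == b"\x1F"   # one bit toggled
```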
To reconstruct lost data in this embodiment, of course, the data for the sectors and the corresponding error/recovery data from Disk E/R which include the "lost" data must be brought into a common buffer area so that the across-the-drives error/recovery correction procedure described above with respect to the previous embodiment can be accomplished. Obviously, this is a more complex and time-consuming procedure; but, statistically, it will occur very rarely. In the event of a complete disk drive removal or failure, in this embodiment there would be a significant reduction in performance while continual reconstruction and correction in the above-described manner was taking place; however, as with the previous embodiment, there would be no actual "crash" of the virtual disk drive, and computation would continue, albeit in a somewhat degraded performance mode, which is certainly better than the prior art, where a crash results in no performance whatsoever.
A third possible embodiment for the assignment and storage of data is shown in simplified form with reference to Figures 12 and 13. In this case, it is assumed that the application is heavily cylinder oriented. To maximize performance and reduce seek time, the disks 16 comprising the virtual disk 40 are assigned and the data allocated on a "cylinder" basis. To see the effect of this data and address structuring within the overall environment of the present invention, reference should first be made to Figure 12, wherein a prior art single disk is shown along with the effect of a simple typical sequence of commands by a plurality of users. As can be seen, the various cylinders (C1, C2 ... Cn) are located radially on the disk 16. For simplicity, only one disk and head assembly are shown for each of the disks 16, whereas it is understood that, in actuality, the disks 16 in Figures 12 and 13 contain multiple disks as in Figure 3. As each user does a read or write to his various cylinder (remembering that the users are cylinder oriented), the heads must move in and out to be positioned vertically at the desired cylinder. Accordingly, each user must wait until the preceding command has been accomplished, i.e. first in, first out. In the embodiment of the present invention of Figure 13, each disk 16 is identified with an individual cylinder. Thus, the users are able to concurrently access their individual "cylinders".
Moreover, as depicted in the figure, a user making sequential read requests to data previously read has no seek time whatsoever with the present invention, since the data previously read will be retained in cache memory and, therefore, be instantly available without rereading from disk, in a manner to be described in more detail shortly. This is an important distinction of the present invention over the prior art. As will be remembered from the discussion of round robin "optimization" discussed previously, sequentially doing a query of the status of several disk drives is the best that the prior art can accomplish given its "dumb" drive status. Such is not the case with the intelligent storage system of the present invention. The logic knows where each disk 16, 16' has its heads positioned as a result of the last read or write request thereto. These are dynamically changing values maintained by the logic of the CPU for this very purpose. Thus, in the present invention, seek time can be minimized because the logic can keep the drives 16, 16' working in an optimal manner as, for example, by giving priority to requests that require the minimum movement of each drive.
Turning first to Figure 11, the virtual disk drive controller 42 of the present invention as configure( to accomplish the second embodiment as described above sector-assigned disks) is seen as comprising an interface and control portion 56 connected via the computer BUS 46 to a plurality of disk drive interfacing WO 88/09968 PCT/US88/01826 24 portions 48. The dotted line in the figure stands for the proposition that in the tested embodiment of the applicant, two interfacing portions 48 are contained on a single card. The standard interface presently employed by disk drives and disk drive controllers is the so-called "SCSI" interface. As can be seen, to be compatible with the user 54 and its system disk controller 60 on the one hand and the standard disk drives 16 employed in the virtual disk drive 40 on the other hand, the interface and control portion 43 includes a system disk controller interface 62 which presents the proper interface to the system disk controller 60 while the disk drive interfacing portions each terminate in a device controller 60' which presents the proper device interface to the disk drives 16 connected thereto. The type of interface employed is not considered to be a feature of the present invention. The ability to employ any type of interface and the ability to easily reconfigure to a new interface, however, are considered as important features and points of novelty of the present invention over the prior art. Control of the virtual disk drive controller 42 is accomplished by the programming logic contained within the CPU 44. It is preferred that the programming logic performed by the CPU be in the form of firmware residing in read only memory (ROM) but that other methods of accomplishment could be employed if desired.
The interface and control portion 56 also includes what is, because of special use made thereof to be described shortly, designated as "cache memory" 64.
Each of the disk drive interfacing portions 48 have a status controller 66 connected to the computer BUS 46. The device controller 60' previously mentioned is connected between its associated disk drive 16 and the WO 88/lo09968 PCT/US88/01826 25 computer BUS 46. Also connected to the computer BUS 46, in each case, is a data buffer 68 controlLed by a DMA These elements operate in a manner well known in the art and, therefore, no additional description thereof will be provided in the interest of simplicity.
There is also error/recovery logic 72 associated with each disk drive interfacing portion 48. It is this Logic which, as previously mentioned, checks the bits of data being written to the disk for changes by XORing it with the prior data, When changes are found, this information is forwarded to the master error/recovery Logic 74.
Error/recovery bit generation across the individual disk drive interfacing portions 48 according to the algorithm being employed is contained in the master error/recovery Logic 74 which interfaces back to each of the error/recovery logic blocks 72 through a private BUS as well uS to the controller 60' and asseiated components for the error/recovery drive 16' to cause H 20 update the-eof when necessary due to a change in the Scontents of a bit position requiring changing of the corresponding error/recovery bit on the error/recovery drive 16'. Reconstruct Logic 76 iv connected to the computer BUS 46 and, therefore, can Communicate with the master error/recovery Logic 74 as well as with all the other error/recOvery logic blocks 72 and the other system con;ponent. As will be appreciated by those skilled in the art without necessity for a further detailed description, the reconstruct Logic 76 has access to all the information necessary to accomplish reconfiguratior, of Lost data. As mentioned earlier, this is accomplished in the manner suited to the embodiment for data stoadie being impleinented. For example, n the case of sector data orientation across the disk drives 16, 16', in the event of Lost data being S 4 WO 88/09968 PCT/US88/01826 26 detected, the reconstruct Logic would have to read the sectors including the lost data from the variotL drives 16, 16', sense the drive error signal from the status controllers 66 appearing on the computer BUS 46 to determine which drive 16, 16' contains the data in error, and then OXR the data from the bad drive with the error/recovery bits to assure proper reconstruction of Lost data, and then present the reconstructed data to the CPU 44 for use. Note that, as mentioned before, provision is made for recognizing and correcting the error/recovery drive 16' if it is bad, just as with any of the other drives 16. It will be appreciated that simultaneous multiple error detection and correction can be accommodated with additional error correction drives.
Turning now to Figure 14, a final, yet major, aspect of the present invention will now be discussed. The actual writing into and reading from the individual disk drives 16, 16' is via the individual data buffers 68.
Cache memory 64 is a large 3.6 Mbytes in a tested embodiment) memory used for the storage of data going to and coming from the luser CPU 54. It is into the memory 64 that asynchronously read sector lata is moved when the virtual disk drive 40 is operating in the manner as described with respect to Figures 9 and 10. In this regard memory 64 is an asynchronous que for the movement of data to and from the disk drives 16. To maximize the performance increases possible with the present invention, when the user CPU 54 presents a block of data to be written to "disk" the virtual disk '"ive 4 which is transparent to him), the data is moved into an available area of the memory 64 and an immediate acknowledgement made to the user CPU 54. Thus, the user CPU believes that the requested disk write has been accomplished. The actual write to the appropriate disk drive 16 for the sector involved takes place whenever WO 8809968 PCT/US88/01826
I
.j k ki *n Bi 'W T m sw ir ih i- I_ I A I SWO 88/09968 PCT/US88/01826 27 possible thereafter. The logic of the CPU 44 in the interface and control portion 56 asynchronously writes from the memory 64 into the appropriate data buffer 68 when it is next available for a write to disk. In this regard, the logic maximizes the transfers out of the memory 64 without regard to traditional FIFO or LIFO procedures. Rather, it attempts to keep disk transfers maximized by writing out the best data for minimizing seek times and employing disk drives which would otherwise be idle.
In many data base operations, for example, it is quite common to write to disk and and the almost immediate re-access the same data. In such cases, the present invention operates memory 64 as a cache memory as depicted in Figure 14; that is, if a read request is made for data already queued within memory 64 to be written to disk, the actual read request is aborted and the data is presented back to the requestor as having just been read from disk in response to the request. As should be readily apparent to those skilled in the art, without more, the use of the Large asynchronous queing memory 64 would be at odds with the reliability aspects of the present invention. Thus, one cost factor which must be absorbed to fully realize the benefits of the present invention is the use of a battery backup power supply 78 in conjunction with the memory 64 such that in the event of a power failure, the data contained therein will not be Lost. Cache memory in and of itself is not new. However, cache memory configured as in the present invention and operating in the manner as described herein is believed to be novel over the prior art.
Having thus now described several possible data orientations possible with the present invention and the physical structure thereof, the overall manner of operation thereof to maximize performance through the WO 88/09968 PCT/US88/01826 28 use of intelligence in work allocation will now be addressed with particularity. This is important to a complete understanding and appreciation of the difference between prior art disk drives (including the PTD variety) and the highly intelligent, parallel, virtual disk drive system of the present invention. In this regard, in addition to the fact that a plurality of individual disk drives are employed and the fact that detection and reconfiguration of lost data is possible, the most important factor of the present invention is the incorporation of a microcomputer to intelligently and efficiently optimize all the mechanical movements of the individual drives. As can be appreciated, this is a two edged sword; that is, there must be the individual disk drives with their separately posItionable mechanical mechanism and there must be intelligence in the manner in which the drives are positioned. In the present invention, the CPU 44 is able to concurrently allocate the read/write operations to the various disk 16, 16' in the most optimum manner, looking for operations that maximize efficiency. For example, in a conventional disk drive, operations are performed sequentially. By contrast, in the present invention, the intelligence of the logic contained within the CPU 44 is designed to concurrently and asynchronously employ the various drives 16, 16' (and the cache memory 64) to maximize efficiency. ,For example, if drive is at cylinder 13 and there is a request queued for the same drive at a nearby cylinder, the CPU 44 can be programmed to perform that request prior to one requiring that the arm and head assembly move to a more removed position.
Again, the various possibilities for the "intelligence" of the CPU 44 made possible by the unique structure of the virtual disk drive of the present invention providing for true concurrent operation are largely a WO 88/09968 PCT/US88/01826 29 function of the application to which it is applied. In some applications, for example, sequential operation might be a necessity and the above-described example of taking requests out of turn to take advantage of cylinder positioning might not be desirable.
Claims (6)
1. A high-capacity, error-correcting virtual disk drive storage device for a computer comprising: a single interfacing logic portion for interfacing with a user CPU, the interfacing logic portion including disk drive controller interface means for interfacing with a standard disk drive controller through which the user CPU is connected to the virtual disk drive storage device; a plurality of standard disk drives adapted to interface with a standard disk drive controller at an input interface thereof; a plurality of disk drive controllers connected between the interfacing logic portion and respective ones of the plurality of disk drives, each of the disk drive controllers including controller interface means for presenting a standard system disk drive controller interface to the input interface of its respective disk drive, a data buffer for transferring data to and from its respective disk drive, and parity generator means for generating a parity bit for standard length groups of bits being transferred to its respective disk drive; virtual disk drive parity generation means for generating parity bits associated as part of data groupings of bits being transferred to the disk drives; and, reconstruct logic means for sensing a parity error condition from one of the disk drives in combination with a parity error in one of the data groupings of bits as indicated by a parity bit generated by the virtual disk drive parity generation means and for using the knowledge of which of the disk drives' data is in error in combination with the parity bit generated by the virtual disk drive parity generation means to logically reconstruct an erroneous bit creating the joint parity error condition, wherein for assuring high-speed operation of the virtual disk drive storage device: the data buffers in combination comprise FIFO buffer means for holding a plurality of sequential data groupings; and, the disk drive controllers each include data transfer logic for transferring respective ones of a sequence of the data groupings from their respective disk drives into said FIFO data buffer means asynchronously and for informing the standard disk drive controller through which the user CPU is connected to the virtual disk drive storage device that a requested read operation is complete only when there are sufficient ones of the sequence of the data groupings being read from all the disk drives to assure that the user CPU cannot remove the sequence of the data groupings being read from all the disk drives from said FIFO data buffer means faster than remaining ones of the sequence of the data groupings being read from all the disk drives are read asynchronously into said FIFO data buffer means.
2. A high-capacity, error-correcting virtual disk drive storage device to provide high-speed operation thereof of claim 1, wherein:
the interfacing logic portion includes a control portion having CPU means operably connected to the plurality of disk drive controllers for controlling the sequence of operation of the disk drive controllers in writing data to and reading data from the disk drives; and additionally comprising:
a buffer memory into which data to be written to all the disk drives is placed prior to being transferred to a data buffer for transferring the data to an associated disk drive; and wherein,
said CPU means includes logic for immediately acknowledging a write to the associated disk drive upon data to be written being placed in said buffer memory whereby read throughput of the virtual disk drive storage device is increased.
3. A high-capacity, error-correcting virtual disk drive storage device according to claim 1 comprising:
the interfacing logic portion including a control portion having CPU means operably connected to the plurality of disk drive controllers for controlling the sequence of operation of the disk drive controllers in writing data to and reading data from the disk drives; and,
a buffer memory into which data to be written to all the disk drives is placed prior to being transferred to a data buffer for transferring the data to an associated disk drive; and wherein additionally,
said CPU means includes logic for immediately acknowledging a write to the associated disk drive upon data to be written being placed in said buffer memory whereby read throughput of the virtual disk drive storage device is increased.
4. A high-capacity, error-correcting virtual disk drive storage device according to claim 1 comprising:
the interfacing logic portion including a control portion having CPU means operably connected to the plurality of disk drive controllers for controlling the sequence of operation of the disk drive controllers in writing data to and reading data from the disk drives; and,
a buffer memory into which data to be written to all the disk drives is placed prior to being transferred to a data buffer for transferring the data to an associated disk drive; and wherein additionally,
said CPU means includes logic for immediately acknowledging a write to the associated disk drive upon data to be written being placed in said buffer memory and logic for checking data in said buffer memory and indicating it as having been read from an associated disk drive without an actual read thereof when a read request therefor is received from a user whereby read throughput of the virtual disk drive storage device is increased and said buffer memory also acts as a cache memory.
5. A high-capacity, error-correcting virtual disk drive storage device to provide high-speed operation thereof of claim 2 wherein additionally:
said CPU means includes logic for checking data in said buffer memory and indicating it as having been read from an associated disk drive without an actual read thereof when a read request therefor is received from a user whereby said buffer memory also acts as a cache memory in such cases.
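Claims 2 through 5 describe a buffer memory that acknowledges a write as soon as the data lands in it and can answer a later read of the same data without touching a disk, so the buffer doubles as a cache. The sketch below illustrates that behaviour under simple direct-mapped assumptions; the table structure, sizes, and names are hypothetical, and the asynchronous flush to the associated drive is elided.

```c
/* A sketch of the immediate-acknowledge write buffer that also serves
 * as a read cache, per claims 2-5.  Direct-mapped, illustrative only. */
#include <string.h>

#define BLOCK_SIZE 512
#define NSLOTS     64

struct slot {
    long block;                     /* disk block number, -1 = empty */
    char data[BLOCK_SIZE];
} cache[NSLOTS];

void cache_init(void)
{
    for (int i = 0; i < NSLOTS; i++)
        cache[i].block = -1;        /* mark every slot empty */
}

/* The write is acknowledged to the caller on return; the actual
 * transfer to the associated disk drive proceeds asynchronously. */
void buffered_write(long block, const char *buf)
{
    struct slot *s = &cache[block % NSLOTS];
    s->block = block;
    memcpy(s->data, buf, BLOCK_SIZE);
}

/* Serve a read from the buffer when the block is resident. */
int cached_read(long block, char *buf)
{
    struct slot *s = &cache[block % NSLOTS];
    if (s->block == block) {
        memcpy(buf, s->data, BLOCK_SIZE);
        return 1;                   /* hit: no disk access needed */
    }
    return 0;                       /* miss: read from the disk instead */
}
```

The design choice the claims emphasize is that the same memory serves both purposes: acknowledging the write early hides disk latency from the user CPU, and keeping the written data resident lets a subsequent read complete without an actual disk access.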
6. A high-capacity, error-correcting virtual disk drive storage device to provide high-speed operation thereof of claim 1, wherein:
the plurality of disk drive controllers and the interfacing logic portion are interconnected to communicate with one another over a first computer BUS; and,
the controller interface means, the data buffers, and parity generator means of the disk drive controllers are interconnected to communicate with one another over a second computer BUS.

7. A high-capacity, error-correcting virtual disk drive storage device for a computer as claimed in claim 1, substantially as hereinbefore described with reference to the accompanying drawings according to the description and drawings.

Dated this 8th day of October 1990. By their Patent Attorneys GRIFFITH HACK & CO.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US5734787A | 1987-06-02 | 1987-06-02 | |
US057347 | 1987-06-02 | ||
PCT/US1988/001826 WO1988009968A1 (en) | 1987-06-02 | 1988-05-31 | Fault-tolerant, error-correcting storage system |
Publications (2)
Publication Number | Publication Date |
---|---|
AU1803388A AU1803388A (en) | 1989-01-04 |
AU605598B2 true AU605598B2 (en) | 1991-01-17 |
Family
ID=26736375
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU18033/88A Ceased AU605598B2 (en) | 1987-06-02 | 1988-05-31 | Fault-tolerant, error-correcting storage system |
Country Status (1)
Country | Link |
---|---|
AU (1) | AU605598B2 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4228496A (en) * | 1976-09-07 | 1980-10-14 | Tandem Computers Incorporated | Multiprocessor system |
US4410942A (en) * | 1981-03-06 | 1983-10-18 | International Business Machines Corporation | Synchronizing buffered peripheral subsystems to host operations |
US4644545A (en) * | 1983-05-16 | 1987-02-17 | Data General Corporation | Digital encoding and decoding apparatus |
-
1988
- 1988-05-31 AU AU18033/88A patent/AU605598B2/en not_active Ceased
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4228496A (en) * | 1976-09-07 | 1980-10-14 | Tandem Computers Incorporated | Multiprocessor system |
US4410942A (en) * | 1981-03-06 | 1983-10-18 | International Business Machines Corporation | Synchronizing buffered peripheral subsystems to host operations |
US4644545A (en) * | 1983-05-16 | 1987-02-17 | Data General Corporation | Digital encoding and decoding apparatus |
Also Published As
Publication number | Publication date |
---|---|
AU1803388A (en) | 1989-01-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US4942579A (en) | High-speed, high-capacity, fault-tolerant error-correcting storage system | |
EP0294287B1 (en) | Fault-tolerant, error-correcting storage system and method for storing digital information in such a storage system | |
JP2831072B2 (en) | Disk drive memory | |
US5077736A (en) | Disk drive memory | |
US4989206A (en) | Disk drive memory | |
US4989205A (en) | Disk drive memory | |
US5418925A (en) | Fast write I/O handling in a disk array using spare drive for buffering | |
EP0572564B1 (en) | Parity calculation in an efficient array of mass storage devices | |
US5720027A (en) | Redundant disc computer having targeted data broadcast | |
US5442752A (en) | Data storage method for DASD arrays using striping based on file length | |
US5124987A (en) | Logical track write scheduling system for a parallel disk drive array data storage subsystem | |
US5537567A (en) | Parity block configuration in an array of storage devices | |
US5287462A (en) | Bufferless SCSI to SCSI data transfer scheme for disk array applications | |
EP0501818A1 (en) | Fault tolerant data storage apparatus including computer disk drives | |
EP0594464A2 (en) | Method of operating a data storage disk array | |
JPH10149262A (en) | Disk array device and control method therefor | |
US6611897B2 (en) | Method and apparatus for implementing redundancy on data stored in a disk array subsystem based on use frequency or importance of the data | |
AU605598B2 (en) | Fault-tolerant, error-correcting storage system | |
KR100364895B1 (en) | Method of controlling data access and system thereof | |
JP3699797B2 (en) | Disk array device | |
GB2298307A (en) | A disk array with multiport memory |