CN103488582A - Method and device for writing cache memory - Google Patents
- Publication number
- CN103488582A (application CN201310400488.2A)
- Authority
- CN
- China
- Prior art keywords
- stored
- data block
- block
- flash card
- lba
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention provides a method for writing to a cache memory. The method uses a flash memory card (i.e., a FLASH memory) as the cache memory and comprises the following steps: when data are cached, judging whether an old data block whose disk logical block address is the same as that of the data block to be stored is already cached in the flash memory card; if such an old data block is cached, writing the data block to be stored into the flash memory card asynchronously; and if no such old data block is cached, writing the data block to be stored into a free block of the flash memory card asynchronously. Caching of written data is thereby achieved. Because the flash memory card needs no standby battery to retain its contents, the data are not lost on power failure, and energy consumption is low. In addition, a flash memory card is cheaper than a dynamic random access memory, so using the method reduces the implementation cost of the storage system. The invention also provides a device for writing to a cache memory.
Description
Technical field
The present invention relates to the technical field of data storage, and in particular to a method and a device for writing to a cache memory.
Background
A cache memory (cache) is a buffer between the internal storage of a disk system and its external interface. One important role of a cache memory is to act as a write cache that buffers written data: when data are to be stored to disk, they are not written to the disk immediately but are first written temporarily into the cache memory, and a "data written" signal is then returned to the system. The system therefore considers the data written and continues with subsequent work; only when the data accumulated in the cache memory reach a certain amount are they written from the cache memory to the disk. Using a cache memory reduces the number of actual disk operations, which protects the disk from the damage caused by repeated read and write operations; it also shortens the time needed to write data, thereby improving the write performance of the storage system.
At present, however, the write cache is usually a dynamic random access memory (DRAM) protected by a battery backup unit (BBU). The backup battery is expensive, the amount of memory it can support is limited, and keeping the DRAM powered consumes additional energy. Consequently, using a DRAM as the write cache makes the storage system costly to implement.
Summary of the invention
Embodiments of the present invention provide a method for writing to a cache memory, so as to reduce the implementation cost of a storage system.
A first aspect of the present invention provides a method for writing to a cache memory, wherein the cache memory is a flash memory card, and the method comprises:
judging whether an old data block having the same disk logical block address (LBA) as a data block to be stored is cached in the flash memory card;
if so, writing the data block to be stored into the flash memory card asynchronously;
if not, writing the data block to be stored into a free block of the flash memory card asynchronously.
In a first possible implementation of the first aspect, writing the data block to be stored into the free block of the flash memory card comprises:
judging whether a free block exists in the flash memory card;
if a free block exists, writing the data block to be stored into a free block of the flash memory card asynchronously;
if no free block exists, adding the data block to be stored to a waiting queue, and when a free block appears in the flash memory card, performing the step of writing the data block to be stored into the free block of the flash memory card asynchronously.
In a second possible implementation of the first aspect, after the data block to be stored is written into the free block of the flash memory card, the method further comprises:
saving the correspondence between the disk logical block address of the data block to be stored and the flash logical block address of the data block to be stored.
With reference to the second possible implementation of the first aspect, in a third possible implementation of the first aspect, judging whether an old data block having the same disk logical block address as the data block to be stored is cached in the flash memory card comprises:
judging whether a flash logical block address corresponding to the disk logical block address of the data block to be stored exists;
if it exists, determining that an old data block having the same disk logical block address as the data block to be stored is cached in the flash memory card; otherwise, determining that no such old data block is cached in the flash memory card.
With reference to the first aspect or any one of the foregoing implementations of the first aspect, in a fourth possible implementation of the first aspect, the method further comprises:
marking the data block to be stored as a dirty data block, and adding the data block to be stored to a dirty data block queue.
With reference to the fourth possible implementation of the first aspect, in a fifth possible implementation of the first aspect, the method further comprises:
obtaining the current free-space ratio of the flash memory card;
when the free-space ratio is less than a first preset threshold, determining the number of dirty data blocks to be read;
heap-sorting the dirty data blocks in the dirty data block queue by disk logical block address to obtain a first sorted heap;
initiating several first asynchronous requests and, according to the number of dirty data blocks to be read, asynchronously reading dirty data blocks from the flash memory card one by one starting from the head of the first sorted heap;
heap-sorting the dirty data blocks that have been read by disk logical block address to obtain a second sorted heap;
initiating second asynchronous requests and asynchronously writing the read dirty data blocks to the disk one by one starting from the head of the second sorted heap, the number of the second asynchronous requests being the number of dirty data blocks to be read.
With reference to the fifth possible implementation of the first aspect, in a sixth possible implementation of the first aspect, the heap sort is a min-heap sort.
With reference to the fifth possible implementation of the first aspect, in a seventh possible implementation of the first aspect, the heap sort is a max-heap sort.
With reference to the fifth possible implementation of the first aspect, in an eighth possible implementation of the first aspect, the number of the first asynchronous requests is determined according to a first formula:
V = N + (M - N) * R
where V is the number of first asynchronous requests, N is a preset lower limit on the number of asynchronous requests, M is a preset upper limit on the number of asynchronous requests, and R is the current space utilization of the flash memory card.
A second aspect of the present invention provides a device for writing to a cache memory, wherein the cache memory is a flash memory card, and the device comprises:
a judging module, configured to judge whether an old data block having the same disk logical block address as a data block to be stored is cached in the flash memory card;
a first writing module, configured to write the data block to be stored into the flash memory card asynchronously when the judging module judges that such an old data block is cached in the flash memory card;
a second writing module, configured to write the data block to be stored into a free block of the flash memory card asynchronously when the judging module judges that no such old data block is cached in the flash memory card.
In a first possible implementation of the second aspect, the second writing module comprises:
a first judging unit, configured to judge whether a free block exists in the flash memory card;
a first writing unit, configured to write the data block to be stored into a free block of the flash memory card asynchronously when the first judging unit judges that a free block exists in the flash memory card;
a second writing unit, configured to, when the first judging unit judges that no free block exists in the flash memory card, add the data block to be stored to a waiting queue and, when a free block appears in the flash memory card, write the data block to be stored into the free block of the flash memory card asynchronously.
In a second possible implementation of the second aspect, the device further comprises:
a preserving module, configured to, after the second writing module writes the data block to be stored into the free block of the flash memory card, save the correspondence between the disk logical block address of the data block to be stored and the flash logical block address of the data block to be stored.
In a third possible implementation of the second aspect, the judging module comprises:
a second judging unit, configured to judge whether a flash logical block address corresponding to the disk logical block address of the data block to be stored exists;
a third judging unit, configured to, when the second judging unit judges that such a flash logical block address exists, determine that an old data block having the same disk logical block address as the data block to be stored is cached in the flash memory card;
a fourth judging unit, configured to, when the second judging unit judges that no such flash logical block address exists, determine that no old data block having the same disk logical block address as the data block to be stored is cached in the flash memory card.
With reference to the second aspect or any one of the foregoing possible implementations of the second aspect, in a fourth possible implementation of the second aspect, the device further comprises:
a marking module, configured to, after the first writing module or the second writing module writes the data block to be stored into the flash memory card, mark the data block to be stored as a dirty data block and add the data block to be stored to a dirty data block queue.
With reference to the fourth possible implementation of the second aspect, in a fifth possible implementation of the second aspect, the device further comprises:
an obtaining module, configured to obtain the current free-space ratio of the flash memory card;
a determining module, configured to determine the number of dirty data blocks to be read when the free-space ratio is less than a first preset threshold;
a first sorting module, configured to heap-sort the dirty data blocks in the dirty data block queue by disk logical block address to obtain a first sorted heap;
a first reading module, configured to initiate several first asynchronous requests and, according to the number of dirty data blocks to be read, asynchronously read dirty data blocks from the flash memory card one by one starting from the head of the first sorted heap;
a second sorting module, configured to heap-sort the read dirty data blocks by their disk logical block addresses to obtain a second sorted heap;
a third writing module, configured to initiate second asynchronous requests and write the read dirty data blocks to the disk one by one starting from the head of the second sorted heap, the number of the second asynchronous requests being the number of dirty data blocks to be read.
A third aspect of the present invention provides a device for writing to a cache memory, wherein the cache memory is a flash memory card, and the device comprises:
at least one processor, configured to:
judge whether an old data block having the same disk logical block address as a data block to be stored is cached in the flash memory card;
if so, write the data block to be stored into the flash memory card asynchronously;
if not, write the data block to be stored into a free block of the flash memory card asynchronously; and
a memory coupled to the at least one processor.
In a first possible implementation of the third aspect, the at least one processor configured to write the data block to be stored into the free block of the flash memory card is further configured to:
judge whether a free block exists in the flash memory card;
if a free block exists, write the data block to be stored into a free block of the flash memory card asynchronously;
if no free block exists, add the data block to be stored to a waiting queue and, when a free block appears in the flash memory card, perform the step of writing the data block to be stored into the free block of the flash memory card asynchronously.
In a second possible implementation of the third aspect, the at least one processor is further configured to:
after writing the data block to be stored into the free block of the flash memory card, save the correspondence between the disk logical block address of the data block to be stored and the flash logical block address of the data block to be stored.
With reference to the second possible implementation of the third aspect, in a third possible implementation of the third aspect, the at least one processor configured to judge whether an old data block having the same disk logical block address as the data block to be stored is cached in the flash memory card is further configured to:
judge whether a flash logical block address corresponding to the disk logical block address of the data block to be stored exists;
if it exists, determine that an old data block having the same disk logical block address as the data block to be stored is cached in the flash memory card; otherwise, determine that no such old data block is cached in the flash memory card.
With reference to the third aspect or any one of the foregoing possible implementations of the third aspect, in a fourth possible implementation of the third aspect, the at least one processor is further configured to:
after writing the data block to be stored into the flash memory card, mark the data block to be stored as a dirty data block and add the data block to be stored to a dirty data block queue.
With reference to the fourth possible implementation of the third aspect, in a fifth possible implementation of the third aspect, the at least one processor is further configured to:
obtain the current free-space ratio of the flash memory card;
when the free-space ratio is less than a first preset threshold, determine the number of dirty data blocks to be read;
heap-sort the dirty data blocks in the dirty data block queue by disk logical block address to obtain a first sorted heap;
initiate several first asynchronous requests and, according to the number of dirty data blocks to be read, asynchronously read dirty data blocks from the flash memory card one by one starting from the head of the first sorted heap;
heap-sort the read dirty data blocks by disk logical block address to obtain a second sorted heap;
initiate second asynchronous requests and asynchronously write the read dirty data blocks to the disk one by one starting from the head of the second sorted heap, the number of the second asynchronous requests being the number of dirty data blocks to be read.
The method for writing to a cache memory provided by the embodiments of the present invention uses a flash memory card (i.e., a FLASH memory) as the cache memory. When data are cached, it is judged whether an old data block having the same disk logical block address as the data block to be stored is cached in the flash memory card; if so, the data block to be stored is written directly into the flash memory card asynchronously; if not, the data block to be stored is written asynchronously into a free block of the flash memory card. Caching of written data is thereby achieved. Because a flash memory card needs no backup battery to retain its contents, data are not lost after a power failure, and energy consumption is low. Moreover, a flash memory card itself is cheaper than a dynamic random access memory. Therefore, the method for writing to a cache memory provided by the embodiments of the present application reduces the implementation cost of the storage system.
Brief description of the drawings
Fig. 1 is a flowchart of a method for writing to a cache memory provided by an embodiment of the present application;
Fig. 2 is a flowchart of another method for writing to a cache memory provided by an embodiment of the present application;
Fig. 3 is a flowchart of yet another method for writing to a cache memory provided by an embodiment of the present application;
Fig. 4 is a flowchart of yet another method for writing to a cache memory provided by an embodiment of the present application;
Fig. 5 is a schematic diagram of the process of writing data from the cache memory to a disk provided by an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a device for writing to a cache memory provided by an embodiment of the present application;
Fig. 7 is a schematic structural diagram of another device for writing to a cache memory provided by an embodiment of the present application;
Fig. 8 is a schematic structural diagram of yet another device for writing to a cache memory provided by an embodiment of the present application;
Fig. 9 is a schematic structural diagram of yet another device for writing to a cache memory provided by an embodiment of the present application;
Fig. 10 is a schematic structural diagram of yet another device for writing to a cache memory provided by an embodiment of the present application;
Fig. 11 is a schematic structural diagram of yet another device for writing to a cache memory provided by an embodiment of the present application;
Fig. 12 is a schematic structural diagram of yet another device for writing to a cache memory provided by an embodiment of the present application;
Fig. 13 is an architecture diagram of the cache memory in an all-in-one machine system provided by an embodiment of the present application;
Fig. 14 is an architecture diagram of the cache memory in a block-storage disk array provided by an embodiment of the present application.
Detailed description of the embodiments
In order to enable those skilled in the art to further understand the features and technical content of the present invention, reference is made to the following detailed description and the accompanying drawings. The accompanying drawings are provided for reference and explanation only and are not intended to limit the present invention.
Please refer to Fig. 1, which is a flowchart of a method for writing to a cache memory provided by an embodiment of the present application, wherein the cache memory is a flash memory card, i.e., a FLASH memory. The method comprises:
Step S101: judging whether an old data block having the same disk logical block address as a data block to be stored is cached in the flash memory card; if so, performing step S102; otherwise, performing step S103.
Block storage is the most traditional storage format, and the embodiments of the present application are based on the block storage technique: data are stored in the cache memory and on the disk in the form of data blocks.
In the embodiments of the present application, the flash memory card is divided into a number of logical blocks.
The disk logical block address of the data block to be stored is information carried in the write request and indicates the storage location of the data block to be stored on the disk.
In the embodiments of the present application, after a write request is received, it is first judged whether an old data block having the same disk logical block address as the data block to be stored already exists in the cache memory, i.e., whether the flash memory card contains a data block whose disk logical block address is the same as that of the data block to be stored.
Step S102: writing the data block to be stored into the flash memory card asynchronously.
In the embodiments of the present application, if an old data block having the same logical block address as the data block to be stored exists in the flash memory card, an asynchronous write operation is initiated to the flash memory card directly, regardless of whether a free block exists, and the data to be stored are written into the flash memory card.
It should be noted that, in the embodiments of the present application, a free block is a free block in the logical sense. Therefore, even when no free block exists, free space may still exist in the physical sense, and the data block to be stored can still be written into the flash memory card. Furthermore, when there is neither a free block nor free physical space, the flash memory card performs an erase procedure; the specific erase procedure is common knowledge in the art and is not described further here.
Step S103: writing the data block to be stored into a free block of the flash memory card asynchronously.
In the embodiments of the present application, if no old data block having the same logical block address as the data block to be stored exists in the flash memory card, an asynchronous write operation is initiated to the flash memory card when a free block exists in the flash memory card, and the data to be stored are written into the flash memory card.
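For illustration only, the write path of steps S101 to S103, together with the waiting queue and the address mapping described further below, can be sketched as follows. The class, helper names and the use of Python are assumptions made for this sketch and are not part of the claimed method.

```python
from collections import deque

class FlashWriteCache:
    """Sketch of the write path of steps S101-S103 (names are illustrative)."""

    def __init__(self, free_flash_lbas):
        self.dlba_to_flba = {}                     # disk LBA -> flash LBA correspondence
        self.free_blocks = deque(free_flash_lbas)  # free blocks of the flash memory card
        self.waiting_queue = deque()               # data blocks waiting for a free block
        self.dirty_queue = deque()                 # dirty data block queue

    def write(self, disk_lba, data):
        old_flba = self.dlba_to_flba.get(disk_lba)
        if old_flba is not None:
            # S102: an old block with the same disk LBA is cached -> overwrite it asynchronously
            self.async_write_flash(old_flba, data)
        elif self.free_blocks:
            # S103: no old block cached and a free block exists -> write into the free block
            flba = self.free_blocks.popleft()
            self.async_write_flash(flba, data)
            self.dlba_to_flba[disk_lba] = flba     # save the disk LBA / flash LBA correspondence
        else:
            # no free block yet: park the block in the waiting queue until one appears
            self.waiting_queue.append((disk_lba, data))
            return
        if disk_lba not in self.dirty_queue:
            self.dirty_queue.append(disk_lba)      # mark the block as dirty

    def async_write_flash(self, flash_lba, data):
        """Placeholder for the asynchronous write to the flash memory card."""
        pass
```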
In the method for writing to a cache memory provided by the embodiments of the present application, a flash memory card (i.e., a FLASH memory) is used as the cache memory. When data are cached, it is judged whether an old data block having the same disk logical block address as the data block to be stored is cached in the flash memory card; if so, the data block to be stored is written directly into the flash memory card asynchronously; if not, the data block to be stored is written asynchronously into a free block of the flash memory card. Caching of written data is thereby achieved. Because a flash memory card needs no backup battery, data are not lost after a power failure and energy consumption is low. Moreover, a flash memory card is cheaper than a dynamic random access memory, so the method reduces the implementation cost of the storage system; in other words, for the same cost the embodiments of the present application can provide a larger cache capacity.
In addition, a FLASH memory can be used with current mainstream hardware platforms: it imposes no restriction on the operating system or the hardware platform, as long as a PCI-E slot is available.
In the above embodiment, preferably, a log-style data layout may be used when the data block to be stored is written into the flash memory card, i.e., data blocks to be stored are stored successively in order of increasing flash logical block address. This effectively ensures the continuity of data I/O, so that when the flash memory card performs garbage collection there are few valid data to be moved. This reduces the write amplification of the flash memory card (i.e., reduces the number of writes to the flash memory card) and thus avoids the shortening of the flash memory card's service life caused by writes.
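A log-style layout can be approximated by an allocator that hands out flash logical block addresses strictly in ascending order, as in the following sketch; the names and the reuse policy after the log fills up are assumptions for illustration only.

```python
class LogStyleAllocator:
    """Hand out flash LBAs in increasing order so writes stay sequential (illustrative)."""

    def __init__(self, total_blocks):
        self.total_blocks = total_blocks
        self.next_flba = 0       # tail of the log: next flash LBA to hand out
        self.reclaimed = []      # flash LBAs freed after their dirty blocks were flushed to disk

    def allocate(self):
        if self.next_flba < self.total_blocks:
            flba = self.next_flba
            self.next_flba += 1
            return flba
        if self.reclaimed:
            # after the log is full, reuse reclaimed blocks, still in ascending order
            self.reclaimed.sort()
            return self.reclaimed.pop(0)
        return None              # no free block: the caller parks the request in the waiting queue
```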
Fig. 2 is a flowchart of another method for writing to a cache memory provided by an embodiment of the present application. A specific implementation of step S103 may comprise:
Step S201: judging whether a free block exists in the flash memory card; if so, performing step S202; otherwise, performing step S203.
Step S202: writing the data block to be stored into a free block of the flash memory card asynchronously.
Step S203: adding the data block to be stored to a waiting queue.
Step S204: judging whether a free block has appeared in the flash memory card; if so, performing step S202; otherwise, continuing to wait.
In the embodiments of the present application, when no old data block having the same disk logical block address as the data block to be stored is cached in the flash memory card, it is first judged whether a free block exists in the flash memory card. If a free block exists, an asynchronous write operation is initiated to the flash memory card directly and the data block to be stored is written into the flash memory card. If no free block exists, the method waits; only when a free block appears in the flash memory card is an asynchronous write operation initiated and the data block to be stored written into the free block of the flash memory card.
In the above embodiment, after the data block to be stored is written into the free block of the flash memory card, the method may further comprise:
saving the correspondence between the disk logical block address of the data block to be stored and the flash logical block address of the data block to be stored.
Here, the flash logical block address of the data block to be stored is the logical block address of the data block to be stored within the flash memory card. That is, when no old data block corresponding to the disk logical block address of the data block to be stored is cached in the flash memory card, after the data block to be stored is written into a free block of the flash memory card, the correspondence between its disk logical block address and its logical block address within the flash memory card is saved.
Specifically, the correspondence between the disk logical block address and the flash logical block address of the data block to be stored can be kept in a hash table, as shown in Table 1:
Table 1 Hash table
In Table 1, each index value (Hn, n = 1, 2, 3, ...) is obtained from a disk logical block address through a hash algorithm. DLBAm (m = 1, 2, 3, ...) denotes a disk logical block address and FLBAp (p = 1, 2, 3, ...) denotes a flash logical block address. Because different disk logical block addresses may hash to the same index value, one index value may correspond to several correspondences between disk logical block addresses and flash logical block addresses (as shown in Table 1, index H1 corresponds to two such correspondences). In the embodiments of the present application, in order to reduce collisions (i.e., to reduce the number of correspondences under the same index in the hash table), the number of indexes is set to 1.5 times the number of logical block addresses in the flash memory card.
The hash algorithm may be as follows: multiply the disk logical block address by an unsigned 32-bit hexadecimal integer, for example 9e370001, and then take the product modulo the number of logical block addresses in the flash memory card; the result is the hash value (i.e., the index value) corresponding to the disk logical block address. Because the disk logical block address is a 32-bit hexadecimal number, the index value corresponding to the disk logical block address is also a 32-bit hexadecimal number. Specifically, the index value can be determined according to a second formula:
H = (DLBA * 0x9e370001UL) % N
where H denotes the index value corresponding to the disk logical block address DLBA, 0x9e370001UL is an unsigned 32-bit hexadecimal integer, "%" denotes the modulo operation with respect to N, and N denotes the number of logical block addresses in the flash memory card.
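A minimal sketch of this hash table is given below; the chained buckets and the class and function names are assumptions of this sketch, and only the second formula and the 1.5x index sizing come from the text above.

```python
MULTIPLIER = 0x9e370001          # the unsigned 32-bit constant of the second formula

class AddressMap:
    """Hash table of disk-LBA -> flash-LBA correspondences (illustrative sketch)."""

    def __init__(self, flash_lba_count):
        self.n = flash_lba_count                 # N: number of LBAs in the flash memory card
        # the number of indexes is 1.5 times the number of flash LBAs to reduce collisions,
        # so every value of H computed below is a valid bucket index
        self.buckets = [[] for _ in range(int(flash_lba_count * 1.5))]

    def index(self, dlba):
        # H = (DLBA * 0x9e370001UL) % N, truncated to 32 bits to mimic unsigned arithmetic
        return ((dlba * MULTIPLIER) & 0xFFFFFFFF) % self.n

    def save(self, dlba, flba):
        bucket = self.buckets[self.index(dlba)]
        for i, (d, _) in enumerate(bucket):
            if d == dlba:
                bucket[i] = (dlba, flba)         # refresh an existing correspondence
                return
        bucket.append((dlba, flba))              # several DLBAs may share one index (chaining)

    def lookup(self, dlba):
        for d, f in self.buckets[self.index(dlba)]:
            if d == dlba:
                return f
        return None                              # no old data block cached for this disk LBA
```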
In the above embodiment, preferably, judging whether an old data block having the same disk logical block address as the data block to be stored is cached in the flash memory card may comprise:
judging whether a flash logical block address corresponding to the disk logical block address of the data block to be stored exists;
if it exists, determining that an old data block having the same disk logical block address as the data block to be stored is cached in the flash memory card; otherwise, determining that no such old data block is cached in the flash memory card.
That is, the embodiments of the present application determine whether an old data block having the same disk logical block address as the data block to be stored is cached in the flash memory card by means of the correspondence between disk logical block addresses and flash logical block addresses, which increases the speed of writing data.
Fig. 3 is a flowchart of yet another method for writing to a cache memory provided by an embodiment of the present application. After the data block to be stored is written into the flash memory card, the method may further comprise:
Step S301: marking the data block to be stored as a dirty data block, and adding the data block to be stored to a dirty data block queue.
Fig. 4 is a flowchart of yet another method for writing to a cache memory provided by an embodiment of the present application. In order to guarantee the effectiveness of the storage system, when the amount of data stored in the cache memory reaches a certain level, the data in the cache memory need to be written to the disk. Therefore, the method for writing to a cache memory provided by the embodiments of the present application may further comprise:
Step S401: obtaining the current free-space ratio of the flash memory card.
In the embodiments of the present application, the space in the flash memory card other than the space occupied by dirty data blocks is called free space; the current free-space ratio of the flash memory card is therefore the proportion of the total space of the flash memory card that is currently free.
Step S402: judging whether the free-space ratio is less than a first preset threshold; if so, performing step S403; otherwise, performing step S401.
The first preset threshold may be 20%; of course, it may also be set according to actual needs and is not specifically limited here.
Step S403: determining the number of dirty data blocks to be read.
The number of dirty data blocks to be read is the number of data blocks to be written from the cache memory to the disk. It may be determined according to the free-space ratio, for example so that after the dirty data blocks have been read and flushed, the free-space ratio of the flash memory card is greater than a second preset threshold. The second preset threshold may be equal to the first preset threshold or, of course, greater than it; it is not specifically limited here.
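One possible way to derive this count from the free-space ratio is sketched below; it assumes equal-sized blocks, so that flushing one dirty block frees exactly one flash block, which is an assumption of this sketch and not a statement of the patent.

```python
import math

def dirty_blocks_to_read(total_blocks, free_blocks, second_threshold):
    """Number of dirty blocks to flush so that the free-space ratio reaches at least
    the second preset threshold (illustrative; equal-sized blocks assumed)."""
    target_free = math.ceil(total_blocks * second_threshold)
    return max(target_free - free_blocks, 0)   # flushing one dirty block frees one flash block
```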
Step S404: heap-sorting the dirty data blocks in the dirty data block queue by disk logical block address to obtain a first sorted heap.
In order to improve the efficiency of writing data from the cache memory to the disk, the embodiments of the present application heap-sort the dirty data blocks in the dirty data block queue by disk logical block address. The heap sort may be a min-heap sort, in which case the head of the first sorted heap is the dirty data block with the smallest disk logical block address; it may also be a max-heap sort, in which case the head of the first sorted heap is the dirty data block with the largest disk logical block address.
Which sorting mode to adopt can be determined according to the characteristics of the disk: if the disk favours sequential access in order of increasing disk logical block address, min-heap sorting may be chosen; if the disk favours sequential access in order of decreasing disk logical block address, max-heap sorting may be chosen.
In this step, all dirty data blocks in the dirty data block queue are heap-sorted, so this step may also be referred to as global heap sorting.
Step S405: initiating several first asynchronous requests and, according to the number of dirty data blocks to be read, asynchronously reading dirty data blocks from the flash memory card one by one starting from the head of the first sorted heap.
The number of the first asynchronous requests may be a preset value.
Step S406: heap-sorting the read dirty data blocks by their disk logical block addresses to obtain a second sorted heap.
In this step, only the dirty data blocks that have been read are heap-sorted, so this step may also be referred to as local heap sorting. It should be noted that the two heap sorts use the same sorting mode: either both are min-heap sorts or both are max-heap sorts.
Step S407: initiating second asynchronous requests and asynchronously writing the read dirty data blocks to the disk one by one starting from the head of the second sorted heap, the number of the second asynchronous requests being the number of dirty data blocks to be read.
After the read dirty data blocks have been written to the disk, free space becomes available in the flash memory card, and the data blocks to be stored in the waiting queue can then be written into the flash memory card.
It should be noted that, during min-heap sorting, if the logical block address of a dirty data block newly added to the dirty data block queue is smaller than the logical block address of the data block at the heap head, the newly added dirty data block is placed into a dirty-data-block waiting list associated with the sorted heap; when all dirty data blocks in the heap have been read, the dirty data blocks in this waiting list are heap-sorted again, so that the data blocks read from the flash memory card are always in increasing order. If the logical block address of the newly added dirty data block is greater than that of the heap-head data block, it is inserted directly into the sorted heap and participates in the min-heap sort.
Similarly, during max-heap sorting, if the logical block address of a dirty data block newly added to the dirty data block queue is greater than the logical block address of the data block at the heap head, the newly added dirty data block is placed into the dirty-data-block waiting list associated with the sorted heap; when all dirty data blocks in the heap have been read, the dirty data blocks in the waiting list are heap-sorted again, so that the data blocks read from the flash memory card are always in decreasing order. If the logical block address of the newly added dirty data block is smaller than that of the heap-head data block, it is inserted directly into the sorted heap and participates in the max-heap sort.
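As a rough sketch of steps S404 to S407 in their min-heap variant, the code below uses Python's heapq module as the heap. The I/O helpers are hypothetical, the reads are shown synchronously for brevity, and the handling of late-arriving dirty blocks is reduced to a comment.

```python
import heapq

def flush_dirty_blocks(dirty_queue, to_read, read_from_flash, write_to_disk):
    """Min-heap variant of steps S404-S407 (illustrative sketch).

    dirty_queue     -- disk LBAs of the blocks in the dirty data block queue
    to_read         -- number of dirty data blocks to read in this round
    read_from_flash -- hypothetical helper issuing the first asynchronous requests
    write_to_disk   -- hypothetical helper issuing the second asynchronous requests
    """
    # S404: global heap sort of the whole dirty data block queue by disk LBA
    first_heap = list(dirty_queue)
    heapq.heapify(first_heap)

    # S405: read `to_read` dirty blocks one by one, starting from the heap head
    read_blocks = {}
    while first_heap and len(read_blocks) < to_read:
        dlba = heapq.heappop(first_heap)
        read_blocks[dlba] = read_from_flash(dlba)
    # a dirty block arriving now with a smaller LBA than the heap head would go into the
    # per-heap waiting list and be heap-sorted only in the next round (see the note above)

    # S406: local heap sort of only the blocks that were actually read
    second_heap = list(read_blocks)
    heapq.heapify(second_heap)

    # S407: write the read blocks to the disk in increasing disk-LBA order
    while second_heap:
        dlba = heapq.heappop(second_heap)
        write_to_disk(dlba, read_blocks[dlba])

    return first_heap   # dirty blocks that remain cached in the flash memory card
```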
The process of writing data from the cache memory to the disk provided by this embodiment is illustrated below. Please refer to Fig. 5, which is a schematic diagram of the process of writing data from the cache memory to the disk provided by an embodiment of the present application.
In this example there are 10 dirty data blocks in the dirty data block queue; the label on each dirty data block in the figure is its disk logical block address, and the number of dirty data blocks to be read is 5. When the free-space ratio of the flash memory card falls below the first preset threshold, these 10 dirty data blocks are min-heap sorted to obtain the first sorted heap, whose head is the dirty data block with disk logical block address 23; the disk logical block addresses of the following dirty data blocks are 32, 33, 44, 48, 79, 95, 158, 189 and 789. Then the five dirty data blocks with the smallest disk logical block addresses are read asynchronously starting from the head of the first sorted heap; in this example the disk logical block addresses of the five blocks read are 44, 32, 23, 33 and 48. These five read dirty data blocks are then min-heap sorted again to obtain the second sorted heap, whose head is still the dirty data block with disk logical block address 23, the following blocks having disk logical block addresses 32, 33, 44 and 48. Finally, starting from the head of the second sorted heap, these five dirty data blocks are written to the disk asynchronously.
In the embodiments of the present application, when data in the cache memory are written to the disk, global heap sorting and local heap sorting are used, and a waiting list is maintained for both the global sorted heap and the local sorted heap (the global sorted heap corresponds to the dirty data block queue, and the local sorted heap corresponds to the dirty-data-block waiting list). This guarantees the ordering of data blocks when they are written from the cache memory to the disk, improves the efficiency of writing data from the cache memory to the disk, and improves flushing efficiency.
In the above embodiment, preferably, the number of the first asynchronous requests may also be determined as follows:
The number of the first asynchronous requests may be determined according to a preset upper limit and lower limit on the number of asynchronous requests and the space utilization of the cache memory. Suppose the lower limit on the number of first asynchronous requests is N, so that at least N first asynchronous requests are initiated; the upper limit is M, so that at most M first asynchronous requests are initiated; and the space utilization of the cache memory is R. Then the number of first asynchronous requests can be determined according to the first formula:
V = N + (M - N) * R
where V is the number of first asynchronous requests and R = 1 - P, P being the free-space ratio of the flash memory card.
Specifically, the value of N may be 4 and the value of M may be 256.
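For illustration, the first formula with the example values N = 4 and M = 256 can be evaluated as follows; the function name and the rounding to an integer are assumptions of this sketch.

```python
def first_async_request_count(free_ratio, n_min=4, n_max=256):
    """V = N + (M - N) * R, where R = 1 - free-space ratio of the flash card (sketch)."""
    r = 1.0 - free_ratio                    # R: current space utilization of the cache memory
    return int(n_min + (n_max - n_min) * r)

# example: with 15% free space (R = 0.85) about 218 first asynchronous requests are initiated
```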
In the embodiments of the present application, the number of first asynchronous requests is adjusted dynamically according to the space utilization of the cache memory: when the space utilization of the cache memory is low, fewer asynchronous requests operate on the disk, and when the space utilization is high, more asynchronous requests operate on the disk. Since more asynchronous requests mean a higher speed but also more disk operations, the embodiments of the present application ensure that when space utilization is high (i.e., space is tight) the data in the cache memory are written to the disk quickly so that cache space is reclaimed quickly, while when space utilization is low (i.e., space is plentiful) the number of disk operations is reduced, thereby reducing the wear on the disk caused by reads and writes.
Corresponding to the method embodiments, an embodiment of the present application provides a device for writing to a cache memory, the schematic structure of which is shown in Fig. 6, wherein the cache memory is a flash memory card. The device may comprise a judging module 601, a first writing module 602 and a second writing module 603, wherein the judging module 601 is configured to judge whether an old data block having the same disk logical block address as a data block to be stored is cached in the flash memory card;
the first writing module 602 is configured to write the data block to be stored into the flash memory card asynchronously when the judging module 601 judges that an old data block having the same disk logical block address as the data block to be stored is cached in the flash memory card;
the second writing module 603 is configured to write the data block to be stored into a free block of the flash memory card asynchronously when the judging module 601 judges that no old data block having the same disk logical block address as the data block to be stored is cached in the flash memory card.
Fig. 7 shows the schematic structure of another device for writing to a cache memory provided by an embodiment of the present application, in which the second writing module 603 may comprise:
a first judging unit 701, a first writing unit 702 and a second writing unit 703;
the first judging unit 701 is configured to judge whether a free block exists in the flash memory card;
the first writing unit 702 is configured to write the data block to be stored into a free block of the flash memory card asynchronously when the first judging unit 701 judges that a free block exists in the flash memory card;
the second writing unit 703 is configured to, when the first judging unit 701 judges that no free block exists in the flash memory card, add the data block to be stored to a waiting queue and, when a free block appears in the flash memory card, write the data block to be stored into the free block of the flash memory card asynchronously.
Fig. 8 shows the schematic structure of yet another device for writing to a cache memory provided by an embodiment of the present application. It may further comprise a preserving module configured to, after the second writing module 603 writes the data block to be stored into the free block of the flash memory card, save the correspondence between the disk logical block address of the data block to be stored and the flash logical block address of the data block to be stored.
On the basis of the embodiment illustrated in Fig. 8, Fig. 9 shows the schematic structure of yet another device for writing to a cache memory provided by an embodiment of the present application, in which the judging module 601 may comprise:
a second judging unit 901, a third judging unit 902 and a fourth judging unit 903;
the second judging unit 901 is configured to judge whether a flash logical block address corresponding to the disk logical block address of the data block to be stored exists;
the third judging unit 902 is configured to, when the second judging unit 901 judges that a flash logical block address corresponding to the disk logical block address of the data block to be stored exists, determine that an old data block having the same disk logical block address as the data block to be stored is cached in the flash memory card;
the fourth judging unit 903 is configured to, when the second judging unit 901 judges that no flash logical block address corresponding to the disk logical block address of the data block to be stored exists, determine that no old data block having the same disk logical block address as the data block to be stored is cached in the flash memory card.
Fig. 10 shows the schematic structure of yet another device for writing to a cache memory provided by an embodiment of the present application. It may further comprise a marking module configured to, after the first writing module 602 or the second writing module 603 writes the data block to be stored into the flash memory card, mark the data block to be stored as a dirty data block and add the data block to be stored to a dirty data block queue.
Fig. 11 shows the schematic structure of yet another device for writing to a cache memory provided by an embodiment of the present application. It may further comprise an obtaining module, configured to obtain the current free-space ratio of the flash memory card; a determining module, configured to determine the number of dirty data blocks to be read when the free-space ratio is less than the first preset threshold; and in addition:
a first sorting module 1103, configured to heap-sort the dirty data blocks in the dirty data block queue by disk logical block address to obtain a first sorted heap;
a first reading module 1104, configured to initiate several first asynchronous requests and, according to the number of dirty data blocks to be read, asynchronously read dirty data blocks from the flash memory card one by one starting from the head of the first sorted heap;
a second sorting module 1105, configured to heap-sort the read dirty data blocks by their disk logical block addresses to obtain a second sorted heap;
a third writing module 1106, configured to initiate second asynchronous requests and write the read dirty data blocks to the disk one by one starting from the head of the second sorted heap, the number of the second asynchronous requests being the number of dirty data blocks to be read.
Fig. 12 shows the schematic structure of yet another device for writing to a cache memory provided by an embodiment of the present application, wherein the cache memory is a flash memory card. The device may comprise:
at least one processor 1201, and a memory 1202 coupled to the at least one processor;
the at least one processor is configured to:
judge whether an old data block having the same disk logical block address as a data block to be stored is cached in the flash memory card;
if so, write the data block to be stored into the flash memory card asynchronously;
if not, write the data block to be stored into a free block of the flash memory card asynchronously.
In the above embodiment, preferably, the at least one processor configured to write the data block to be stored into the free block of the flash memory card may further be configured to:
judge whether a free block exists in the flash memory card;
if a free block exists, write the data block to be stored into a free block of the flash memory card asynchronously;
if no free block exists, add the data block to be stored to a waiting queue and, when a free block appears in the flash memory card, perform the step of writing the data block to be stored into the free block of the flash memory card asynchronously.
In the above embodiment, preferably, the at least one processor may further be configured to:
after writing the data block to be stored into the free block of the flash memory card, save the correspondence between the disk logical block address of the data block to be stored and the flash logical block address of the data block to be stored.
In the above embodiment, preferably, the at least one processor configured to judge whether an old data block having the same disk logical block address as the data block to be stored is cached in the flash memory card may further be configured to:
judge whether a flash logical block address corresponding to the disk logical block address of the data block to be stored exists;
if it exists, determine that an old data block having the same disk logical block address as the data block to be stored is cached in the flash memory card; otherwise, determine that no such old data block is cached in the flash memory card.
In the above embodiment, preferably, the at least one processor may further be configured to:
after writing the data block to be stored into the flash memory card, mark the data block to be stored as a dirty data block and add the data block to be stored to a dirty data block queue.
In the above embodiment, preferably, the at least one processor may further be configured to:
obtain the current free-space ratio of the flash memory card;
when the free-space ratio is less than a first preset threshold, determine the number of dirty data blocks to be read;
heap-sort the dirty data blocks in the dirty data block queue by disk logical block address to obtain a first sorted heap;
initiate several first asynchronous requests and, according to the number of dirty data blocks to be read, asynchronously read dirty data blocks from the flash memory card one by one starting from the head of the first sorted heap;
heap-sort the read dirty data blocks by disk logical block address to obtain a second sorted heap;
initiate second asynchronous requests and asynchronously write the read dirty data blocks to the disk one by one starting from the head of the second sorted heap, the number of the second asynchronous requests being the number of dirty data blocks to be read.
The embodiments of the present application can be applied to the cache architecture of an all-in-one machine system. Unlike the prior-art cache architecture of an all-in-one machine system, the all-in-one machine system provided by the embodiments of the present application uses a FLASH memory instead of an NVDIMM (Non-Volatile DIMM, a kind of non-volatile memory) as the cache memory, as shown in Fig. 13.
In this cache architecture of the all-in-one machine, the cache does not reside in the service processing module but in each storage node. The function of the service processing module is the same as in the prior art: it is mainly used to distribute and process I/O (input/output) requests. Through the distribution performed by the service processing module, an I/O request can find the storage node on which the storage area it accesses resides.
A storage node is the processing unit of I/O requests. Each storage node contains its own independent CPU, its own independent cache and one or more disks; the number of disks and the cache size of each storage node are the same, and all storage nodes together form the storage pool of the whole storage system.
In the embodiments of the present application, a write request to the all-in-one machine system is routed by the service processing module to a storage node, and the CPU in the storage node writes the request into the FLASH memory using the method for writing to a cache memory provided by the embodiments of the present application and then returns a "write succeeded" message. When the amount of data written to the FLASH memory reaches a certain level, the data in the FLASH memory are flushed to the disk, again by the method for writing to a cache memory provided by the embodiments of the present application, so as to reclaim space on the FLASH memory for subsequent write requests.
The embodiments of the present application can also be used for a block-storage disk array. Unlike a prior-art block-storage disk array, the array storage system provided by the embodiments of the present application uses a FLASH memory instead of an NVRAM (Non-Volatile Random Access Memory) as the cache memory, as shown in Fig. 14.
A block-storage disk array usually adopts a "one head unit plus N enclosures" architecture, in which a head unit responsible for service processing and N multi-disk enclosures form the whole storage system. The cache memory resides in the head unit of the storage system and provides the I/O caching function for the disks in the expansion enclosures behind it; the head unit is generally connected to the disk enclosures by a SAS bus. In the embodiments of the present application, the CPU of the head unit implements the function of writing to the cache memory.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The above description of the disclosed embodiments enables those skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the embodiments described above do not limit the scope of the present invention; any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present invention shall fall within the protection scope of the claims of the present invention.
Claims (21)
1. A method for writing to a cache memory, wherein the cache memory is a flash card, the method comprising:
judging whether the flash card caches an old data block whose disk logical block address (LBA) is the same as that of a data block to be stored;
if so, writing the data block to be stored into the flash card asynchronously;
if not, writing the data block to be stored into a free block of the flash card asynchronously.
2. The method according to claim 1, wherein writing the data block to be stored into a free block of the flash card comprises:
judging whether a free block exists in the flash card;
if so, writing the data block to be stored into the free block of the flash card asynchronously;
if not, adding the data block to be stored to a waiting queue, and when a free block becomes available in the flash card, performing the step of writing the data block to be stored into the free block of the flash card asynchronously.
3. The method according to claim 1, further comprising, after writing the data block to be stored into the free block of the flash card:
saving the correspondence between the disk LBA of the data block to be stored and the flash LBA of the data block to be stored.
4. The method according to claim 3, wherein judging whether the flash card caches an old data block with the same disk LBA as the data block to be stored comprises:
judging whether a flash LBA corresponding to the disk LBA of the data block to be stored exists;
if so, determining that the flash card caches an old data block with the same disk LBA as the data block to be stored; otherwise, determining that the flash card does not cache such an old data block.
5. The method according to any one of claims 1 to 4, further comprising, after writing the data block to be stored into the flash card:
marking the data block to be stored as a dirty data block, and adding the data block to be stored to a dirty-data-block queue.
6. The method according to claim 5, further comprising:
obtaining the current free-space ratio of the flash card;
when the free-space ratio is below a first preset threshold, determining the number of dirty data blocks to be read;
heap-sorting the dirty data blocks in the dirty-data-block queue by disk LBA to obtain a first sorted heap;
initiating several first asynchronous requests and, according to the number of dirty data blocks to be read, asynchronously reading dirty data blocks from the flash card in order starting from the head of the first sorted heap;
heap-sorting the dirty data blocks that have been read by their disk LBAs to obtain a second sorted heap;
initiating second asynchronous requests and asynchronously writing the read dirty data blocks to disk in order starting from the head of the second sorted heap, wherein the number of second asynchronous requests equals the number of dirty data blocks to be read.
7. The method according to claim 6, wherein the heapsort is a min-heap sort.
8. The method according to claim 6, wherein the heapsort is a max-heap sort.
9. The method according to claim 6, wherein the number of first asynchronous requests is determined according to a first formula:
V = N + (M - N) * R
wherein V is the number of first asynchronous requests, N is the preset lower limit of the number of asynchronous requests, M is the preset upper limit of the number of asynchronous requests, and R is the current space utilization ratio of the flash card.
10. A device for writing to a cache memory, wherein the cache memory is a flash card, the device comprising:
a judging module, configured to judge whether the flash card caches an old data block whose disk LBA is the same as that of a data block to be stored;
a first writing module, configured to write the data block to be stored into the flash card asynchronously when the judging module determines that the flash card caches an old data block with the same disk LBA as the data block to be stored;
a second writing module, configured to write the data block to be stored into a free block of the flash card asynchronously when the judging module determines that the flash card does not cache an old data block with the same disk LBA as the data block to be stored.
11. The device according to claim 10, wherein the second writing module comprises:
a first judging unit, configured to judge whether a free block exists in the flash card;
a first read/write unit, configured to write the data block to be stored into the free block of the flash card asynchronously when the first judging unit determines that a free block exists in the flash card;
a second read/write unit, configured to add the data block to be stored to a waiting queue when the first judging unit determines that no free block exists in the flash card, and to write the data block to be stored into a free block of the flash card asynchronously when a free block becomes available in the flash card.
12. The device according to claim 10, further comprising:
a saving module, configured to save, after the second writing module writes the data block to be stored into the free block of the flash card, the correspondence between the disk LBA of the data block to be stored and the flash LBA of the data block to be stored.
13. The device according to claim 12, wherein the judging module comprises:
a second judging unit, configured to judge whether a flash LBA corresponding to the disk LBA of the data block to be stored exists;
a third judging unit, configured to determine that the flash card caches an old data block with the same disk LBA as the data block to be stored when the second judging unit determines that a flash LBA corresponding to the disk LBA of the data block to be stored exists;
a fourth judging unit, configured to determine that the flash card does not cache an old data block with the same disk LBA as the data block to be stored when the second judging unit determines that no such flash LBA exists.
14. The device according to any one of claims 10 to 13, further comprising:
a marking module, configured to mark the data block to be stored as a dirty data block and add it to a dirty-data-block queue after the first writing module or the second writing module writes the data block to be stored into the flash card.
15. The device according to claim 14, further comprising:
an obtaining module, configured to obtain the current free-space ratio of the flash card;
a determining module, configured to determine the number of dirty data blocks to be read when the free-space ratio is below a first preset threshold;
a first sorting module, configured to heap-sort the dirty data blocks in the dirty-data-block queue by disk LBA to obtain a first sorted heap;
a first reading module, configured to initiate several first asynchronous requests and, according to the number of dirty data blocks to be read, asynchronously read dirty data blocks from the flash card in order starting from the head of the first sorted heap;
a second sorting module, configured to heap-sort the dirty data blocks that have been read by their disk LBAs to obtain a second sorted heap;
a third writing module, configured to initiate second asynchronous requests and write the read dirty data blocks to disk in order starting from the head of the second sorted heap, wherein the number of second asynchronous requests equals the number of dirty data blocks to be read.
16. A device for writing to a cache memory, wherein the cache memory is a flash card, the device comprising:
at least one processor, configured to:
judge whether the flash card caches an old data block whose disk LBA is the same as that of a data block to be stored;
if so, write the data block to be stored into the flash card asynchronously; and
if not, write the data block to be stored into a free block of the flash card asynchronously; and
a memory coupled to the at least one processor.
17. The device according to claim 16, wherein the at least one processor configured to write the data block to be stored into the free block of the flash card is further configured to:
judge whether a free block exists in the flash card;
if so, write the data block to be stored into the free block of the flash card asynchronously;
if not, add the data block to be stored to a waiting queue and, when a free block becomes available in the flash card, perform the step of writing the data block to be stored into the free block of the flash card asynchronously.
18. The device according to claim 16, wherein the at least one processor is further configured to:
after writing the data block to be stored into the free block of the flash card, save the correspondence between the disk LBA of the data block to be stored and the flash LBA of the data block to be stored.
19. The device according to claim 18, wherein the at least one processor configured to judge whether the flash card caches an old data block with the same disk LBA as the data block to be stored is further configured to:
judge whether a flash LBA corresponding to the disk LBA of the data block to be stored exists;
if so, determine that the flash card caches an old data block with the same disk LBA as the data block to be stored; otherwise, determine that the flash card does not cache such an old data block.
20. The device according to any one of claims 16 to 19, wherein the at least one processor is further configured to:
after writing the data block to be stored into the flash card, mark the data block to be stored as a dirty data block and add it to a dirty-data-block queue.
21. The device according to claim 20, wherein the at least one processor is further configured to:
obtain the current free-space ratio of the flash card;
when the free-space ratio is below a first preset threshold, determine the number of dirty data blocks to be read;
heap-sort the dirty data blocks in the dirty-data-block queue by disk LBA to obtain a first sorted heap;
initiate several first asynchronous requests and, according to the number of dirty data blocks to be read, asynchronously read dirty data blocks from the flash card in order starting from the head of the first sorted heap;
heap-sort the dirty data blocks that have been read by their disk LBAs to obtain a second sorted heap;
initiate second asynchronous requests and asynchronously write the read dirty data blocks to disk in order starting from the head of the second sorted heap, wherein the number of second asynchronous requests equals the number of dirty data blocks to be read.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310400488.2A CN103488582B (en) | 2013-09-05 | 2013-09-05 | Write the method and device of cache memory |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310400488.2A CN103488582B (en) | 2013-09-05 | 2013-09-05 | Write the method and device of cache memory |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103488582A true CN103488582A (en) | 2014-01-01 |
CN103488582B CN103488582B (en) | 2017-07-28 |
Family
ID=49828830
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310400488.2A Active CN103488582B (en) | 2013-09-05 | 2013-09-05 | Write the method and device of cache memory |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103488582B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5636355A (en) * | 1993-06-30 | 1997-06-03 | Digital Equipment Corporation | Disk cache management techniques using non-volatile storage |
CN1527973A (en) * | 2000-06-23 | 2004-09-08 | 英特尔公司 | Non-volatile cache |
CN1862475A (en) * | 2005-07-15 | 2006-11-15 | 华为技术有限公司 | Method for managing magnetic disk array buffer storage |
CN102004706A (en) * | 2009-09-01 | 2011-04-06 | 联芯科技有限公司 | Flash erasing power-fail protection method based on FTL(Flash Translation Layer) |
CN102136274A (en) * | 2009-12-30 | 2011-07-27 | 爱国者电子科技有限公司 | Mobile hard disk with two storage media |
CN102169464A (en) * | 2010-11-30 | 2011-08-31 | 北京握奇数据系统有限公司 | Caching method and device used for non-volatile memory, and intelligent card |
CN102981783A (en) * | 2012-11-29 | 2013-03-20 | 浪潮电子信息产业股份有限公司 | Cache accelerating method based on Nand Flash |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105095116A (en) * | 2014-05-19 | 2015-11-25 | 华为技术有限公司 | Cache replacing method, cache controller and processor |
CN105095116B (en) * | 2014-05-19 | 2017-12-12 | 华为技术有限公司 | Cache method, cache controller and the processor replaced |
CN105988719A (en) * | 2015-02-07 | 2016-10-05 | 深圳市硅格半导体有限公司 | Storage device and data processing method thereof |
CN105988719B (en) * | 2015-02-07 | 2019-03-01 | 深圳市硅格半导体有限公司 | Storage device and its method for handling data |
CN105117351A (en) * | 2015-09-08 | 2015-12-02 | 华为技术有限公司 | Method and apparatus for writing data into cache |
CN105117351B (en) * | 2015-09-08 | 2018-07-03 | 华为技术有限公司 | To the method and device of buffering write data |
US10409502B2 (en) | 2015-09-08 | 2019-09-10 | Huawei Technologies Co., Ltd. | Method and apparatus for writing metadata into cache |
CN109189726A (en) * | 2018-08-08 | 2019-01-11 | 北京奇安信科技有限公司 | A kind of processing method and processing device for reading and writing log |
CN109189726B (en) * | 2018-08-08 | 2020-12-22 | 奇安信科技集团股份有限公司 | A processing method and device for reading and writing logs |
CN109783023A (en) * | 2019-01-04 | 2019-05-21 | 平安科技(深圳)有限公司 | The method and relevant apparatus brushed under a kind of data |
CN109783023B (en) * | 2019-01-04 | 2024-06-07 | 平安科技(深圳)有限公司 | Method and related device for data scrubbing |
Also Published As
Publication number | Publication date |
---|---|
CN103488582B (en) | 2017-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11068170B2 (en) | Multi-tier scheme for logical storage management | |
US10860477B2 (en) | Apparatus and method for low power low latency high capacity storage class memory | |
US9940261B2 (en) | Zoning of logical to physical data address translation tables with parallelized log list replay | |
US8423710B1 (en) | Sequential writes to flash memory | |
TWI416323B (en) | Method,system and semiconductor device for management workload | |
US20140115352A1 (en) | Asynchronous management of access requests to control power consumption | |
CN112632069B (en) | Hash table data storage management method, device, medium and electronic equipment | |
US20100050007A1 (en) | Solid state disk and method of managing power supply thereof and terminal including the same | |
US10754785B2 (en) | Checkpointing for DRAM-less SSD | |
CN109164976B (en) | Optimizing storage device performance using write caching | |
CN103838676B (en) | Data-storage system, date storage method and PCM bridges | |
CN104461964A (en) | Memory device | |
CN101989183A (en) | Method for realizing energy-saving storing of hybrid main storage | |
JP2012234254A (en) | Memory system | |
CN103488582A (en) | Method and device for writing cache memory | |
CN104246719A (en) | Prearranging data to commit to non-volatile memory | |
CN105607862A (en) | Solid state disk capable of combining DRAM (Dynamic Random Access Memory) with MRAM (Magnetic Random Access Memory) and being provided with backup power | |
CN108228483B (en) | Method and apparatus for processing atomic write commands | |
CN102915282A (en) | Block device data cache management method and system for memory system | |
CN102520885B (en) | Data management system for hybrid hard disk | |
CN107766002A (en) | A Virtual Hybrid File System Based on Hybrid Storage Devices | |
CN102637148B (en) | DDR SDRAM (double data rate synchronous dynamic random-access memory) based stacked data caching device and method thereof | |
CN113590505B (en) | Address mapping method, solid state disk controller and solid state disk | |
CN105630699B (en) | A kind of solid state hard disk and read-write cache management method using MRAM | |
CN113626253A (en) | Data recovery method, device, equipment and medium for failed solid-state memory |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C41 | Transfer of patent application or patent right or utility model | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20160726 Address after: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen Applicant after: Huawei Technologies Co., Ltd. Address before: Building 2, B District, Bantian HUAWEI base, Longgang District, Shenzhen, Guangdong Applicant before: Shenzhen Huawei Technologies Co., Ltd. |
|
GR01 | Patent grant | ||
GR01 | Patent grant |