
CN117992366A - Cache processing method and device, computer equipment and storage medium - Google Patents

Info

Publication number: CN117992366A
Application number: CN202410149517.0A
Authority: CN (China)
Prior art keywords: cache, hash, storage units, storage unit, storage
Legal status: Pending (assumed status, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 潘豪 (Pan Hao), 徐泽明 (Xu Zeming), 肖蔓君 (Xiao Manjun)
Current and original assignee: Shenzhen Union Memory Information System Co Ltd
Priority/filing date: 2024-02-02
Publication date: 2024-05-07
Application filed by Shenzhen Union Memory Information System Co Ltd; priority to CN202410149517.0A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0871 Allocation or management of cache space
    • G06F12/10 Address translation
    • G06F12/1027 Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
    • G06F12/1045 Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache
    • G06F12/1081 Address translation for peripheral access to main memory, e.g. direct memory access [DMA]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The embodiment of the invention provides a cache processing method and apparatus, a computer device, and a storage medium. The cache processing method obtains non-paged memory as a cache upon receiving an initialization application initiated by the application layer; this way of obtaining the cache is simple, and since the size of the non-paged memory is determined by the application layer, the requirements of different scenarios can be met. The cache is partitioned into storage units whose indexes are mapped to hash buckets and stored in a hash table, so the method is simple to implement and finds the corresponding data in the hash table efficiently through the corresponding index.

Description

Cache processing method and device, computer equipment and storage medium
Technical Field
The present invention relates to the field of computer technology, and in particular to a cache processing method and apparatus, a computer device, and a storage medium.
Background
A cache is a hardware or software component that temporarily stores data. It is typically used where the data transfer speeds of two pieces of hardware differ greatly, in order to improve data access speed. In the prior art there are two kinds of cache: the first adds a dedicated hardware storage device to serve as the cache; the second uses a linked list or an array as the cache. However, the first kind requires additional hardware storage devices and is very costly, while the second kind is time-consuming and inefficient when looking up the relevant data.
In addition, for cached data, most of the prior art uses the remainder (modulo) operation as the mapping of the hash function. Such a hash function is simple to implement, but it produces many collisions; the storage space is easily exhausted by collisions, in which case looking up the relevant data takes longer. If a complex hash function is used instead, the implementation process is complicated and costly.
Disclosure of Invention
The embodiment of the invention provides a cache processing method and apparatus, a computer device, and a storage medium, which aim to solve the problems of high cost and low efficiency of the cache and its mapping scheme in the prior art.
In a first aspect, an embodiment of the present invention provides a cache processing method, including:
Receiving an initialization application initiated by an application layer, and acquiring non-paged memory as a cache;
Partitioning the cache according to a preset size to obtain a plurality of storage units, calculating the addresses of the storage units through a hash function to obtain indexes of the storage units, creating hash buckets in advance and generating corresponding indexes, mapping the indexes of the storage units to the indexes of the hash buckets, and storing the mapping relationship in a hash table;
obtaining an address of an IO request, and calculating the address through a hash function to obtain an index;
searching a corresponding storage unit in a hash table according to the index;
and obtaining an offset according to the address and the length of the IO request, determining according to the offset whether the data of the IO request hits the stored data in the found storage unit, and if so, updating the corresponding storage unit.
In a second aspect, an embodiment of the present invention provides a cache processing apparatus, including:
The acquisition unit is used for receiving an initialization application initiated by the application layer and acquiring non-paged memory as a cache;
the mapping unit is used for partitioning the cache according to a preset size to obtain a plurality of storage units, calculating the address of each storage unit through a hash function to obtain the corresponding index, mapping the indexes of the storage units to the indexes of pre-created hash buckets, and storing the mapping relationship in a hash table;
The computing unit is used for acquiring the address of the IO request, and computing the address through a hash function to obtain an index;
the searching unit is used for searching the corresponding storage unit in the hash table according to the index;
The hit judgment unit is used for obtaining an offset according to the address and the length of the IO request, determining according to the offset whether the data of the IO request hits the stored data in the found storage unit, and updating the corresponding storage unit in case of a hit.
In a third aspect, an embodiment of the present invention provides a computer device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements a cache processing method as described above when executing the computer program.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium having a computer program stored thereon, which when executed by a processor implements a cache processing method as described above.
The embodiment of the invention provides a cache processing method and apparatus, a computer device, and a storage medium. The cache processing method obtains non-paged memory as a cache upon receiving an initialization application initiated by the application layer; this way of obtaining the cache is simple, and since the size of the non-paged memory is determined by the application layer, the requirements of different scenarios can be met. The cache is partitioned into storage units whose indexes are mapped to hash buckets and stored in a hash table, so the method is simple to implement and finds the corresponding data in the hash table efficiently through the corresponding index.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and a person skilled in the art may derive other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a cache processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a cache application provided in an embodiment of the present invention;
FIG. 3 is a schematic diagram of hash table storage according to an embodiment of the present invention;
FIG. 4 is a schematic sub-flowchart of a cache processing method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of another sub-flowchart of a cache processing method according to an embodiment of the present invention;
FIG. 6 is a schematic block diagram of a cache processing apparatus according to an embodiment of the present invention;
Fig. 7 is a schematic block diagram of a cache processing apparatus according to another embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Referring to fig. 1, an embodiment of the present invention provides a cache processing method, which includes steps S10-S50:
S10, receiving an initialization application initiated by an application layer, and acquiring non-paged memory as a cache;
In this step, the application layer initiates an initialization application; after receiving it, the system obtains non-paged memory whose size is determined by the application layer, so as to meet the requirements of different application scenarios. Note that the application layer may initiate one or more initialization applications, and the system obtains a corresponding number of non-paged memory blocks as caches. As shown in FIG. 2, Volume represents a disk and Cache represents a cache: a user initiates two cache tasks for five disks, so the application layer initiates two initialization applications, one per cache task, and then allocates two blocks of non-paged memory sized according to the number of disks in each cache task. As can be seen from FIG. 2, cache task 1 covers two disks and cache task 2 covers three disks, so the cache space of cache task 1 is smaller than that of cache task 2. The following embodiments are described for a single initialization application initiated by the application layer.
In one embodiment, S10 includes:
Obtaining the non-paged memory through allocation by a memory allocation function;
and initializing the non-paged memory to obtain a cache for copying the hard disk content.
This step mainly describes the process of obtaining the non-paged memory: the system allocates a contiguous block of memory space, i.e., the non-paged memory, through a memory allocation function (e.g., the ExAllocatePool function), then initializes the non-paged memory by clearing the data in it, and finally uses the initialized non-paged memory to copy the hard disk content, thereby obtaining a cache holding the hard disk content. The hard disk content stored in the cache is also called stored data.
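As a concrete illustration of this step, the following C sketch shows how a kernel-mode driver might allocate and zero the non-paged memory. It is a minimal sketch only: the pool tag, the helper name AllocateCacheBuffer, and the error handling are illustrative assumptions rather than the patent's implementation (ExAllocatePoolWithTag is the tagged variant of the ExAllocatePool function named above).

#include <ntddk.h>

#define CACHE_POOL_TAG 'hcaC'        /* illustrative pool tag */

static PVOID AllocateCacheBuffer(SIZE_T cacheSize)
{
    /* Allocate one contiguous block of non-paged memory. */
    PVOID cache = ExAllocatePoolWithTag(NonPagedPool, cacheSize, CACHE_POOL_TAG);
    if (cache == NULL) {
        return NULL;                 /* not enough free non-paged pool */
    }
    RtlZeroMemory(cache, cacheSize); /* initialize: clear any stale data */
    return cache;                    /* ready to be filled with disk content */
}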
S20, partitioning the cache according to a preset size to obtain a plurality of storage units, calculating the addresses of the storage units through a hash function to obtain indexes of the storage units, creating hash buckets in advance and generating corresponding indexes, mapping the indexes of the storage units to the indexes of the hash buckets, and storing the mapping relationship in a hash table;
This step mainly describes the mapping scheme of the cache. Specifically, the cache is first partitioned according to a predetermined size (for example, the smallest data unit stored in the cache) into a plurality of equal storage units, each of which has a starting address (i.e., an address). The starting addresses of all storage units are then calculated with a hash function to obtain the indexes of the storage units; the indexes of the storage units are mapped to the indexes of the hash buckets, the storage units are associated with the hash buckets according to the mapping relationship, and the hash table is stored. Storage units whose addresses hash to the same index value are associated with the same hash bucket according to the mapping relationship. This mapping scheme improves cache access efficiency and data lookup speed: fast index mapping and querying reduce unnecessary traversal operations, thereby improving the response speed and overall performance of the system.
As shown in FIG. 3, FIG. 3 shows a hash table in which a lettered block (e.g., A) represents a storage unit, and different letters represent storage units storing different data. The hash table contains a plurality of hash buckets (Hdl0 to Hdln), each with its own bucket index; the index of a storage unit is mapped to the index of a hash bucket, and storage units with the same index value are associated with (and managed by) the corresponding bucket. As can be seen from FIG. 3, the buckets hold different numbers of storage units: after the address of each storage unit is run through the hash function, the resulting indexes may be the same or different, so units with the same index are associated with the same bucket and units with different indexes land in different buckets. In addition, a hash bucket may also have no storage units associated with it at all, as is the case for several buckets in FIG. 3.
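The patent fixes neither a particular hash function nor a data layout; the following C sketch shows one plausible arrangement, assuming block-aligned unit addresses and a division-plus-modulo hash, with all structure and constant names invented for illustration.

/* Illustrative data layout (a sketch: the hash function, the constants, and
 * all structure names below are assumptions, not the patent's design). */
#include <stdint.h>

#define BLOCK_SIZE   4096u          /* predetermined size of one storage unit */
#define BUCKET_COUNT 1024u          /* number of pre-created hash buckets */

typedef struct StorageUnit {
    uint64_t            startAddr;  /* starting address of the unit */
    int                 dirty;      /* nonzero if modified since the last flush */
    struct StorageUnit *prev, *next;/* position in the bucket's unit ordering */
    uint8_t            *data;       /* BLOCK_SIZE bytes of cached disk content */
} StorageUnit;

typedef struct HashBucket {
    StorageUnit       *head;        /* most recently used unit first */
    struct HashBucket *prev, *next; /* position in the hash table ordering */
} HashBucket;

typedef struct HashTable {
    HashBucket  buckets[BUCKET_COUNT];
    HashBucket *order;              /* most recently used bucket first */
} HashTable;

/* Index of a storage unit, or of an IO address: block number modulo buckets.
 * Units whose addresses yield the same value land in the same bucket. */
static inline uint32_t HashIndex(uint64_t addr)
{
    return (uint32_t)((addr / BLOCK_SIZE) % BUCKET_COUNT);
}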
S30, acquiring an address of the IO request, and calculating the address through a hash function to obtain an index;
In this step, with reference to FIG. 3, after the system receives an IO request, the address (key) of the IO request is obtained; for example, if the IO request is a read/write request, the read/write address is obtained. The hash function (HashFunc) is then used to calculate the address of the IO request to obtain an index, namely the request index (V value), which is used in step S40.
S40, searching a corresponding storage unit in the hash table according to the index;
In this step, the lookup matches the request index against the hash bucket indexes; if the matching succeeds, the corresponding storage unit is looked up within that hash bucket, and S50 is then executed on the found storage unit.
S50, obtaining an offset according to the address and the length of the IO request, determining according to the offset whether the data of the IO request hits the stored data in the found storage unit, and if so, updating the corresponding storage unit.
This step mainly determines whether the found storage unit contains the data required by the IO request. Because one storage unit stores a large amount of data, the IO request does not necessarily need all of the data in the found storage unit. From the address and length of the IO request, the system knows the position and length of the data to read, and thereby obtains an offset. The data in the found storage unit is compared one by one according to the offset, and whether the data of the IO request exists in the found storage unit is determined from the comparison result. If it exists, this is a hit; the storage unit is then updated and used (accessed).
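A minimal sketch of this hit test, reusing the illustrative structures above; the patent does not spell out the exact comparison, so the range-containment check here is an assumption.

/* Hit-test sketch: does the requested range fall inside the cached block? */
#include <stddef.h>

static int RequestHits(const StorageUnit *unit, uint64_t reqAddr, size_t reqLen)
{
    if (unit == NULL || reqAddr < unit->startAddr) {
        return 0;                                /* request starts before unit */
    }
    /* Offset of the requested data inside this storage unit. */
    uint64_t offset = reqAddr - unit->startAddr;

    /* Hit only if the entire requested range lies within the cached block. */
    return offset + reqLen <= BLOCK_SIZE;
}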
In one embodiment, S50 includes:
Adjusting the corresponding storage unit to the front of the storage unit ordering;
and adjusting the hash bucket, together with the corresponding mapping relationship, to the front of the hash table.
This step describes, for the case where at least one hash bucket is mapped to a plurality of storage units and the storage units associated with that bucket have a corresponding ordering, the process of updating the storage units: after a hit is determined in S50, the position of the corresponding storage unit and the corresponding mapping relationship are adjusted to the front of the whole hash table. For ease of description, suppose the hit is storage unit J in hash bucket Hdl5 of FIG. 3: storage unit J in Hdl5 is adjusted in front of storage unit I, so that the order of the storage units in Hdl5 from left to right becomes J-I-K-L; at the same time, Hdl5 is adjusted in front of Hdl0, so that the order of the hash buckets from top to bottom becomes Hdl5-Hdl0-Hdl1-Hdl2-Hdl3-Hdl4-Hdl6-Hdl7-Hdln.
If storage unit K in hash bucket Hdl5 is hit next, the order of the hash buckets remains unchanged; storage unit K in Hdl5 is adjusted in front of the previously hit unit (i.e., storage unit J), and the order of the storage units in Hdl5 from left to right becomes K-J-I-L.
If storage unit H in Hdl3 is hit after that, the order of the storage units in Hdl3 from left to right becomes H-F-G, and Hdl3 is adjusted in front of the most recently used bucket (Hdl5), so that the order of the hash buckets becomes Hdl3-Hdl5-Hdl0-Hdl1-Hdl2-Hdl4-Hdl6-Hdl7-Hdln, and so on.
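The two-level move-to-front just described might look as follows in C, again using the illustrative doubly linked structures from the earlier sketch.

/* Two-level move-to-front on a hit: the unit moves to the head of its
 * bucket, and the bucket moves to the head of the table ordering. */
static void MoveUnitToFront(HashBucket *bucket, StorageUnit *unit)
{
    if (bucket->head == unit) {
        return;                                  /* already most recently used */
    }
    /* Unlink the unit from its current position in the bucket. */
    if (unit->prev != NULL) unit->prev->next = unit->next;
    if (unit->next != NULL) unit->next->prev = unit->prev;
    /* Relink it at the head of the bucket's ordering. */
    unit->prev = NULL;
    unit->next = bucket->head;
    if (bucket->head != NULL) bucket->head->prev = unit;
    bucket->head = unit;
}

static void MoveBucketToFront(HashTable *table, HashBucket *bucket)
{
    if (table->order == bucket) {
        return;                                  /* already the freshest bucket */
    }
    if (bucket->prev != NULL) bucket->prev->next = bucket->next;
    if (bucket->next != NULL) bucket->next->prev = bucket->prev;
    bucket->prev = NULL;
    bucket->next = table->order;
    if (table->order != NULL) table->order->prev = bucket;
    table->order = bucket;
}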
The cache processing method further includes step S60:
cleaning the storage units at regular time intervals;
wherein the cleaning step comprises: determining whether the storage space of the hash table is full, and if so, triggering a purge operation to release a first predetermined number of storage units.
The cache processing method includes two cleaning approaches; this step adopts the first. Because the storage space of the hash table is limited, i.e., the number of storage units mounted in each hash bucket is limited, when a new storage unit needs to be mounted, the system first checks whether the hash bucket corresponding to the index of the new storage unit is full. If it is, a first predetermined number of storage units are released, and the new storage unit is then mounted in that bucket. The first predetermined number is, for example, 10% of the number of storage units in the hash bucket.
In a specific embodiment, as shown in FIG. 4, S60 includes S61-S63:
S61, recording the ordering of the hash table after the positions of the corresponding storage unit and hash bucket are adjusted;
S62, obtaining, at regular time intervals, the positions of the storage units associated with each hash bucket in the current hash table ordering;
S63, releasing the tail storage units associated with the corresponding hash bucket.
These steps describe the rule for releasing storage units. An adjustment of the positions of a storage unit and its hash bucket indicates that the corresponding storage unit under that bucket has just been used; recording the current ordering of the hash table therefore means that storage units at the front of the hash table are the most recently used, while storage units at the tail have gone unused the longest. When an IO request accesses the cache, if the requested data is in the cache, the hit storage unit is moved to the front of the hash table to mark it as most recently accessed; if the requested data is not in the cache, whether the storage units at the tail of the hash table need to be evicted is decided according to the remaining cache space (i.e., the storage space of the hash table), so that a new storage unit can be inserted at the front. In this way the front of the hash table always holds the most recently accessed units and the tail holds the units unaccessed for the longest time, and when cache space runs short, the tail units of a hash bucket can be evicted to make room for new data items. This release rule is also known as the LRU policy: release the least recently used (accessed) storage units. Cleaning with the LRU policy releases system resources effectively, prevents the accumulation of dirty data, and ensures data consistency; since the system releases storage units according to how busy it is, it can adapt to different workloads, improving the stability and flexibility of the system.
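A sketch of the tail eviction this policy implies, under the same illustrative structures; the 10% quota follows the first predetermined number described above, and ReleaseStorageUnit is a hypothetical helper (a driver would free pool memory with ExFreePoolWithTag rather than free).

#include <stdlib.h>

/* Hypothetical helper: a kernel driver would use ExFreePoolWithTag here. */
static void ReleaseStorageUnit(StorageUnit *u)
{
    free(u->data);
    free(u);
}

/* Evict the least recently used units at the tail of a full bucket. */
static void EvictTailUnits(HashBucket *bucket, unsigned unitCount)
{
    unsigned toRelease = unitCount / 10u;        /* "first predetermined number" */
    if (toRelease == 0u) toRelease = 1u;         /* always free at least one */

    /* Walk to the tail: the units there have gone unused the longest. */
    StorageUnit *tail = bucket->head;
    while (tail != NULL && tail->next != NULL) tail = tail->next;

    while (tail != NULL && toRelease-- > 0u) {
        StorageUnit *victim = tail;
        tail = tail->prev;
        if (tail != NULL) tail->next = NULL;     /* detach the old tail */
        else             bucket->head = NULL;    /* bucket is now empty */
        ReleaseStorageUnit(victim);
    }
}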
In one embodiment, as shown in FIG. 5, S60 further comprises S64-S65:
S64, detecting, at regular time intervals, whether dirty data exists in each storage unit, and if so, flushing the dirty data back to the disk;
In this step, when data is modified in the cache, the corresponding storage unit is marked as "dirty", i.e., as dirty data, which means that the data in the storage unit is inconsistent with the data on the disk. To keep the data in the cache and the data on the disk synchronized, dirty data must be detected at regular intervals; when dirty data is detected, it is flushed back to the disk, i.e., the data in the corresponding storage unit is written back to the disk, ensuring data consistency and durability.
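The periodic dirty scan could be sketched as below; WriteUnitToDisk is a hypothetical stand-in for the real write-back path, and the returned count anticipates the busy-state indicator used in S65.

/* Hypothetical stand-in for the real write-back path. */
static void WriteUnitToDisk(StorageUnit *u)
{
    (void)u;   /* e.g., build and send a write IRP in a real driver */
}

/* Walk every bucket and flush units marked dirty. */
static unsigned FlushDirtyUnits(HashTable *table)
{
    unsigned flushed = 0;
    for (uint32_t i = 0; i < BUCKET_COUNT; i++) {
        for (StorageUnit *u = table->buckets[i].head; u != NULL; u = u->next) {
            if (u->dirty) {
                WriteUnitToDisk(u);   /* write the cached block back to disk */
                u->dirty = 0;         /* cache and disk are consistent again */
                flushed++;
            }
        }
    }
    return flushed;                   /* this count feeds the busy-state check */
}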
S65, judging the busy state of the system according to indicator changes, and if the system is not busy, triggering a purge operation to release a second predetermined number of storage units, where the second predetermined number is larger than the first predetermined number.
This step describes the second cleaning approach. The usage of the storage units in the cache can be judged from changes in an indicator (metric), and the busy state of the system can be judged from that usage. If the system is busy, it is performing data interaction at that moment and the purge operation must not be triggered, since doing so would interfere with the data interaction; it is therefore preferable to release a second predetermined number of storage units when the system is not busy. The second predetermined number is greater than the first because it applies to all the hash buckets in the hash table: long-unused storage units are released from all buckets.
In one embodiment, step S65 includes:
If the system is in the delayed write mode, the busy state of the system is judged by detecting the number of times dirty data is flushed back to the disk;
If the system is in the write-through mode, the busy state of the system is judged by detecting the number of free storage units.
In this step, because the system has a delayed write (write-back) mode and a write-through mode, the relevant indicator differs by mode: in the delayed write mode the indicator is the number of times dirty data has been flushed back to the disk, while in the write-through mode it is the number of free storage units, a free storage unit being one that is not currently in use. The busy state of the system can thus be learned by detecting the indicator change in the current mode, and step S65 is then executed according to the busy-state result.
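Putting the two indicators together, a busy-state check might be sketched as follows; the thresholds and names are illustrative assumptions, since the patent only specifies which indicator each mode watches.

/* Busy-state sketch: pick the indicator by write mode. */
typedef enum { WRITE_BACK, WRITE_THROUGH } WriteMode;

static int SystemIsBusy(WriteMode mode, unsigned recentFlushes, unsigned freeUnits)
{
    if (mode == WRITE_BACK) {
        /* Many recent dirty-data flushes imply heavy write traffic. */
        return recentFlushes > 100u;             /* illustrative threshold */
    }
    /* Write-through: few free storage units imply heavy cache use. */
    return freeUnits < 32u;                      /* illustrative threshold */
}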
Through flexible cache application, management of the storage units with a hash table, and application of the LRU policy, the embodiments of the invention remove the limitation of traditional cache management techniques that additional hardware storage devices must be added to serve as the cache, improve data access efficiency and system performance, and are applicable to a variety of computer systems and hardware devices.
The embodiment of the invention also provides a cache processing device which is used for executing any embodiment of the cache processing method. Specifically, referring to fig. 6, fig. 6 is a schematic block diagram of a cache processing apparatus according to an embodiment of the present invention. The cache processing apparatus 700 includes:
An obtaining unit 710, configured to receive an initialization application initiated by an application layer, and obtain a non-paged memory as a cache;
in one embodiment, the acquiring unit 710 includes:
an allocation unit, configured to obtain the non-paged memory through allocation by a memory allocation function;
and an initialization unit, configured to initialize the non-paged memory to obtain a cache for copying the hard disk content.
The mapping unit 720 is configured to partition the cache according to a predetermined size to obtain a plurality of storage units, calculate the addresses of the storage units through a hash function to obtain their indexes, map the indexes of the storage units to the indexes of pre-created hash buckets, and store the mapping relationship in a hash table;
A calculating unit 730, configured to obtain an address of the IO request, and calculate the address through a hash function to obtain an index;
A searching unit 740, configured to search a corresponding storage unit in the hash table according to the index;
The hit determination unit 750 is configured to obtain an offset according to the address and the length of the IO request, determine according to the offset whether the data of the IO request hits the stored data in the found storage unit, and update the corresponding storage unit in case of a hit.
In one embodiment, hit determination unit 750 includes:
an adjusting unit, configured to adjust the corresponding storage unit to the front of the storage unit ordering,
and to adjust the hash bucket, together with the corresponding mapping relationship, to the front of the hash table.
As shown in fig. 7, the cache processing apparatus further includes:
a cleaning unit 760, configured to clean the storage units at regular time intervals;
wherein the cleaning step comprises: determining whether the storage space of the hash table is full, and if so, triggering a purge operation to release a first predetermined number of storage units.
In one embodiment, the cleaning unit 760 includes:
a recording unit, configured to record the ordering of the hash table after the positions of the corresponding storage unit and hash bucket are adjusted;
a position obtaining unit, configured to obtain, at regular time intervals, the positions of the storage units associated with each hash bucket in the current hash table ordering;
and a first releasing unit, configured to release the tail storage units associated with the corresponding hash bucket.
In one embodiment, the cleaning unit 760 further comprises:
a detection unit, configured to detect, at regular time intervals, whether dirty data exists in each storage unit, and if so, flush the dirty data back to the disk;
and a second releasing unit, configured to judge the busy state of the system according to indicator changes, and if the system is not busy, trigger a purge operation to release a second predetermined number of storage units, the second predetermined number being larger than the first predetermined number.
In an embodiment, the second release unit comprises:
a first state judging unit, configured to judge the busy state of the system by detecting the number of times dirty data is flushed back to the disk if the system is in the delayed write mode;
and a second state judging unit, configured to judge the busy state of the system by detecting the number of free storage units if the system is in the write-through mode.
The embodiment of the invention provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the cache processing method of the foregoing embodiments when executing the computer program.
Embodiments of the present invention provide a computer-readable storage medium having a computer program stored thereon, which when executed by a processor, implements the cache processing method described in the previous embodiments.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the invention. Therefore, the protection scope of the invention is defined by the appended claims.

Claims (10)

1. A cache processing method, characterized by comprising:
receiving an initialization application initiated by an application layer, and acquiring non-paged memory as a cache;
partitioning the cache according to a preset size to obtain a plurality of storage units, calculating the addresses of the storage units through a hash function to obtain indexes of the storage units, creating hash buckets in advance and generating corresponding indexes, mapping the indexes of the storage units to the indexes of the hash buckets, and storing the mapping relationship in a hash table;
obtaining an address of an IO request, and calculating the address through the hash function to obtain an index;
searching for the corresponding storage unit in the hash table according to the index;
and obtaining an offset according to the address and the length of the IO request, determining according to the offset whether the data of the IO request hits the stored data in the found storage unit, and if so, updating the corresponding storage unit.
2. The cache processing method of claim 1, wherein at least one of the hash buckets is mapped to a plurality of storage units, the plurality of storage units associated with the hash bucket having a corresponding storage unit ordering, and wherein the updating the corresponding storage unit comprises:
adjusting the corresponding storage unit to the front of the storage unit ordering;
and adjusting the hash bucket, together with the corresponding mapping relationship, to the front of the hash table.
3. The cache processing method according to claim 2, further comprising:
cleaning the storage units at regular time intervals;
wherein the cleaning step comprises: determining whether the storage space of the hash table is full, and if so, triggering a purge operation to release a first predetermined number of storage units.
4. The cache processing method according to claim 3, wherein the determining whether the storage space of the hash table is full and, if so, triggering a purge operation to release a first predetermined number of storage units comprises:
recording the ordering of the hash table after the positions of the corresponding storage unit and hash bucket are adjusted;
obtaining, at regular time intervals, the positions of the storage units associated with each hash bucket in the current hash table ordering;
and releasing the tail storage units associated with the corresponding hash bucket.
5. The cache processing method of claim 3, wherein the cleaning step further comprises:
detecting, at regular time intervals, whether dirty data exists in each storage unit, and if so, flushing the dirty data back to the disk;
and judging the busy state of the system according to indicator changes, and if the system is not busy, triggering a purge operation to release a second predetermined number of storage units, wherein the second predetermined number is larger than the first predetermined number.
6. The cache processing method according to claim 5, wherein the judging the busy state of the system according to indicator changes comprises:
if the system is in the delayed write mode, judging the busy state of the system by detecting the number of times dirty data is flushed back to the disk;
and if the system is in the write-through mode, judging the busy state of the system by detecting the number of free storage units.
7. The cache processing method according to claim 1, wherein the receiving an initialization application initiated by an application layer and acquiring non-paged memory as a cache comprises:
obtaining the non-paged memory through allocation by a memory allocation function;
and initializing the non-paged memory to obtain a cache for copying the hard disk content.
8. A cache processing apparatus for implementing the cache processing method according to any one of claims 1 to 7, comprising:
an acquisition unit, configured to receive an initialization application initiated by the application layer, and acquire non-paged memory as a cache;
a mapping unit, configured to partition the cache according to a preset size to obtain a plurality of storage units, calculate the addresses of the storage units through a hash function to obtain indexes of the storage units, create hash buckets in advance and generate corresponding indexes, map the indexes of the storage units to the indexes of the hash buckets, and store the mapping relationship in a hash table;
a computing unit, configured to acquire the address of the IO request, and calculate the address through the hash function to obtain an index;
a searching unit, configured to search for the corresponding storage unit in the hash table according to the index;
and a hit determination unit, configured to obtain an offset according to the address and the length of the IO request, determine according to the offset whether the data of the IO request hits the stored data in the found storage unit, and update the corresponding storage unit in case of a hit.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the cache processing method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, which when executed by a processor implements the cache processing method according to any one of claims 1 to 7.
Priority Applications (1)

Application number: CN202410149517.0A; priority date: 2024-02-02; filing date: 2024-02-02; title: Cache processing method and device, computer equipment and storage medium

Publications (1)

Publication number: CN117992366A; publication date: 2024-05-07; status: Pending

Family

ID=90894895

Country Status (1)

Country: CN; publication: CN117992366A (en)

Cited By (1)

* Cited by examiner, † Cited by third party

Publication number: CN119149452A; priority date: 2024-11-19; publication date: 2024-12-17; assignee: 杭州计算机外部设备研究所(中国电子科技集团公司第五十二研究所) (Hangzhou Computer Peripheral Equipment Research Institute, the 52nd Research Institute of China Electronics Technology Group Corporation); title: Cache cleaning method for random small IO

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination