
CN118897655B - Data request processing method, device, computer equipment and storage medium - Google Patents

Data request processing method, device, computer equipment and storage medium Download PDF

Info

Publication number
CN118897655B
CN118897655B (application CN202411367369.6A)
Authority
CN
China
Prior art keywords
data
data block
cache
request
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411367369.6A
Other languages
Chinese (zh)
Other versions
CN118897655A (en)
Inventor
王永刚
李大生
仇锋利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Metabrain Intelligent Technology Co Ltd
Original Assignee
Suzhou Metabrain Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Metabrain Intelligent Technology Co Ltd filed Critical Suzhou Metabrain Intelligent Technology Co Ltd
Priority to CN202411367369.6A priority Critical patent/CN118897655B/en
Publication of CN118897655A publication Critical patent/CN118897655A/en
Application granted granted Critical
Publication of CN118897655B publication Critical patent/CN118897655B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/0652Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0877Cache access modes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/064Management of blocks
    • G06F3/0641De-duplication techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention relates to the technical field of computers and discloses a data request processing method, apparatus, computer device and storage medium. The method comprises: obtaining the cache occupancy level; when the cache occupancy level is higher than a preset threshold, determining a data block to be cleaned and releasing the cache space it occupies; and, when a data request is received from the host, determining a target cache space from the free cache space in the cache, saving the data corresponding to the data request into the target cache space, and responding to the data request. This solves the problem that, if the cache is full while a data request is being processed, replacing existing data in the cache increases the latency of processing the request.

Description

Data request processing method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a data request processing method, a data request processing device, a computer device, and a storage medium.
Background
When a storage system responds to data requests, its performance can be improved by caching the IO (Input/Output) data corresponding to each request in DRAM (Dynamic Random Access Memory). For a write request, the host can be answered as soon as the data is written into the cache, and the cached data is later written to a nonvolatile medium, such as an HDD (Hard Disk Drive) or SSD (Solid State Disk), that is slow relative to DRAM; this reduces the response time of write requests. For a read request, if the requested data is already in the cache, it does not need to be read from a storage medium such as an HDD or SSD and can be returned to the host directly from the cache, reducing the response time of read requests.
However, while a data request is being processed, if the cache is full, existing data in the cache must first be replaced via a cache replacement algorithm before the new data can be stored, and the time these steps consume is included in the processing time of the IO flow. Because cache capacity is limited, a full cache is the normal state of a storage system in use, so every data request that touches cached data must go through the cache replacement process. This increases the latency of each request, and the added latency accumulates over the large number of requests a storage system serves.
The related art therefore has the problem that, if the cache is full while a data request is being processed, replacing existing data in the cache increases the latency of processing the request.
Disclosure of Invention
In view of the above, the present invention provides a data request processing method, apparatus, computer device and storage medium, so as to solve the problem that, when the cache is full, replacing existing data in the cache increases the latency of processing data requests.
In a first aspect, the present invention provides a data request processing method, including:
Obtaining the cache occupancy level;
when the cache occupancy level is higher than a preset threshold, determining a data block to be cleaned and releasing the cache space occupied by the data block to be cleaned;
and, when a data request is received from the host, determining a target cache space from the free cache space in the cache, saving the data corresponding to the data request into the target cache space, and responding to the data request.
According to the data request processing method provided by this embodiment, when the cache occupancy level is higher than the preset threshold, a data block to be cleaned is determined and the cache space it occupies is released. Cache replacement is thus performed outside the flow of processing data requests, so its processing time is not included in the time taken to handle a request. By releasing the space occupied by cleaned data blocks in time, free cache space is directly available while each data request is processed, which reduces the response time of request processing, lowers request latency, and improves the read/write performance of the storage system. This solves the problem that, if the cache is full while a data request is being processed, replacing existing data in the cache increases processing latency.
In some alternative embodiments, before obtaining the cache occupancy level, the method further includes:
acquiring the number of threads used to process cache replacement tasks;
and generating a first preset number of data block processing queues according to the number of threads, where the data block processing queues are used to determine and release the data blocks to be cleaned.
In this embodiment, a first preset number of data block processing queues is generated according to the number of cache-replacement threads. The queues are used to manage the data blocks and to determine and release the blocks to be cleaned, so that free cache space in the cache can be used directly while each data request is processed, reducing the response time of request processing.
In some alternative embodiments, after generating the first preset number of data block processing queues, the method further includes:
determining the identification of the logical volume to which each data block in the cache belongs;
and determining the data block processing queue corresponding to the data block according to the identification of the logical volume and the first preset number, and placing the data block into the corresponding queue.
In this embodiment, the data block processing queue corresponding to a data block is determined and the block is placed into it; managing data blocks through the queues makes it convenient to clean them and release the cache space they occupy.
In some alternative embodiments, after placing the data block into the corresponding data block processing queue, the method further includes:
determining a first replacement priority of the data block;
and generating sequence numbers for the data blocks in the first preset number of data block processing queues according to the first replacement priority.
In this embodiment, sequence numbers are generated for the data blocks in the queues according to the first replacement priority, which makes the blocks convenient to manage; determining and releasing the blocks to be cleaned according to their sequence numbers is accurate and efficient.
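As a concrete illustration of this step, the short Python sketch below numbers the blocks of one queue by sorting on a replacement priority. The `priority` field and its meaning (lower value is evicted sooner) are assumptions for illustration; the patent does not fix a specific priority metric.

```python
# Hypothetical sketch: assign sequence numbers within one data block
# processing queue so that the block most eligible for cleaning
# receives the smallest sequence number.
def assign_sequence_numbers(queue):
    # 'priority' is an assumed field; lower value = cleaned earlier
    ordered = sorted(queue, key=lambda block: block["priority"])
    for seq, block in enumerate(ordered):
        block["seq"] = seq
    return ordered

blocks = [{"id": "0-1", "priority": 5},
          {"id": "0-2", "priority": 1},
          {"id": "0-3", "priority": 3}]
ordered = assign_sequence_numbers(blocks)
```

With the blocks numbered this way, "clean all blocks with sequence number below a cutoff" directly selects the most replaceable blocks first.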
In some optional embodiments, determining the data block to be cleaned when the cache occupancy level is higher than a preset threshold includes:
when the cache occupancy level is higher than a first preset threshold and lower than a second preset threshold, taking the data blocks whose sequence numbers are smaller than a first preset value in the first preset number of data block processing queues as the data blocks to be cleaned;
when the cache occupancy level is higher than the second preset threshold and lower than a third preset threshold, taking the data blocks whose sequence numbers are smaller than a second preset value in the queues as the data blocks to be cleaned;
when the cache occupancy level is higher than the third preset threshold and lower than a fourth preset threshold, taking the data blocks whose sequence numbers are smaller than a third preset value in the queues as the data blocks to be cleaned;
and when the cache occupancy level is higher than the fourth preset threshold, taking the data blocks whose sequence numbers are smaller than a fourth preset value in the queues as the data blocks to be cleaned.
In this embodiment, the cache occupancy level is compared against the first through fourth preset thresholds, and the number of data blocks to be cleaned in each data block processing queue is determined from the comparison result. This adjusts the cache replacement speed and, by using the system's computing resources reasonably, ensures that free cache space in the cache can be used directly while each data request is processed.
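The tiered watermark check can be sketched as below. All threshold and cutoff values are placeholders; the patent leaves the concrete "preset" numbers to configuration.

```python
# Illustrative sketch of the tiered watermark check: the higher the
# cache occupancy level, the larger the sequence-number cutoff, so
# more blocks per queue are cleaned and replacement speeds up.
THRESHOLDS = [
    (0.95, 40),  # above fourth threshold: clean blocks with seq < 40
    (0.90, 20),  # third..fourth:          clean blocks with seq < 20
    (0.85, 10),  # second..third:          clean blocks with seq < 10
    (0.80, 5),   # first..second:          clean blocks with seq < 5
]

def sequence_cutoff(occupancy):
    """Return the sequence-number cutoff below which blocks in each
    queue become blocks to be cleaned (0 means nothing to clean)."""
    for threshold, cutoff in THRESHOLDS:  # checked from high to low
        if occupancy > threshold:
            return cutoff
    return 0
```

Checking the thresholds from highest to lowest is equivalent to the "higher than X and lower than Y" bands in the claim, since the first matching band wins.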
In some optional embodiments, after determining the data block to be cleaned and releasing the cache space it occupies, the method further includes:
determining a second replacement priority of the remaining data blocks in the first preset number of data block processing queues;
and updating the sequence numbers of the remaining data blocks according to the second replacement priority.
In some alternative embodiments, when a data request is received from the host, determining a target cache space from the free cache space in the cache, saving the data corresponding to the data request to the target cache space, and responding to the data request includes:
if the data request is a read request and no data block corresponding to the read request exists in the cache, determining a target cache space from the free cache space in the cache;
acquiring the data corresponding to the read request from a preset storage medium, writing it into the target cache space, and generating a read request data block corresponding to the read request;
and returning the data in the read request data block to the host.
In this embodiment, the cache replacement process is separated from the flow of processing data requests, so free cache space in the cache can be used directly while each read request is processed, reducing the processing latency of read requests.
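The read path above can be sketched in a few lines of Python. The dict-based cache, list of free slots, and backing store are stand-ins invented for illustration, and the sketch assumes the background cleaner has kept `free_space` non-empty, as this embodiment intends.

```python
def handle_read(key, cache, free_space, backing_store):
    if key in cache:            # cache hit: return the data directly
        return cache[key]
    data = backing_store[key]   # miss: read the preset storage medium
    free_space.pop()            # claim a free slot; no eviction on this path
    cache[key] = data           # generate the read request data block
    return data                 # return the data to the host

cache, free_slots = {}, ["slot0", "slot1"]
backing = {"lba42": b"payload"}
result = handle_read("lba42", cache, free_slots, backing)
```

Note the contrast with the conventional flow of fig. 1: there is no "cache full?" branch here, because cleaning happens asynchronously.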
In some optional embodiments, releasing the cache space occupied by the data block to be cleaned includes:
deleting the data block to be cleaned from the data block processing queue when the data block to be cleaned is a read request data block;
and rearranging the positions of the remaining data blocks in the data block processing queue using a preset algorithm.
In some alternative embodiments, when a data request is received from the host, determining a target cache space from the free cache space in the cache, saving the data corresponding to the data request to the target cache space, and responding to the data request includes:
determining a target cache space from the free cache space in the cache when the data request is a write request;
writing the data corresponding to the write request into the target cache space, generating a write request data block corresponding to the write request, and responding to the host.
In this embodiment, the cache replacement process is separated from the flow of processing data requests, so free cache space in the cache can be used directly while each write request is processed, reducing the processing latency of write requests.
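A minimal sketch of this write path, under the same illustrative assumptions as before (dict cache, list of free slots kept non-empty by the background cleaner; the `dirty` set marking blocks not yet flushed is an invented detail):

```python
def handle_write(key, data, cache, free_space, dirty):
    free_space.pop()    # target space is taken from free space only
    cache[key] = data   # write the data, generating the write request block
    dirty.add(key)      # flushing to the storage medium happens later
    return "ok"         # respond to the host immediately

cache, free_slots, dirty = {}, ["slot0"], set()
ack = handle_write("lba7", b"new-data", cache, free_slots, dirty)
```

The host is acknowledged as soon as the data lands in the cache; writing it through to the slow nonvolatile medium is deferred to the cleaning path.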
In some optional embodiments, releasing the cache space occupied by the data block to be cleaned includes:
judging whether the data of the data block to be cleaned has been written to a preset storage medium when the data block to be cleaned is a write request data block;
if it has been written, deleting the data block to be cleaned from the data block processing queue;
and if it has not been written, writing the data of the data block to be cleaned to the preset storage medium and then deleting the data block to be cleaned from the data block processing queue.
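The write-back-then-release rule above might look like the following sketch (the `dirty` set and the dict-based structures are illustrative stand-ins, consistent with the earlier assumptions, not the patent's actual data structures):

```python
def release_block(block_id, queue, cache, dirty, backing_store):
    if block_id in dirty:                          # data not yet written back
        backing_store[block_id] = cache[block_id]  # flush to the medium first
        dirty.discard(block_id)
    queue.remove(block_id)                         # delete from the queue
    del cache[block_id]                            # release the cache space

queue, cache = ["blk1"], {"blk1": b"dirty-data"}
dirty, backing = {"blk1"}, {}
release_block("blk1", queue, cache, dirty, backing)
```

Flushing before deletion is what makes it safe to discard a write request data block without losing data.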
In some optional embodiments, obtaining the cache occupancy level includes:
starting a preset timer, and invoking a timer handler function each time the preset timer reaches a preset duration;
and obtaining the cache occupancy level in the timer handler function.
In this embodiment, the cache occupancy level is obtained each time the preset timer reaches the preset duration. This ensures the occupancy level is obtained in time to start the cache replacement process, while avoiding the excessive computing resources that continuously sampling the occupancy level in real time would consume.
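A possible shape for this periodic sampling, using a self-rearming timer; the function names, the immediate first sample, and the callback interface are all illustrative choices, not specified by the patent:

```python
import threading

def start_watermark_timer(interval_s, get_occupancy, on_high, threshold):
    """Sample the cache occupancy level periodically and trigger cache
    replacement when it exceeds the preset threshold."""
    def handler():
        if get_occupancy() > threshold:
            on_high()  # kick off determination/release of blocks to clean
        timer = threading.Timer(interval_s, handler)  # re-arm for next period
        timer.daemon = True
        timer.start()
    handler()  # in this sketch the first sample is taken immediately

events = []
start_watermark_timer(60.0, lambda: 0.9, lambda: events.append("clean"), 0.8)
```

The interval trades freshness of the watermark reading against the CPU cost of sampling, which is exactly the balance the embodiment describes.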
In some optional embodiments, obtaining the cache occupancy level includes:
acquiring the used cache space and the total cache space of the cache;
and taking the ratio of the used cache space to the total cache space as the cache occupancy level.
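As a one-line illustration of this ratio (the byte figures are arbitrary examples):

```python
def cache_occupancy(used_bytes, total_bytes):
    """Cache occupancy level = used cache space / total cache space."""
    return used_bytes / total_bytes

# e.g. 6 GiB used out of an 8 GiB DRAM cache
level = cache_occupancy(used_bytes=6 * 2**30, total_bytes=8 * 2**30)
```

This level is the value compared against the preset thresholds in the embodiments above.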
In a second aspect, the present invention provides a data request processing apparatus, comprising:
The acquisition module is used to obtain the cache occupancy level;
the release module is used to determine the data block to be cleaned and release the cache space it occupies when the cache occupancy level is higher than a preset threshold;
and the processing module is used to determine a target cache space from the free cache space in the cache when a data request is received from the host, save the data corresponding to the data request into the target cache space, and respond to the data request.
In a third aspect, the present invention provides a computer device, including a memory and a processor, where the memory and the processor are communicatively connected to each other, and the memory stores computer instructions, and the processor executes the computer instructions, thereby executing the data request processing method of the first aspect or any implementation manner corresponding to the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon computer instructions for causing a computer to execute the data request processing method of the first aspect or any of the embodiments corresponding thereto.
In a fifth aspect, the present invention provides a computer program product comprising computer instructions for causing a computer to perform the data request processing method of the first aspect or any of its corresponding embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the related art, the drawings that are required to be used in the description of the embodiments or the related art will be briefly described, and it is apparent that the drawings in the description below are some embodiments of the present invention, and other drawings may be obtained according to the drawings without inventive effort for those skilled in the art.
FIG. 1 is a flow chart of processing a read request using a conventional scheme according to an embodiment of the present invention;
FIG. 2 is a flow chart of a data request processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a data block processing queue according to an embodiment of the invention;
FIG. 4 is a flow diagram of processing a read request according to an embodiment of the invention;
FIG. 5 is a block diagram of a data request processing apparatus according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a hardware structure of a computer device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Because the DRAM used as the cache in a storage system costs more than nonvolatile storage devices such as HDDs and SSDs, the DRAM capacity configured in a storage system is limited and cannot cache the data corresponding to all data requests. When the DRAM is full while a data request is being processed and new data needs to be cached, existing data in the DRAM must be replaced to make room for the new data. Commonly used cache replacement algorithms include FIFO (First In First Out), LRU (Least Recently Used), LFU (Least Frequently Used), and so on. The flow of processing a data request in the conventional scheme is shown in fig. 1, taking the read request flow as an example: the read request is issued to the cache module, which queries whether the cache is hit; if hit, the data is read from the cache, returned to the host, and the flow ends. If the cache is not hit, the data is read from the back-end data disk and returned to the cache module, which judges whether the cache space is full; if not full, the read data is placed into the cache and then returned to the host. If full, part of the data is invalidated according to a replacement algorithm to release cache space, the read data is placed into the cache, and the data is then returned to the host, ending the flow. Similarly, for a write request, when data is written to the cache and the cache space is full, existing data must be replaced by a cache replacement algorithm before the new data can be written.
In the above conventional scheme, if data needs to be placed into the cache while the cache space is full, existing data in the cache must be evicted by a cache replacement algorithm before the new data can be stored. This involves running the replacement algorithm, releasing cache resources, applying for cache resources for the data request, and other processing steps, and the time these steps consume is included in the processing time of the request. Because cache capacity is limited and a full cache is the normal state of a storage system in use, every data request that touches cached data must execute these steps, which lengthens the processing latency of each data request and reduces system performance.
Based on the above, an embodiment of the present invention provides a data request processing method that keeps the cache replacement process out of the flow of processing data requests: when the cache occupancy level is higher than a preset threshold, a data block to be cleaned is determined and the cache space it occupies is released. The processing time of cache replacement is therefore not included in the time taken to process data requests, and free cache space in the cache is directly available while each data request is processed. This reduces the response time of data request processing, lowers the processing latency of data requests, and improves the read/write performance of the storage system.
According to an embodiment of the present invention, there is provided a data request processing embodiment, it should be noted that the steps shown in the flowchart of the drawings may be performed in a computer device having data processing capability, such as a computer, a server, etc., and that although a logical order is shown in the flowchart, in some cases, the steps shown or described may be performed in an order different from that shown or described herein.
In this embodiment, a data request processing method is provided, fig. 2 is a flowchart of a data request processing method according to an embodiment of the present invention, and as shown in fig. 2, the flowchart includes the following steps:
Step S201, obtaining the cache occupancy level.
Specifically, the DRAM is the cache in the storage system. The cache occupancy level is obtained as, for example, the size of the used space in the DRAM divided by the total size of the DRAM. From the cache occupancy level, the usage of the DRAM can be determined, i.e., whether the DRAM has sufficient free cache space for the subsequent processing of data requests.
Step S202, when the cache occupancy level is higher than a preset threshold, determining the data block to be cleaned and releasing the cache space occupied by the data block to be cleaned.
Specifically, the preset threshold is, for example, 70%, 80% or 90%; the specific value is set according to actual requirements. If the cache occupancy level is higher than the preset threshold, the free cache space available in the DRAM for processing data requests is insufficient, and cache replacement is needed.
During the processing of a data request, the data corresponding to the request is saved into the DRAM in the form of data blocks, which makes it convenient to respond efficiently to the host that issued the request.
In addition, multiple preset thresholds can be set, with different cache replacement speeds adopted when the cache occupancy level exceeds different thresholds.
Step S203, when a data request is received from the host, determining a target cache space from the free cache space in the cache, saving the data corresponding to the data request into the target cache space, and responding to the data request.
Specifically, the data request is, for example, an IO request, and includes write requests and read requests. When a data request is received from the host, a target cache space is determined from the free cache space in the DRAM. The size of the target cache space is greater than or equal to the data size of the request, and the target cache space is preferably contiguous, so that only one data transfer needs to be initiated when responding to the host, improving response speed. The data corresponding to the request is saved into the target cache space in the form of a data block, and the data in the data block is returned to the host, thereby responding to the data request.
According to the data request processing method provided by this embodiment, when the cache occupancy level is higher than the preset threshold, a data block to be cleaned is determined and the cache space it occupies is released. Cache replacement is thus performed outside the flow of processing data requests, so its processing time is not included in the time taken to handle a request. By releasing the space occupied by cleaned data blocks in time, free cache space is directly available while each data request is processed, which reduces the response time of request processing, lowers request latency, and improves the read/write performance of the storage system. This solves the problem that, if the cache is full while a data request is being processed, replacing existing data in the cache increases processing latency.
In some alternative embodiments, before obtaining the cache occupancy level, the method further includes:
acquiring the number of threads used to process cache replacement tasks;
and generating a first preset number of data block processing queues according to the number of threads, where the data block processing queues are used to determine and release the data blocks to be cleaned.
Specifically, during the processing of a data request, the data corresponding to the request is saved into the DRAM in the form of data blocks, which makes it convenient to respond efficiently to the host that issued the request.
This embodiment keeps the data blocks in the DRAM in multiple data block processing queues, which makes it convenient to manage them, including determining and releasing the data blocks to be cleaned. A first preset number of data block processing queues must therefore be generated first. The process includes acquiring the number m of threads that process cache replacement tasks and generating the first preset number of data block processing queues accordingly, where the first preset number can be equal to the thread number m, i.e., one bidirectional queue is needed for each thread that processes cache replacement tasks.
Taking the first preset number equal to the thread number m as an example, this embodiment is described with reference to fig. 3. With m threads processing cache replacement tasks, m bidirectional queues are correspondingly created as the data block processing queues. As shown in fig. 3, m task queues are created: task queue 0, task queue 1, ..., task queue m-1. The task queues are the data block processing queues, and each task queue holds a number of cached data blocks.
In this embodiment, a first preset number of data block processing queues is generated according to the number of threads processing cache replacement tasks. The queues are used to manage the data blocks and to determine and release the blocks to be cleaned, so that free cache space in the cache can be used directly while each data request is processed, reducing the response time of request processing.
In some alternative embodiments, after generating the first preset number of data block processing queues, the method further comprises:
Determining the identification of a logic volume to which the data block belongs in the cache;
And determining a data block processing queue corresponding to the data block according to the identification of the logical volume and the first preset quantity, and placing the data block into the corresponding data block processing queue.
Specifically, the identification of the logical volume (Logical Unit Number, LUN) to which the data block belongs in the cache is determined, where the LUN is a logical unit number and is a logical volume in the storage system.
The identification (LUN ID) of the logical volume to which the data block belongs in the cache is determined. Taking the first preset number equal to the thread number m as an example, the queue into which a cached data block is placed is determined by the LUN to which the block belongs: queue ID = LUN ID % m, where % denotes the remainder operation. That is, the queue number is the remainder of the LUN number divided by m: data of LUN 0 is placed in queue 0, data of LUN 1 in queue 1, data of LUN m-1 in queue m-1, data of LUN m in queue 0, and so on. As shown in fig. 3, the data block processing queue corresponding to cache data blocks 0-1, 0-2, ..., 0-n is task queue 0, so those blocks are placed in task queue 0; the other task queues follow the same pattern and are not described again here.
And determining a data block processing queue corresponding to the data block according to the data block processing queue ID, and placing the data block into the corresponding data block processing queue.
In this embodiment, a data block processing queue corresponding to a data block is determined, and the data block is placed in the corresponding data block processing queue, and the data block is managed by using the data block processing queue, so that the data block is cleaned conveniently and the buffer space occupied by the data block is released.
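The LUN-to-queue mapping above (queue ID = LUN ID % m) can be sketched as follows; the function name is illustrative:

```python
def queue_for_block(lun_id, queue_count):
    """Determine the data block processing queue for a cached block:
    queue ID = LUN ID % m, where % is the remainder operation."""
    return lun_id % queue_count

# With m = 4: LUN 0 -> queue 0, LUN 1 -> queue 1, LUN 4 -> queue 0 again.
```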
In some alternative embodiments, after placing the data block in the corresponding data block processing queue, the method further comprises:
determining a first replacement priority of the data block;
And generating sequence numbers of the data blocks in a first preset number of data block processing queues according to the first replacement priority.
Specifically, in each data block processing queue, according to the corresponding cache replacement algorithm, the data block to be replaced first is placed at the head of the data block processing queue, and the other data blocks are stored in the linked list sequentially in replacement order.
The replacement order may be determined according to a first replacement priority of the data blocks. For example, the first replacement priority of the data blocks in each data block processing queue may be determined by the time at which each block was added to the queue: the earlier the addition time, the higher the first replacement priority. Alternatively, the first replacement priority may be determined by the usage frequency and hit rate of the data in each block: the lower a block's usage frequency and hit rate over a service period, the higher its first replacement priority.
The sequence numbers of the data blocks are generated in the first preset number of data block processing queues according to the first replacement priority. The higher the first replacement priority, the smaller and earlier the generated sequence number, making data blocks with smaller sequence numbers easier to determine later as data blocks to be cleaned; or, conversely, the higher the first replacement priority, the larger and later the generated sequence number, making data blocks with larger sequence numbers easier to determine later as data blocks to be cleaned.
In this embodiment, according to the first replacement priority, the sequence numbers of the data blocks are generated in the first preset number of data block processing queues, so that the data blocks are convenient to manage, and the data blocks to be cleaned are determined and released according to the sequence numbers, so that the method is accurate and efficient.
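As a hedged sketch of the sequence-number generation, assuming the first replacement priority is derived from the time each block was added to the queue (the earlier the addition, the higher the priority and the smaller the sequence number):

```python
def assign_sequence_numbers(blocks):
    """blocks: list of (block_id, added_at) pairs for one queue.
    Earlier-added blocks get smaller sequence numbers, so they are
    determined as data blocks to be cleaned first."""
    ordered = sorted(blocks, key=lambda item: item[1])
    return {block_id: seq
            for seq, (block_id, _) in enumerate(ordered, start=1)}
```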
In some optional embodiments, in a case where the cache occupancy level is higher than a preset threshold, determining the data block to be cleaned includes:
under the condition that the buffer occupancy water level is higher than a first preset threshold value and lower than a second preset threshold value, taking the data blocks with sequence numbers smaller than a first preset value in a first preset number of data block processing queues as data blocks to be cleaned;
Under the condition that the buffer occupancy water level is higher than a second preset threshold value and lower than a third preset threshold value, taking the data blocks with sequence numbers smaller than the second preset value in the first preset number of data block processing queues as data blocks to be cleaned;
under the condition that the buffer occupancy water level is higher than a third preset threshold value and lower than a fourth preset threshold value, taking the data blocks with sequence numbers smaller than the third preset value in the first preset number of data block processing queues as data blocks to be cleaned;
and under the condition that the buffer occupancy water level is higher than a fourth preset threshold value, taking the data blocks with sequence numbers smaller than the fourth preset value in the first preset number of data block processing queues as data blocks to be cleaned.
Specifically, to keep the cache replacement process out of the data request processing flow, cache replacement is executed in the background. To prevent the foreground data request processing from blocking because the DRAM cache space is full, the background cache replacement process cannot wait to start until the cache space is already full. A preset threshold is therefore set, and the cache replacement process starts when the cache occupancy water level is higher than the preset threshold.
In this embodiment, different preset thresholds are set, and the cache replacement speed is dynamically adjusted according to which preset threshold the cache occupancy water level exceeds. The thresholds include a first preset threshold, for example 70%, a second preset threshold, for example 80%, a third preset threshold, for example 90%, and a fourth preset threshold, for example 95%; the specific values can be adjusted according to actual needs and are not limited here. The following takes the first preset number equal to the thread number m as an example.
When the cache occupancy water level is higher than the first preset threshold and lower than the second preset threshold, the DRAM cache space is insufficient and the cache replacement process needs to be started. The m threads for processing cache replacement tasks simultaneously process the cached data in the m data block processing queues, taking the data blocks whose sequence numbers are smaller than the first preset value in each queue as the data blocks to be cleaned. For example, with a first preset value of 2, the data block with sequence number 1 in each queue is the data block to be cleaned: each thread invalidates the single data block at the head of its data block processing queue, releases the cache space of that data block, and ends the processing.

When the cache occupancy water level is higher than the second preset threshold and lower than the third preset threshold, the DRAM cache space is more severely short and the cache replacement process must proceed quickly. The m threads simultaneously process the cached data in the m data block processing queues, taking the data blocks whose sequence numbers are smaller than the second preset value in each queue as the data blocks to be cleaned. For example, with a second preset value of 3, the data blocks with sequence numbers 1 and 2 in each queue are the data blocks to be cleaned: each thread invalidates the first two data blocks of its queue, releases their cache space, and ends the processing.

When the cache occupancy water level is higher than the third preset threshold and lower than the fourth preset threshold, the DRAM cache space is very insufficient and the cache replacement process must proceed even faster. The m threads simultaneously process the cached data in the m data block processing queues, taking the data blocks whose sequence numbers are smaller than the third preset value in each queue as the data blocks to be cleaned. For example, with a third preset value of 5, the data blocks with sequence numbers 1 to 4 in each queue are the data blocks to be cleaned: each thread invalidates the first four data blocks of its queue, releases their cache space, and ends the processing.

When the cache occupancy water level is higher than the fourth preset threshold, the DRAM cache space is critically insufficient and the cache replacement process must proceed at the fastest speed. The m threads simultaneously process the cached data in the m data block processing queues, taking the data blocks whose sequence numbers are smaller than the fourth preset value in each queue as the data blocks to be cleaned. For example, with a fourth preset value of 9, the data blocks with sequence numbers 1 to 8 in each queue are the data blocks to be cleaned: each thread invalidates the first eight data blocks of its queue, releases their cache space, and ends the processing.
In this embodiment, the cache occupancy water level is compared against the first through fourth preset thresholds, and the number of data blocks to be cleaned in each data block processing queue is determined from the comparison result, thereby adjusting the cache replacement speed. By using the system's computing resources reasonably, it is ensured that free cache space in the cache can be used directly in the process of handling each data request.
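Using the example thresholds (70/80/90/95%) and preset values (2/3/5/9) from the text, the tiered selection of data blocks to be cleaned can be sketched as:

```python
def blocks_to_clean(occupancy):
    """Map the cache occupancy water level to the number of head
    blocks each data block processing queue should invalidate.
    Thresholds and preset values follow the examples in the text."""
    if occupancy > 0.95:
        return 8   # sequence numbers 1 to 8 (fourth preset value 9)
    if occupancy > 0.90:
        return 4   # sequence numbers 1 to 4 (third preset value 5)
    if occupancy > 0.80:
        return 2   # sequence numbers 1 and 2 (second preset value 3)
    if occupancy > 0.70:
        return 1   # sequence number 1 only (first preset value 2)
    return 0       # below the first threshold: no replacement needed
```

Each of the m replacement threads would then remove this many blocks from the head of its own queue.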
In some optional embodiments, during the release of the data blocks to be cleaned, directly deleting a data block to be cleaned might affect the read-write service requirements of subsequent users. The data block to be cleaned may therefore be temporarily locked first and the read-write service condition over a subsequent period observed; only if the user's read-write service requirements are unaffected is the data block to be cleaned deleted and its cache space released. The specific flow comprises steps A1 to A4.
And step A1, locking the determined data block to be cleaned.
Specifically, after the data block to be cleaned is locked, the data block to be cleaned cannot be read, and data cannot be written into the data block to be cleaned.
And step A2, judging whether the transmission bandwidth corresponding to the user is reduced in a preset time period.
Specifically, judging whether the transmission bandwidth has dropped allows for a 5% margin of error; that is, an effect of 5% or less may be treated as measurement error. If the transmission bandwidth has not dropped within the preset time period, it can be determined that deleting the data block to be cleaned will not affect the read-write service requirements of subsequent users, so the data block to be cleaned can be deleted and its cache space released.
And step A3, deleting the data to be cleaned in the data block processing queue and releasing the buffer memory space of the data to be cleaned if the transmission bandwidth is not reduced in the preset time period.
Specifically, the preset time period is, for example, two weeks, over which the detection runs. If the transmission bandwidth does not drop during the preset time period, it is determined that the user's read-write service efficiency is unaffected; the data to be cleaned is then deleted from the data block processing queue and its cache space released. It should be noted that the preset time period may be set according to actual requirements, for example made longer or shorter.
And step A4, unlocking the data block to be cleaned and redetermining the data block to be cleaned if the transmission bandwidth is reduced within a preset time period.
Specifically, if the transmission bandwidth drop ratio exceeds the threshold within the preset time period, it is determined that locking the data block to be cleaned affects the user's read-write service, and the data block is unlocked. Other data blocks are then cyclically examined and the data block to be cleaned is redetermined, so that redundant cache space is reclaimed and the cache resources are utilized to the maximum.
In this embodiment, the data block to be cleaned is temporarily locked first, and the read-write service condition of the next period of time is acquired, if the read-write service requirement of the user is not affected, the data block to be cleaned is deleted, and the buffer space of the data block to be cleaned is released, so that the buffer resource is maximally utilized.
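Steps A1 to A4 can be sketched as follows. CacheBlock and the bandwidth arguments are illustrative stand-ins, not elements of the claimed system:

```python
class CacheBlock:
    """Minimal stand-in for a cached data block (illustrative only)."""
    def __init__(self):
        self.locked = False
        self.deleted = False

    def lock(self):
        self.locked = True

    def unlock(self):
        self.locked = False

    def delete(self):
        self.deleted = True


def try_release(block, bandwidth_before, bandwidth_after, tolerance=0.05):
    """Steps A1-A4: lock the candidate block, compare the transmission
    bandwidth before and after over the monitoring window, then either
    delete the block (drop within the 5% error margin) or unlock it."""
    block.lock()                                   # step A1
    drop = (bandwidth_before - bandwidth_after) / bandwidth_before
    if drop <= tolerance:                          # steps A2/A3: no real drop
        block.delete()                             # free its cache space
        return True
    block.unlock()                                 # step A4: user I/O affected
    return False
```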
In some optional embodiments, after determining the data block to be cleaned and releasing the buffer space occupied by the data block to be cleaned, the method further includes:
determining a second replacement priority of the remaining data blocks in the first preset number of data block processing queues;
And updating the sequence numbers of the rest data blocks according to the second replacement priority.
Specifically, after the data block to be cleaned is deleted from the data block processing queue and the cache space it occupied is released, the sequence numbers of the data blocks in that queue are no longer consecutive. For example, in task queue 1 of fig. 3, after cache data block 1-1 is deleted, the sequence numbers in task queue 1 start from 1-2, which is inconvenient for subsequently determining the data block to be cleaned.
Therefore, a second replacement priority of the remaining data blocks in the first preset number of data block processing queues is redetermined, and the sequence numbers of the remaining data blocks are updated according to it. If the second replacement priority is computed in the same way as the first, for example both generated from the time each data block was added to the queue, the update is direct: as shown in fig. 3, after cache data block 1-1 is deleted from task queue 1, the sequence numbers in task queue 1 start from 1-2, so sequence number 1-2 is changed to 1-1, sequence number 1-3 to 1-2, and so on. If the second replacement priority differs from the first, for example the first is generated from the time each data block was added to the queue while the second is generated from the usage frequency and hit rate of the data in each block, the sequence numbers of the remaining data blocks are simply regenerated according to the second replacement priority. The positions of the remaining data blocks in the data block processing queue are then dynamically exchanged according to the updated sequence numbers.
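A minimal sketch of renumbering the remaining blocks after a deletion, assuming the second replacement priority preserves the existing queue order (names are illustrative):

```python
def update_sequence_numbers(remaining_blocks):
    """remaining_blocks: block IDs in replacement order after a block
    was deleted. Returns {block_id: new_seq} with consecutive sequence
    numbers from 1, closing the gap left by the deleted block."""
    return {block_id: seq
            for seq, block_id in enumerate(remaining_blocks, start=1)}

# After deleting block 1-1, blocks 1-2, 1-3, 1-4 take numbers 1, 2, 3.
```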
In some alternative embodiments, in a case of receiving a data request from a host, determining a target cache space from a free cache space in a cache, saving data corresponding to the data request to the target cache space, and responding to the data request, including:
If the data request is a read request, if no data block corresponding to the read request exists in the cache, determining a target cache space from an idle cache space in the cache;
Acquiring data corresponding to a read request from a preset storage medium, writing the data corresponding to the read request into a target cache space, and generating a read request data block corresponding to the read request;
And returning the data in the read request data block to the host.
Specifically, in this embodiment, the processing logic that does not need to process the cache replacement in the flow of processing the read request may directly determine the target cache space from the free cache space in the DRAM.
It is judged whether a data block corresponding to the read request exists in the DRAM, i.e., whether the read request hits the cache. If such a data block exists, the data requested by the read is already in the DRAM, so there is no need to read it from a storage medium such as an HDD or SSD; the data corresponding to the read request is read directly from the DRAM and returned to the host, reducing the response time of the read request.
If no data block corresponding to the read request exists in the cache, the requested data is not in the DRAM, and a target cache space is determined from the free cache space in the DRAM. The data corresponding to the read request is acquired from a preset storage medium, i.e., a nonvolatile storage medium such as an HDD or SSD that is slow relative to the DRAM. After the data is obtained, it is written into the target cache space of the DRAM and a read request data block corresponding to the read request is generated, so that subsequent requests can hit the cache. Finally, the data in the read request data block is returned to the host.
The above process is shown in fig. 4: the read request is issued to the cache module, and it is judged whether the read request hits the cache. If so, the data is read from the cache and returned to the host; if not, the data is read from the back-end data disk, returned to the cache module, placed in the cache, and then returned to the host, ending the process.
In the embodiment, the processing process of cache replacement is separated from the flow of processing the data request, so that the free cache space in the cache can be directly used in the process of processing each read request, and the processing time delay of the read request is reduced.
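The read path of fig. 4 can be sketched as follows; cache and backend are plain dicts standing in for the DRAM and the back-end data disk (illustrative only):

```python
def handle_read(cache, backend, key):
    """Read path: on a hit, return the data from cache; on a miss,
    fetch it from the backing store, populate the cache so later
    requests hit, then return it to the host."""
    if key in cache:            # cache hit
        return cache[key]
    data = backend[key]         # miss: read from the slow medium
    cache[key] = data           # generate the read request data block
    return data
```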
In some optional embodiments, releasing the buffer space occupied by the data block to be cleaned includes:
deleting the data block to be cleaned in the data block processing queue under the condition that the data block to be cleaned is a read request data block;
and replacing the positions of the rest data blocks in the data block processing queue by using a preset algorithm.
Specifically, in the case where the data block to be cleaned is a read request data block, the data block to be cleaned in the data block processing queue may be directly deleted.
After the data block to be cleaned is deleted from the data block processing queue and the cache space it occupied is released, the sequence numbers of the data blocks in the queue are no longer consecutive. The positions of the remaining data blocks in the data block processing queue are therefore dynamically exchanged using a preset algorithm and their sequence numbers updated: for example, the sequence numbers of the remaining data blocks are updated using the second replacement priority, and the positions of the remaining blocks in the queue are dynamically exchanged according to the updated sequence numbers.
In some alternative embodiments, in a case of receiving a data request from a host, determining a target cache space from a free cache space in a cache, saving data corresponding to the data request to the target cache space, and responding to the data request, including:
Determining a target cache space from the free cache space in the cache under the condition that the data request is a write request;
Writing the data corresponding to the write request into the target cache space, generating a write request data block corresponding to the write request, and responding to the host.
Specifically, when the data request is a write request, the target cache space is determined from the free cache space in the DRAM. The size of the target cache space is greater than or equal to the data size of the write request, and the target cache space is preferably contiguous, so that writing the data of the write request into it requires initiating only one data transfer, which improves the processing speed of the data request.
For write requests, processing can complete, and the host's data request be answered, once the data of the write request has been written to the DRAM. The data written into the DRAM is then written in the background to a nonvolatile storage medium, such as an HDD or SSD, that is slow relative to the DRAM, which reduces the system's response time to write requests. Therefore, the data corresponding to the write request is written into the target cache space, a write request data block corresponding to the write request is generated, and the host is responded to.
In the embodiment, the processing process of cache replacement is separated from the flow of processing the data request, so that the free cache space in the cache can be directly used in the process of processing each write request, and the processing time delay of the write request is reduced.
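The write path can be sketched similarly; cache is a plain dict standing in for the DRAM, dirty marks blocks awaiting background write-back, and all names are illustrative:

```python
def handle_write(cache, dirty, key, data):
    """Write path: place the data in the target cache space, mark the
    block dirty so a background task flushes it to HDD/SSD later, and
    acknowledge the host immediately."""
    cache[key] = data
    dirty.add(key)      # flushed off the request path, in the background
    return "ack"
```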
In some optional embodiments, releasing the buffer space occupied by the data block to be cleaned includes:
Judging whether the data of the data block to be cleaned is written into a preset storage medium or not under the condition that the data block to be cleaned is a writing request data block;
if the data block is written, deleting the data block to be cleaned in the data block processing queue;
And if the data is not written in, writing the data of the data block to be cleaned in a preset storage medium, and deleting the data block to be cleaned in the data block processing queue.
Specifically, data written to the DRAM is written in the background to a preset storage medium, i.e., a nonvolatile storage medium such as an HDD or SSD that is slow relative to the DRAM. This background writing proceeds concurrently with the release of the cache space occupied by data blocks to be cleaned, so when a write request data block is determined to be a data block to be cleaned, its data may not yet have been written to the nonvolatile HDD or SSD medium; deleting the write request data block directly at that point would lose the data.
And judging whether the data of the data block to be cleaned is written into a preset storage medium, if so, directly deleting the data block to be cleaned in the data block processing queue, and if not, deleting the data block to be cleaned in the data block processing queue after writing the data of the data block to be cleaned into the preset storage medium.
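The flush-before-delete check can be sketched as follows; cache, dirty, and backend are plain dicts/sets standing in for the DRAM, the set of not-yet-flushed blocks, and the preset storage medium (illustrative only):

```python
def release_write_block(cache, dirty, backend, key):
    """A write request data block must reach the preset storage medium
    before its cache space is reclaimed; otherwise the data would be
    lost. Dirty blocks are flushed first, then deleted."""
    if key in dirty:               # data not yet written back
        backend[key] = cache[key]  # write to the preset storage medium
        dirty.discard(key)
    del cache[key]                 # safe now: free the cache space
```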
In some optional embodiments, obtaining the cache occupancy level includes:
starting a preset timer, and calling a timer processing function when the duration of the preset timer is the preset duration;
and obtaining the occupied water level of the cache by using a timer processing function.
Specifically, obtaining the cache occupancy water level in real time would consume excessive system computing resources. A timer can therefore be set so that the cache occupancy water level is obtained once per interval and it is judged whether the cache replacement process is needed.
A preset timer is enabled, for example a 100 ms timer, the preset duration thus being 100 ms. When the preset timer reaches the preset duration, a timer processing function is called. Within the timer processing function, the cache occupancy water level is obtained and it is judged whether the current occupancy water level reaches a threshold, for example 70%. If the threshold is not reached, no action is taken; if it is reached, replacement of data in the cache begins, i.e., the corresponding data is invalidated and the corresponding cache space is released.
In this embodiment, the cache occupancy water level is obtained when the preset timer reaches the preset duration. This ensures the occupancy water level is obtained in time to start the cache replacement process, while avoiding the excessive computing resources that real-time acquisition would consume.
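A sketch of the periodic sampling using Python's threading.Timer; the 100 ms interval follows the example in the text, and check_fn plays the role of the timer processing function (all names are illustrative):

```python
import threading

def start_watermark_timer(check_fn, interval_s=0.1):
    """Re-arming timer (100 ms in the text) that samples the cache
    occupancy water level periodically instead of polling it on every
    request."""
    def tick():
        check_fn()                                    # timer handler runs
        start_watermark_timer(check_fn, interval_s)   # re-arm for next tick
    timer = threading.Timer(interval_s, tick)
    timer.daemon = True        # do not keep the process alive
    timer.start()
    return timer
```

Each tick re-arms the timer, so the water level is sampled once per interval rather than on every data request.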
In some optional embodiments, obtaining the cache occupancy level includes:
Acquiring a used cache space and a total cache space of a cache;
and taking the ratio of the used buffer space to the total buffer space as the buffer occupied water level.
Specifically, to keep the cache replacement process out of the data request processing flow, cache replacement is executed in the background. To prevent the foreground data request processing from blocking because the DRAM cache space is full, the background cache replacement process cannot wait to start until the cache space is already full. A preset threshold is therefore set, and the cache replacement process starts when the cache occupancy water level is higher than the preset threshold.
The DRAM serves as the cache in the storage system. The cache occupancy water level is obtained as, for example, the used cache space of the DRAM divided by the total cache space of the DRAM, where the used cache space is the sum of the cache space of all data blocks in the DRAM. From the occupancy water level, the usage of the DRAM can be determined, establishing whether the DRAM has sufficient free cache space for the subsequent processing of data requests.
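The water-level computation itself is a simple ratio; a one-line sketch:

```python
def cache_occupancy(used_bytes, total_bytes):
    """Occupancy water level = used cache space / total cache space,
    where used space is the sum of the space of all cached blocks."""
    return used_bytes / total_bytes
```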
The embodiment also provides a data request processing device, which is used for implementing the foregoing embodiments and preferred embodiments, and is not described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
The present embodiment provides a data request processing apparatus, as shown in fig. 5, including:
An obtaining module 501, configured to obtain a cache occupied water level;
The releasing module 502 is configured to determine a data block to be cleaned and release a buffer space occupied by the data block to be cleaned when the buffer occupied water level is higher than a preset threshold;
the processing module 503 is configured to determine a target cache space from the free cache space in the cache when the data request is received from the host, save the data corresponding to the data request to the target cache space, and respond to the data request.
In some alternative embodiments, the apparatus is configured to, prior to obtaining the cache occupancy level:
Acquiring the thread number for processing the cache replacement task;
and generating a first preset number of data block processing queues according to the thread number, wherein the data block processing queues are used for determining and releasing the data blocks to be cleaned.
In some alternative embodiments, after generating the first preset number of data block processing queues, the apparatus is further configured to:
Determining the identification of a logic volume to which the data block belongs in the cache;
And determining a data block processing queue corresponding to the data block according to the identification of the logical volume and the first preset quantity, and placing the data block into the corresponding data block processing queue.
In some alternative embodiments, the apparatus is further configured to, after placing the data blocks in the corresponding data block processing queues:
determining a first replacement priority of the data block;
And generating sequence numbers of the data blocks in a first preset number of data block processing queues according to the first replacement priority.
In some optional embodiments, the releasing module 502 determines the data block to be cleaned if the buffer occupancy level is higher than a preset threshold, including:
under the condition that the buffer occupancy water level is higher than a first preset threshold value and lower than a second preset threshold value, taking the data blocks with sequence numbers smaller than a first preset value in a first preset number of data block processing queues as data blocks to be cleaned;
Under the condition that the buffer occupancy water level is higher than a second preset threshold value and lower than a third preset threshold value, taking the data blocks with sequence numbers smaller than the second preset value in the first preset number of data block processing queues as data blocks to be cleaned;
under the condition that the buffer occupancy water level is higher than a third preset threshold value and lower than a fourth preset threshold value, taking the data blocks with sequence numbers smaller than the third preset value in the first preset number of data block processing queues as data blocks to be cleaned;
and under the condition that the buffer occupancy water level is higher than a fourth preset threshold value, taking the data blocks with sequence numbers smaller than the fourth preset value in the first preset number of data block processing queues as data blocks to be cleaned.
In some alternative embodiments, after determining the data block to be cleaned and freeing up the buffer space occupied by the data block to be cleaned, the freeing module 502 is further configured to:
determining a second replacement priority of the remaining data blocks in the first preset number of data block processing queues;
And updating the sequence numbers of the rest data blocks according to the second replacement priority.
In some optional embodiments, the processing module 503 determining a target cache space from the free cache space in the cache when a data request is received from the host, saving the data corresponding to the data request in the target cache space, and responding to the data request includes:
when the data request is a read request and no data block corresponding to the read request exists in the cache, determining the target cache space from the free cache space in the cache;
acquiring the data corresponding to the read request from a preset storage medium, writing the data corresponding to the read request into the target cache space, and generating a read request data block corresponding to the read request;
and returning the data in the read request data block to the host.
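The read-miss path above can be sketched as follows. This is a simplified illustration, not the patented implementation: a plain dict stands in for the cache space, and the key, field names, and function name are assumptions.

```python
def handle_read(cache, backing_store, key):
    """Read path sketch: on a miss, fetch the data from the preset storage
    medium, record it in the cache as a read request data block, and return
    the data to the caller (the host)."""
    if key in cache:  # hit: serve straight from the cache
        return cache[key]["data"]
    data = backing_store[key]  # miss: read from the preset storage medium
    cache[key] = {"data": data, "kind": "read"}  # read request data block
    return data
```

A second read of the same key is then served from the cache without touching the backing store.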
In some optional embodiments, the processing module 503 releasing the cache space occupied by the data block to be cleaned includes:
when the data block to be cleaned is a read request data block, deleting the data block to be cleaned from the data block processing queue;
and adjusting the positions of the remaining data blocks in the data block processing queue by using a preset algorithm.
In some optional embodiments, the processing module 503 determining a target cache space from the free cache space in the cache when a data request is received from the host, saving the data corresponding to the data request in the target cache space, and responding to the data request includes:
when the data request is a write request, determining the target cache space from the free cache space in the cache;
and writing the data corresponding to the write request into the target cache space, generating a write request data block corresponding to the write request, and responding to the host.
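A corresponding write-path sketch, under the same simplifying assumptions as before (dict-backed cache, hypothetical field names): the data lands in the cache, the block is marked dirty because it has not yet reached the preset storage medium, and the host is acknowledged immediately.

```python
def handle_write(cache, key, data):
    """Write path sketch: place the data in free cache space, record it as a
    write request data block that is still dirty, and respond to the host.
    The flush to the storage medium happens later, at eviction time."""
    cache[key] = {"data": data, "kind": "write", "dirty": True}
    return "ack"  # illustrative host response
```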
In some optional embodiments, the processing module 503 releasing the cache space occupied by the data block to be cleaned includes:
when the data block to be cleaned is a write request data block, judging whether the data of the data block to be cleaned has been written into a preset storage medium;
if it has been written, deleting the data block to be cleaned from the data block processing queue;
and if it has not been written, writing the data of the data block to be cleaned into the preset storage medium, and then deleting the data block to be cleaned from the data block processing queue.
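The write-back-before-delete rule above can be sketched as a small eviction routine. Again this is only an illustration under assumed data structures: a `dirty` flag models "not yet written to the preset storage medium".

```python
def evict(cache, backing_store, key):
    """Eviction sketch for a write request data block: if the data has not
    yet reached the storage medium, flush it first; only then delete the
    block from the cache/queue so no dirty data is lost."""
    blk = cache[key]
    if blk.get("dirty"):               # data not yet on the medium
        backing_store[key] = blk["data"]  # write back first
    del cache[key]                     # then delete the data block
```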
In some optional embodiments, the obtaining module 501 obtaining the cache occupancy water level includes:
starting a preset timer, and calling a timer processing function when the preset timer reaches a preset duration;
and obtaining the cache occupancy water level by using the timer processing function.
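One way to realize this periodic sampling, sketched here with Python's standard `threading.Timer` (the document does not prescribe a mechanism, and all names are illustrative): the handler reads the water level, reports it, and re-arms the timer.

```python
import threading

def start_watermark_timer(interval, get_watermark, on_sample):
    """Periodic sampling sketch: after `interval` seconds the timer
    processing function runs, obtains the cache occupancy water level via
    `get_watermark`, hands it to `on_sample`, and re-arms itself."""
    def handler():
        on_sample(get_watermark())  # the "timer processing function"
        start_watermark_timer(interval, get_watermark, on_sample)
    t = threading.Timer(interval, handler)
    t.daemon = True  # do not keep the process alive for sampling alone
    t.start()
    return t
```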
In some optional embodiments, the obtaining module 501 obtaining the cache occupancy water level includes:
acquiring the used cache space and the total cache space of the cache;
and taking the ratio of the used cache space to the total cache space as the cache occupancy water level.
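The ratio computation is straightforward; a minimal sketch with a guard for a zero-sized cache (the guard is an addition for robustness, not something stated in the text):

```python
def cache_watermark(used_bytes, total_bytes):
    """Cache occupancy water level as the used-to-total space ratio."""
    if total_bytes <= 0:
        raise ValueError("total cache space must be positive")
    return used_bytes / total_bytes
```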
Further functional descriptions of the above respective modules and units are the same as those of the above corresponding embodiments, and are not repeated here.
The data request processing apparatus in this embodiment is presented in the form of functional units, where a unit refers to an ASIC (Application-Specific Integrated Circuit), a processor and a memory that execute one or more software or firmware programs, and/or other devices that can provide the above functions.
The embodiment of the present invention also provides a computer device equipped with the data request processing apparatus shown in fig. 5.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a computer device according to an optional embodiment of the present invention. As shown in fig. 6, the computer device includes one or more processors 10, a memory 20, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The components are communicatively coupled to each other using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the computer device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In some optional embodiments, multiple processors and/or multiple buses may be used together with multiple memories, if desired. Likewise, multiple computer devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 10 is illustrated in fig. 6.
The processor 10 may be a central processing unit, a network processor, or a combination thereof. The processor 10 may further include an integrated circuit, a programmable logic device, or a combination thereof. The programmable logic device may be a complex programmable logic device, a field programmable gate array, generic array logic, or any combination thereof.
Wherein the memory 20 stores instructions executable by the at least one processor 10 to cause the at least one processor 10 to perform a method for implementing the embodiments described above.
The memory 20 may include a storage program area that may store an operating system, application programs required for at least one function, and a storage data area that may store data created according to the use of the computer device, etc. In addition, the memory 20 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some alternative embodiments, memory 20 may optionally include memory located remotely from processor 10, which may be connected to the computer device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The memory 20 may comprise volatile memory, such as random access memory, or nonvolatile memory, such as flash memory, hard disk or solid state disk, or the memory 20 may comprise a combination of the above types of memory.
The computer device also includes a communication interface 30 for the computer device to communicate with other devices or communication networks.
The embodiments of the present invention also provide a computer-readable storage medium. The method according to the embodiments of the present invention described above may be implemented in hardware or firmware, or as computer code that may be recorded on a storage medium, or as computer code that is originally stored in a remote storage medium or a non-transitory machine-readable storage medium, downloaded through a network, and stored in a local storage medium, so that the method described herein may be carried out by software stored on such a storage medium using a general-purpose computer, a special-purpose processor, or programmable or special-purpose hardware. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random-access memory, a flash memory, a hard disk, a solid state disk, or the like; further, the storage medium may also include a combination of the above types of memories. It will be appreciated that a computer, processor, microprocessor controller, or programmable hardware includes a storage element that can store or receive software or computer code that, when accessed and executed by the computer, processor, or hardware, implements the methods illustrated in the above embodiments.
Portions of the present invention may be implemented as a computer program product, such as computer program instructions, which, when executed by a computer, may invoke or provide the methods and/or aspects of the present invention through the operation of the computer. Those skilled in the art will appreciate that the forms in which computer program instructions exist in a computer-readable medium include, but are not limited to, source files, executable files, installation package files, and the like; accordingly, the manners in which computer program instructions are executed by a computer include, but are not limited to: the computer directly executing the instructions, the computer compiling the instructions and then executing the corresponding compiled program, the computer reading and executing the instructions, or the computer reading and installing the instructions and then executing the corresponding installed program. Herein, a computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by a computer.
Although the embodiments of the present application have been described with reference to the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the application, and such modifications and variations fall within the scope of the application as defined by the claims.

Claims (13)

1. A data request processing method, characterized in that the method comprises:
obtaining the number of threads for processing cache replacement tasks; generating a first preset number of data block processing queues according to the number of threads, wherein the data block processing queues are used to determine and release data blocks to be cleaned;
obtaining a cache occupancy water level;
when the cache occupancy water level is higher than a preset threshold, determining the data block to be cleaned, and releasing the cache space occupied by the data block to be cleaned;
wherein releasing the cache space occupied by the data block to be cleaned comprises: when the data block to be cleaned is a write request data block, judging whether the data of the data block to be cleaned has been written into a preset storage medium; if it has been written, deleting the data block to be cleaned from the data block processing queue; and if it has not been written, writing the data of the data block to be cleaned into the preset storage medium, and deleting the data block to be cleaned from the data block processing queue; and
when a data request is received from a host, determining a target cache space from the free cache space in the cache, saving the data corresponding to the data request in the target cache space, and responding to the data request, wherein the free cache space in the cache is obtained after releasing the cache space occupied by the data block to be cleaned.
2. The method according to claim 1, characterized in that after generating the first preset number of data block processing queues, the method further comprises:
determining an identifier of a logical volume to which a data block in the cache belongs; and
determining, according to the identifier of the logical volume and the first preset number, the data block processing queue corresponding to the data block, and putting the data block into the corresponding data block processing queue.
3. The method according to claim 2, characterized in that after putting the data block into the corresponding data block processing queue, the method further comprises:
determining a first replacement priority of the data block; and
generating a sequence number of the data block in the first preset number of data block processing queues according to the first replacement priority.
4. The method according to claim 3, characterized in that determining the data block to be cleaned when the cache occupancy water level is higher than a preset threshold comprises:
when the cache occupancy water level is higher than a first preset threshold and lower than a second preset threshold, taking the data blocks whose sequence numbers are smaller than a first preset value in the first preset number of data block processing queues as the data blocks to be cleaned;
when the cache occupancy water level is higher than the second preset threshold and lower than a third preset threshold, taking the data blocks whose sequence numbers are smaller than a second preset value in the first preset number of data block processing queues as the data blocks to be cleaned;
when the cache occupancy water level is higher than the third preset threshold and lower than a fourth preset threshold, taking the data blocks whose sequence numbers are smaller than a third preset value in the first preset number of data block processing queues as the data blocks to be cleaned; and
when the cache occupancy water level is higher than the fourth preset threshold, taking the data blocks whose sequence numbers are smaller than a fourth preset value in the first preset number of data block processing queues as the data blocks to be cleaned.
5. The method according to claim 3, characterized in that after determining the data block to be cleaned and releasing the cache space occupied by the data block to be cleaned, the method further comprises:
determining a second replacement priority of the remaining data blocks in the first preset number of data block processing queues; and
updating the sequence numbers of the remaining data blocks according to the second replacement priority.
6. The method according to claim 1, characterized in that determining a target cache space from the free cache space in the cache when a data request is received from the host, saving the data corresponding to the data request in the target cache space, and responding to the data request comprises:
when the data request is a read request and no data block corresponding to the read request exists in the cache, determining the target cache space from the free cache space in the cache;
acquiring the data corresponding to the read request from a preset storage medium, writing the data corresponding to the read request into the target cache space, and generating a read request data block corresponding to the read request; and
returning the data in the read request data block to the host.
7. The method according to claim 6, characterized in that releasing the cache space occupied by the data block to be cleaned comprises:
when the data block to be cleaned is the read request data block, deleting the data block to be cleaned from the data block processing queue; and
adjusting the positions of the remaining data blocks in the data block processing queue by using a preset algorithm.
8. The method according to claim 1, characterized in that determining a target cache space from the free cache space in the cache when a data request is received from the host, saving the data corresponding to the data request in the target cache space, and responding to the data request comprises:
when the data request is a write request, determining the target cache space from the free cache space in the cache; and
writing the data corresponding to the write request into the target cache space, generating a write request data block corresponding to the write request, and responding to the host.
9. The method according to claim 1, characterized in that obtaining the cache occupancy water level comprises:
starting a preset timer, and calling a timer processing function when the preset timer reaches a preset duration; and
obtaining the cache occupancy water level by using the timer processing function.
10. The method according to claim 1 or 9, characterized in that obtaining the cache occupancy water level comprises:
acquiring the used cache space and the total cache space of the cache; and
taking the ratio of the used cache space to the total cache space as the cache occupancy water level.
11. A data request processing apparatus, characterized in that the apparatus is configured to obtain the number of threads for processing cache replacement tasks and to generate a first preset number of data block processing queues according to the number of threads, wherein the data block processing queues are used to determine and release data blocks to be cleaned, and the apparatus comprises:
an obtaining module, configured to obtain a cache occupancy water level;
a releasing module, configured to determine the data block to be cleaned and release the cache space occupied by the data block to be cleaned when the cache occupancy water level is higher than a preset threshold, wherein releasing the cache space occupied by the data block to be cleaned comprises: when the data block to be cleaned is a write request data block, judging whether the data of the data block to be cleaned has been written into a preset storage medium; if it has been written, deleting the data block to be cleaned from the data block processing queue; and if it has not been written, writing the data of the data block to be cleaned into the preset storage medium, and deleting the data block to be cleaned from the data block processing queue; and
a processing module, configured to determine a target cache space from the free cache space in the cache when a data request is received from a host, save the data corresponding to the data request in the target cache space, and respond to the data request, wherein the free cache space in the cache is obtained after releasing the cache space occupied by the data block to be cleaned.
12. A computer device, characterized by comprising:
a memory and a processor communicatively connected to each other, wherein the memory stores computer instructions, and the processor executes the computer instructions to perform the data request processing method according to any one of claims 1 to 10.
13. A computer-readable storage medium, characterized in that computer instructions are stored on the computer-readable storage medium, and the computer instructions are used to cause a computer to execute the data request processing method according to any one of claims 1 to 10.
CN202411367369.6A 2024-09-29 2024-09-29 Data request processing method, device, computer equipment and storage medium Active CN118897655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411367369.6A CN118897655B (en) 2024-09-29 2024-09-29 Data request processing method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN118897655A CN118897655A (en) 2024-11-05
CN118897655B true CN118897655B (en) 2025-01-28

Family

ID=93266633

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411367369.6A Active CN118897655B (en) 2024-09-29 2024-09-29 Data request processing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN118897655B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115543938A (en) * 2021-06-30 2022-12-30 腾讯科技(深圳)有限公司 Data processing method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4244572B2 (en) * 2002-07-04 2009-03-25 ソニー株式会社 Cache device, cache data management method, and computer program
CN112015343B (en) * 2020-08-27 2022-07-22 杭州宏杉科技股份有限公司 Cache space management method and device of storage volume and electronic equipment
CN113297098B (en) * 2021-05-24 2023-09-01 北京工业大学 A High-Performance-Oriented Intelligent Cache Replacement Strategy Adapting to Prefetching
CN116366657B (en) * 2023-05-31 2023-08-04 天翼云科技有限公司 Data request scheduling method and system of cache server


Also Published As

Publication number Publication date
CN118897655A (en) 2024-11-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant