
CN119440427A - Shared cache data management method and storage system based on type tracking

Info

Publication number: CN119440427A
Application number: CN202510045450.0A
Authority: CN (China)
Prior art keywords: mapping table, table entry, priority, type, shared cache
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN119440427B
Inventor: 王智麟
Assignee (original and current): Hefei Core Storage Electronic Ltd
Filing events: application filed by Hefei Core Storage Electronic Ltd with priority to CN202510045450.0A; publication of CN119440427A; application granted; publication of CN119440427B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0608: Saving storage space on storage systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638: Organizing or formatting or addressing of data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671: In-line storage system
    • G06F 3/0673: Single storage device
    • G06F 3/0679: Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a shared cache data management method and a storage system based on type tracking. The method includes: configuring a shared cache area in a memory of a host system, wherein a storage device performs a preset operation based on data cached in the shared cache area; tracking the type of at least one mapping table entry cached in the shared cache area to obtain a tracking result; determining the priority of a target mapping table entry among the at least one mapping table entry according to the tracking result; and retaining the target mapping table entry in the shared cache area or removing it from the shared cache area according to the priority. In this way, the performance of the storage device, or of the storage system as a whole, can be effectively improved while affecting the operating performance of the host system as little as possible.

Description

Shared cache data management method and storage system based on type tracking
Technical Field
The present invention relates to the field of storage technologies, and in particular, to a method and a storage system for managing data in a shared cache area based on type tracking.
Background
NAND flash is a non-volatile memory technology widely used in various storage devices. It stores charge in floating gate transistors, each of which serves as a memory cell. NAND flash typically organizes data in pages, each page containing multiple bytes, and pages in turn make up blocks. Data is read and programmed in units of pages, while erase operations are performed in units of blocks. This organization makes NAND flash well suited to mass storage and gives it a high write speed.
However, as the storage capacity of storage devices keeps growing, the amount of management data needed to manage a storage device keeps growing with it, to the point where the small-capacity memory built into the storage device itself can no longer hold it.
Disclosure of Invention
The invention provides a shared cache data management method based on type tracking and a storage system, which can effectively improve the performance of a storage device or of the storage system as a whole while affecting the operating performance of the host system as little as possible.
An embodiment of the invention provides a shared cache data management method based on type tracking, for use in a storage system. The storage system includes a host system and a storage device, and the host system is connected to the storage device. The method includes: configuring a shared cache area in a memory of the host system, wherein the storage device performs a preset operation based on data cached in the shared cache area; tracking the type of at least one mapping table entry cached in the shared cache area to obtain a tracking result; determining the priority of a target mapping table entry among the at least one mapping table entry according to the tracking result; and retaining the target mapping table entry in the shared cache area or removing it from the shared cache area according to the priority.
An embodiment of the invention also provides a storage system, which includes a host system and a storage device. The storage device is connected to the host system. The host system is configured to: configure a shared cache area in a memory of the host system, wherein the storage device performs a preset operation based on data cached in the shared cache area; track the type of at least one mapping table entry cached in the shared cache area to obtain a tracking result; determine the priority of a target mapping table entry among the at least one mapping table entry according to the tracking result; and retain the target mapping table entry in the shared cache area or remove it from the shared cache area according to the priority.
Based on the above, a shared cache area can be configured in the memory of the host system, and the storage device can perform a preset operation based on the data cached in the shared cache area. While the storage system operates, the type of at least one mapping table entry cached in the shared cache area can be tracked to obtain a tracking result. Based on the tracking result, the priority of a target mapping table entry in the shared cache area can be determined. The target mapping table entry can then be retained in, or removed from, the shared cache area according to that priority. In this way, the performance of the storage device, or of the storage system as a whole, can be effectively improved while affecting the operating performance of the host system as little as possible.
Drawings
FIG. 1 is a schematic diagram of a storage system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a memory controller according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of managing a memory module according to an embodiment of the present invention;
FIG. 4 is a flow chart of a shared cache data management method based on type tracking according to an embodiment of the present invention;
FIG. 5 is a flow chart of a shared cache data management method based on type tracking according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments of the present invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings and the description to refer to the same or like parts.
FIG. 1 is a schematic diagram of a storage system according to an embodiment of the present invention. Referring to FIG. 1, a storage system (also referred to as a data storage system) 10 includes a host system 11 and a storage device 12. The storage device 12 can be connected to the host system 11 and used to store data from the host system 11. For example, the host system 11 may be a smartphone, a tablet computer, a notebook computer, a desktop computer, an industrial computer, a game console, a server, or a computer system installed in a specific carrier (e.g., a vehicle, an aircraft, or a ship); the type of the host system 11 is not limited to these. Furthermore, the storage device 12 may include a solid state drive, a USB flash drive, a memory card, or another type of non-volatile storage device.
Host system 11 includes a processor 111 and a memory 112. The processor 111 is responsible for the overall or partial operation of the host system 11. For example, the processor 111 may include a central processing unit (CPU), a graphics processing unit (GPU) or other programmable general purpose or special purpose microprocessor, a digital signal processor (DSP), a programmable controller, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or other similar device or combination of devices.
The memory 112 is connected to the processor 111 and is used for buffering data. For example, the memory 112 may include random access memory (RAM) or a similar volatile storage. It should be noted that the memory 112 is disposed in the host system 11 (for example, on a motherboard of the host system 11 or directly in the processor 111), rather than in the storage device 12.
The storage device 12 includes a connection interface 121, a memory module 122, and a memory controller 123. The connection interface 121 is used to connect the storage device 12 to the host system 11. For example, the connection interface 121 may support embedded MultiMediaCard (eMMC), Universal Flash Storage (UFS), Peripheral Component Interconnect Express (PCI Express), Non-Volatile Memory Express (NVM Express), Serial Advanced Technology Attachment (SATA), Universal Serial Bus (USB), or other types of connection interface standards. Accordingly, the storage device 12 can communicate (e.g., exchange signals, instructions, and/or data) with the host system 11 via the connection interface 121.
The memory module 122 is used for storing data. For example, the memory module 122 may include one or more rewritable non-volatile memory modules. Each of the rewritable non-volatile memory modules may include one or more memory cell arrays. The memory cells in a memory cell array store data in the form of voltages (also referred to as threshold voltages). For example, the memory module 122 may include a single-level cell (SLC) NAND flash memory module, a multi-level cell (MLC) NAND flash memory module, a triple-level cell (TLC) NAND flash memory module, a quad-level cell (QLC) NAND flash memory module, and/or other memory modules with the same or similar characteristics.
The memory controller 123 is connected to the connection interface 121 and the memory module 122. The memory controller 123 may be considered the control core of the storage device 12 and may be used to control the storage device 12. For example, the memory controller 123 may be used to control or manage the operation of the storage device 12 in whole or in part. For example, the memory controller 123 may include a central processing unit (CPU) or other programmable general purpose or special purpose microprocessor, a digital signal processor (DSP), a programmable controller, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or other similar device or combination of devices. In an embodiment, the memory controller 123 may comprise a flash memory controller.
The memory controller 123 may send instruction sequences to the memory module 122 to access the memory module 122. For example, the memory controller 123 may send a write instruction sequence to the memory module 122 to instruct the memory module 122 to store data in a particular memory location. For example, the memory controller 123 may send a read instruction sequence to the memory module 122 to instruct the memory module 122 to read data from a particular memory location. For example, the memory controller 123 may send an erase instruction sequence to the memory module 122 to instruct the memory module 122 to erase data stored in particular memory cells. In addition, the memory controller 123 may send other types of instruction sequences to the memory module 122 to instruct the memory module 122 to perform other types of operations; the invention is not limited in this regard. The memory module 122 may receive the instruction sequences from the memory controller 123 and access memory locations within the memory module 122 according to those instruction sequences.
FIG. 2 is a schematic diagram of a memory controller according to an embodiment of the invention. Referring to fig. 1 and 2, the memory controller 123 includes a host interface 21, a memory interface 22, and a memory control circuit 23. The host interface 21 is used to connect to the host system 11 through the connection interface 121 to communicate with the host system 11. The memory interface 22 is used to connect to the memory module 122 to access the memory module 122.
The memory control circuit 23 is connected to the host interface 21 and the memory interface 22. The memory control circuit 23 may be used to control or manage the operation of the memory controller 123 in whole or in part. For example, the memory control circuit 23 may communicate with the host system 11 through the host interface 21 and access the memory module 122 through the memory interface 22. For example, the memory control circuit 23 may include a control circuit such as an embedded controller or a microcontroller. In the following embodiment, the explanation of the memory control circuit 23 is equivalent to the explanation of the memory controller 123.
In one embodiment, memory controller 123 may also include buffer memory 24. The buffer memory 24 is connected to the memory control circuit 23 and is used for buffering data. For example, buffer memory 24 may be used to buffer instructions from host system 11, data from host system 11, and/or data from memory module 122.
In an embodiment, the memory controller 123 may also include a decoding circuit 25. The decoding circuit 25 is connected to the memory control circuit 23 and performs encoding and decoding of data to ensure data correctness. For example, the decoding circuit 25 may support various encoding/decoding algorithms such as low-density parity-check (LDPC) codes, BCH codes, Reed-Solomon (RS) codes, and exclusive-OR (XOR) codes. In one embodiment, the memory controller 123 may also include other types of circuit modules (e.g., power management circuits, etc.); the invention is not limited in this regard.
FIG. 3 is a schematic diagram of managing a memory module according to an embodiment of the invention. Referring to FIGS. 1 to 3, the memory module 122 includes a plurality of physical units 301(1) to 301(B). Each physical unit includes a plurality of memory cells and is used to store data in a non-volatile manner.
In one embodiment, a physical unit may include one or more physical erase units. Furthermore, one physical unit may comprise a plurality of sub-physical units. For example, a sub-physical unit may include one or more physical programming units.
In one embodiment, a physical programming unit may include a plurality of physical sectors. For example, a physical sector may have a data capacity of 512 bytes, and a physical programming unit may include 32 physical sectors. However, the data capacity of one physical sector and/or the total number of physical sectors included in one physical programming unit can be adjusted according to practical requirements, and the invention is not limited in this regard. In one embodiment, a physical programming unit may be regarded as a physical page. For example, the storage capacity of one physical programming unit may be 16 kilobytes, although the invention is not limited thereto.
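For concreteness, the page geometry just described can be checked with a short calculation. This is only a sketch using the example values above; the 512-byte sector and 32-sector page are illustrative figures from this paragraph, not fixed requirements of the invention.

```python
# Example geometry from the paragraph above (illustrative values, not mandated).
SECTOR_BYTES = 512         # data capacity of one physical sector
SECTORS_PER_PAGE = 32      # physical sectors per physical programming unit (page)

page_bytes = SECTOR_BYTES * SECTORS_PER_PAGE
print(page_bytes)          # 16384 bytes, i.e. the 16-kilobyte page size mentioned above
```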
In one embodiment, a physical programming unit is the smallest unit for writing data synchronously in the memory module 122. For example, when a programming operation (also referred to as a write operation) is performed on a physical programming unit to write data into it, a plurality of memory cells in the physical programming unit can be programmed synchronously to store the corresponding data. For example, when programming a physical programming unit, a write voltage may be applied to it to change the threshold voltages of at least some of its memory cells. The threshold voltage of a memory cell may reflect the bit data stored by that memory cell.
In one embodiment, a physical erase unit may include a plurality of physical programming units. The physical programming units in a physical erase unit can be erased simultaneously. For example, when an erase operation is performed on a physical erase unit, an erase voltage may be applied to the physical programming units in that physical erase unit to change the threshold voltages of at least some of their memory cells. By performing an erase operation on a physical erase unit, the data stored in the physical erase unit can be erased. In one embodiment, a physical erase unit may be regarded as a physical block.
In one embodiment, the memory control circuit 23 can logically associate the physical units 301(1) to 301(A) and 301(A+1) to 301(B) with a data area 31 and a spare area 32, respectively. The physical units 301(1) to 301(A) in the data area 31 store data (also referred to as user data) from the host system 11. For example, any physical unit in the data area 31 may store valid data and/or invalid data. In contrast, none of the physical units 301(A+1) to 301(B) in the spare area 32 stores data (e.g., valid data).
In one embodiment, if a physical unit does not store valid data, that physical unit can be associated with the spare area 32. In addition, the physical units in the spare area 32 can be erased to remove the data they hold. In one embodiment, the physical units in the spare area 32 are also referred to as spare physical units. In one embodiment, the spare area 32 is also referred to as a free pool.
In one embodiment, when data is to be stored, the memory control circuit 23 can select one or more physical units from the spare area 32 and instruct the memory module 122 to store the data into the selected physical units. After the data is stored in a physical unit, that physical unit can be associated with the data area 31. In other words, a physical unit can alternate between the data area 31 and the spare area 32.
In one embodiment, the memory control circuit 23 can configure a plurality of logical units 302(1) to 302(C) to map the physical units in the data area 31 (i.e., physical units 301(1) to 301(A)). For example, a logical unit may correspond to a logical block address (LBA) or another logical management unit. One logical unit can be mapped to one or more physical units.
In one embodiment, if a physical unit is currently mapped by any logical unit, the memory control circuit 23 may determine that the data currently stored in the physical unit includes valid data. Conversely, if a physical unit is not currently mapped by any logical unit, the memory control circuit 23 may determine that the physical unit does not currently store any valid data.
In one embodiment, the memory control circuit 23 may record the mapping relationships between logical units and physical units in at least one management table (also referred to as a logical-to-physical mapping table). In one embodiment, the memory control circuit 23 instructs the memory module 122 to perform data reading, writing, or erasing operations according to the information in this management table (i.e., the logical-to-physical mapping table).
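As a minimal illustration of such a logical-to-physical mapping table, the sketch below models the table as a plain dictionary from logical units to physical units. All names (L2PTable, update, lookup) are hypothetical and not taken from the patent.

```python
# Minimal sketch of a logical-to-physical (L2P) mapping table; names are illustrative.
class L2PTable:
    def __init__(self):
        self._map = {}  # logical unit -> physical unit

    def update(self, logical_unit: int, physical_unit: int) -> None:
        # (Re)map a logical unit after its data is written to a physical unit.
        self._map[logical_unit] = physical_unit

    def lookup(self, logical_unit: int):
        # Returns None when the logical unit is mapped to no physical unit,
        # i.e. the logical unit currently holds no valid data.
        return self._map.get(logical_unit)

table = L2PTable()
table.update(7, 301)          # logical unit 7 now resides in physical unit 301
assert table.lookup(7) == 301
assert table.lookup(8) is None
```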
In one embodiment, the processor 111 may configure the shared cache area 101 in the memory 112. By utilizing the memory resources of the host system 11 (e.g., the shared cache area 101), the processor 111 provides an efficient temporary data exchange space for the storage device 12, which not only reduces the latency caused by frequently accessing the flash memory, but also significantly improves the performance of random read and write operations. Specifically, based on the data in the shared cache area 101, the storage device 12 can perform preset operations associated with the memory module 122, such as at least one of the following:
(1) Command scheduling optimization: the memory control circuit 23 can temporarily store multiple I/O requests from the host system 11 in the shared cache area 101 in advance, so as to optimize the execution order of the requests, reduce unnecessary addressing time and page switching, and improve the overall response speed.
(2) Mapping table management: the shared cache area 101 can be used to store and update a logical-to-physical mapping table (L2P mapping table), which speeds up the lookup of data locations and greatly improves efficiency, especially for random read operations. This may include sorting the mapping table entries cached in the shared cache area 101 according to a sorting algorithm to obtain a sorted queue.
(3) Data caching: when data is read from the memory module 122, if the data has already been loaded into the shared cache area 101, it can be retrieved directly from there, avoiding another access to the slower memory module 122 and thereby speeding up reads. Likewise, when new data is written, it can be temporarily stored in the shared cache area 101, after which a background optimization strategy decides when and how to persist it to the memory module 122.
(4) Metadata processing: besides user data, the shared cache area 101 can also be used to cache key metadata such as file allocation tables (FAT) and wear leveling information, to ensure efficient data management and access patterns even under high load.
In one embodiment, the memory control circuit 23 may establish a connection between the host system 11 and the storage device 12. For example, the memory control circuit 23 may perform a handshake operation with the host system 11. The handshake operation is used to exchange information related to establishing the connection between the host system 11 and the storage device 12, such as clock information and/or voltage information.
In one embodiment, the memory control circuit 23 may establish the connection between the host system 11 and the storage device 12 according to the result of the handshake operation. The memory control circuit 23 can then access the shared cache area 101 through this connection.
In one embodiment, the memory control circuit 23 may store management data in the shared cache area 101. During access to the memory module 122, the memory control circuit 23 may query or update (i.e., modify) the management data in the shared cache area 101. For example, the management data may include part of the data (also referred to as mapping table entries) in a logical-to-physical mapping table. A mapping table entry may carry mapping information (e.g., logical-to-physical mapping information). The mapping information may reflect a mapping relationship between at least one logical unit and at least one physical unit.
In one embodiment, when the host system 11 wants to read data belonging to a certain logical unit (also referred to as a first logical unit) from the storage device 12, the processor 111 can store read information corresponding to the first logical unit in the shared cache area 101. For example, the read information may include a read instruction that instructs reading of data belonging to the first logical unit. For example, the read instruction may include a random read instruction and/or a sequential read instruction. A random read instruction instructs reading of data from a single logical unit or a plurality of discontinuous logical units. A sequential read instruction instructs reading of data from a plurality of consecutive logical units. The memory control circuit 23 can access the shared cache area 101 to obtain the read information.
After obtaining the read information, the memory control circuit 23 can determine whether the mapping table entry related to the first logical unit is cached in the shared cache area 101. If the mapping table entry related to the first logical unit is not cached in the shared cache area 101, the memory control circuit 23 can load that mapping table entry from the memory module 122 into the shared cache area 101.
If the mapping table entry related to the first logical unit is already cached in the shared cache area 101 (or after the mapping table entry is loaded into the shared cache area 101 from the memory module 122), the memory control circuit 23 can query the mapping table entry in the shared cache area 101 to obtain the mapping relationship between the first logical unit and a physical unit (also referred to as a first physical unit) in the memory module 122. The memory control circuit 23 can then read data (also referred to as first data) from the first physical unit in the memory module 122 according to this mapping relationship. The memory control circuit 23 can then transmit the read first data back to the host system 11 in response to the read information.
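The read flow above can be summarized with a small sketch: consult the shared cache area for the mapping table entry, load it from the memory module on a miss, then read the mapped physical unit. All names here are hypothetical stand-ins (shared_cache for shared cache area 101, MAP and MEDIA for the mapping entries and data held by memory module 122); this is not the controller's actual implementation.

```python
# Sketch of the read path: mapping entry lookup with load-on-miss (names illustrative).
MAP = {7: 301}                 # on-media L2P entries: logical unit -> physical unit
MEDIA = {301: b"first data"}   # data stored per physical unit

shared_cache = {}              # mapping table entries currently cached in host memory

def read_logical_unit(lu: int) -> bytes:
    if lu not in shared_cache:         # entry not cached:
        shared_cache[lu] = MAP[lu]     # load it from the memory module first
    pu = shared_cache[lu]              # query the cached mapping table entry
    return MEDIA[pu]                   # read the first data from the physical unit

assert read_logical_unit(7) == b"first data"
```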
In one embodiment, when the host system 11 wants to store data (also referred to as second data) belonging to a certain logical unit (also referred to as a second logical unit) in the storage device 12, the processor 111 can store write information corresponding to the second logical unit in the shared cache area 101. For example, the write information may include a write instruction that instructs updating of the data belonging to the second logical unit. For example, the write instruction may include a random write instruction and/or a sequential write instruction. A random write instruction instructs updating of data belonging to a single logical unit or a plurality of discontinuous logical units. A sequential write instruction instructs updating of data belonging to a plurality of consecutive logical units. The memory control circuit 23 can access the shared cache area 101 to obtain the write information.
After obtaining the write information, the memory control circuit 23 can store the second data into a physical unit (also referred to as a second physical unit) in the memory module 122 according to the write information. In addition, the memory control circuit 23 can determine whether the mapping table entry related to the second logical unit is cached in the shared cache area 101. If it is not, the memory control circuit 23 can load that mapping table entry from the memory module 122 into the shared cache area 101. If the mapping table entry related to the second logical unit is already cached in the shared cache area 101 (or after it is loaded there from the memory module 122), the memory control circuit 23 can update the mapping table entry in the shared cache area 101 to establish the mapping relationship between the second logical unit and the second physical unit. The memory control circuit 23 can then notify the host system 11 of the completion of the write operation in response to the write information.
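A companion sketch for the write flow: the data is stored into a physical unit, and the cached mapping table entry is then updated so the logical unit maps to the newly written unit. As before, all names are illustrative assumptions rather than the patent's implementation.

```python
# Sketch of the write path: store data, then update the cached mapping entry.
MEDIA = {}                    # data stored per physical unit
shared_cache = {7: 301}       # cached mapping entry: logical unit 7 -> physical unit 301

def write_logical_unit(lu: int, data: bytes, new_pu: int) -> None:
    MEDIA[new_pu] = data       # store the second data in the second physical unit
    shared_cache[lu] = new_pu  # remap the logical unit to the new physical unit

write_logical_unit(7, b"second data", 302)
assert shared_cache[7] == 302  # the host is then notified that the write completed
```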
In one embodiment, the memory control circuit 23 may also store other types of management data in the shared cache area 101 (e.g., load them from the memory module 122 into the shared cache area 101). For example, the management data may include valid count management data, wear leveling management data, bad block management data, or the like; the invention is not limited in this regard. The valid count management data is used to manage the valid-data storage status of at least some of the physical units in the memory module 122. For example, the valid count management data may include a valid count corresponding to at least one physical unit in the memory module 122. The wear leveling management data is used to manage the wear status of at least some of the physical units in the memory module 122. For example, the wear leveling management data may include read counts, write counts, and/or erase counts corresponding to at least one physical unit in the memory module 122. The bad block management data is used to manage defective physical units (also known as bad blocks) in the memory module 122. For example, the bad block management data may be used to mark at least one physical unit in the memory module 122 as a bad block. The memory control circuit 23 can then access or manage the memory module 122 according to the management data in the shared cache area 101.
It should be noted that the shared cache area 101 occupies part of the storage space of the memory 112, thereby reducing the capacity of the memory 112 available to the host system 11 itself. For example, after the shared cache area 101 is configured, the remaining capacity of the memory 112, after deducting the shared cache area 101, is the memory space available to the host system 11 itself. Therefore, if the capacity of the shared cache area 101 is larger, the operating performance of the host system 11 may be reduced (because the memory space available to the host system 11 is reduced). However, if the capacity of the shared cache area 101 is too small, the performance of the storage device 12 may be reduced (because the memory space available to the storage device 12 is reduced).
According to the technical solution provided by the embodiments of the invention, fine-grained management of the mapping table entries residing in the shared cache area 101, including but not limited to sorting, adding, and removing them, can be realized while the shared cache area 101 keeps a small capacity (its capacity requirement may even be reduced). In this way, on the basis of ensuring that the operating performance of the host system 11 is not disturbed, the performance of the storage device 12 and of the whole storage system 10 is further enhanced, achieving an optimal balance between the processing capability of the host system 11 and the response speed of the storage device 12, so that the two can operate in an ideal state without affecting each other.
This solution is particularly suitable for scenarios that aim to improve random read/write performance. Conventionally, to maximize random read/write performance, all mapping table entries should in theory be loaded into the buffer memory 24. However, under the constraints of current product designs, the buffer memory 24 typically has a fixed capacity of, e.g., 256 KB, while the mapping table entries may amount to 1 GB or more (for example, for a storage device with 1 TB capacity). This makes it impossible to load all mapping table entries at once, so mapping table entries are frequently loaded in batches, which increases latency and affects the read speed of the storage device 12. By using the shared cache area 101 to manage these mapping table entries efficiently, not only is this challenge addressed, but the performance of the host system 11 and the storage device 12 can also be optimized independently and without interference.
In particular, this approach allows the storage system 10 to maintain high data access speeds and low latency while handling large numbers of mapping table entries, thereby ensuring its random read/write performance metrics. At the same time, it avoids problems such as increased cost, higher system complexity, and reduced stability that might be caused by adding hardware resources. In summary, this solution provides a feasible and efficient way to address the challenges of high-performance storage systems.
In short, according to the technical solution provided by the embodiments of the invention, the mapping table entries cached in the shared cache area 101 can be properly managed (e.g., sorted, added, and/or removed) while the capacity of the shared cache area 101 stays small (the capacity may even be reduced). Therefore, the performance of the storage device 12 or of the whole storage system 10 can be effectively improved while affecting the operating performance of the host system 11 itself as little as possible.
In one embodiment, one or more mapping table entries may be cached in the shared cache area 101. After the memory control circuit 23 caches at least one mapping table entry in the shared cache area 101, the processor 111 can track the type of the at least one mapping table entry cached in the shared cache area 101 to obtain a tracking result. In other words, the tracking result may reflect the type of the at least one mapping table entry currently cached in the shared cache area 101.
In an embodiment, the processor 111 may determine the priority of one of the at least one mapping table entry (also referred to as a target mapping table entry) according to the tracking result. For example, when the memory 112 and/or the shared cache area 101 is about to become, or has become, full, the priority may affect the order in which the target mapping table entry is retained in, or removed from, the shared cache area 101. The processor 111 may then either retain the target mapping table entry in the shared cache area 101 or remove it from the shared cache area 101, according to this priority.
In one embodiment, the processor 111 may obtain identification information corresponding to the target mapping table entry. The identification information may reflect the type of the target mapping table entry. The processor 111 may then determine the type of the target mapping table entry based on the identification information.
In one embodiment, when the target mapping table entry is cached in the shared cache area 101, identification information may be associated with the target mapping table entry according to the application program to which it belongs. For example, assuming the application to which the target mapping table entry belongs is application A, identification information A corresponding to application A may be associated with the target mapping table entry. Or, assuming the application to which the target mapping table entry belongs is application B, identification information B corresponding to application B may be associated with the target mapping table entry.
In one embodiment, the processor 111 may query a management table according to the identification information corresponding to the target mapping table entry to obtain a query result. The management table may record a plurality of types (also referred to as candidate types) respectively corresponding to a plurality of pieces of identification information (also referred to as candidate identification information). The processor 111 may then determine the type of the target mapping table entry from the candidate types based on the query result. For example, based on the query result, the processor 111 may take the candidate type in the management table that matches the identification information as the type of the target mapping table entry.
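The lookup just described can be sketched as a dictionary query: the management table maps each piece of candidate identification information to a candidate type, and the identification information attached to the target mapping table entry selects one of them. The table contents and names below are hypothetical.

```python
# Sketch of the type lookup via a management table (contents are illustrative).
MANAGEMENT_TABLE = {
    "id_A": "active_foreground",    # candidate identification info -> candidate type
    "id_B": "background",
    "id_C": "inactive_foreground",
}

def type_of(entry_id_info: str) -> str:
    # The candidate type matching the identification information becomes
    # the type of the target mapping table entry.
    return MANAGEMENT_TABLE[entry_id_info]

assert type_of("id_B") == "background"
```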
In one embodiment, after determining the type of the target mapping table entry, the processor 111 may determine the priority of the target mapping table entry according to its type. For example, different types of mapping table entries may have different priorities. For example, if the type of the target mapping table entry is a certain type (also referred to as a first type), the processor 111 may determine that the priority of the target mapping table entry is a certain priority (also referred to as a first-type priority). If the type of the target mapping table entry is another type (also referred to as a second type), the processor 111 may determine that the priority of the target mapping table entry is another priority (also referred to as a second-type priority). The first-type priority may be different from the second-type priority.
In one embodiment, assume the target mapping table entries include a first mapping table entry and a second mapping table entry, both cached in the shared cache area 101, where the type of the first mapping table entry is the first type and the type of the second mapping table entry is the second type. In one embodiment, the processor 111 may remove the second mapping table entry (of the second type), which has the second-type priority, from the shared cache area 101 in preference to the first mapping table entry (of the first type), which has the first-type priority.
In one embodiment, the type of a mapping table entry may reflect that the entry belongs to at least one of an active foreground application, an inactive foreground application, and a background application. In one embodiment, assume that the processor 111 is currently running applications A and B, where application A runs in the foreground of an operating system (OS) and application B runs in the background of the operating system. At this point, the processor 111 may classify application A as an active foreground application and application B as a background application. Those skilled in the art know how to run different applications in the foreground and background of an operating system, so the details are omitted here.
In one embodiment, assume that another application C is switched to run in the foreground of the operating system. In response, the processor 111 may classify application C as an active foreground application. At the same time, the processor 111 may reclassify (e.g., demote) application A, which previously was the active foreground application, as an inactive foreground application, while keeping application B as a background application.
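The reclassification described above amounts to demoting the old active foreground application when a new one takes its place, while background applications keep their class. The sketch below captures this under assumed names (app_class, switch_to_foreground); it is not code from the patent.

```python
# Sketch of reclassifying applications when the foreground app changes.
app_class = {"A": "active_foreground", "B": "background"}

def switch_to_foreground(app: str) -> None:
    for name, cls in app_class.items():
        if cls == "active_foreground":       # demote the previous foreground app
            app_class[name] = "inactive_foreground"
    app_class[app] = "active_foreground"     # promote the newly switched-in app

switch_to_foreground("C")
assert app_class == {"A": "inactive_foreground",
                     "B": "background",
                     "C": "active_foreground"}
```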
In an embodiment, the identification information corresponding to the target mapping table entry may reflect whether the target mapping table entry belongs to an active foreground application, an inactive foreground application, or a background application. In addition, when the type of the target mapping table entry changes, the processor 111 may update its identification information accordingly, so that the updated identification information reflects the current type of the target mapping table entry. For example, when the target mapping table entry switches from belonging to an inactive foreground application to belonging to an active foreground application, the processor 111 may update the identification information of the target mapping table entry accordingly, so that the updated identification information reflects that the entry currently belongs to an active foreground application.
In one embodiment, if the type of the target mapping table entry reflects that it belongs to an active foreground application (i.e., the target mapping table entry is used for data access by the active foreground application), the processor 111 may determine that the priority of the target mapping table entry is a certain priority (also referred to as a first priority). Or, if the type reflects that the entry belongs to an inactive foreground application (i.e., it is used for data access by the inactive foreground application), the processor 111 may determine its priority to be another priority (also referred to as a second priority). Or, if the type reflects that the entry belongs to a background application (i.e., it is used for data access by the background application), the processor 111 may determine its priority to be yet another priority (also referred to as a third priority).
In one embodiment, when the processor 111 wants to remove data from the shared cache area 101 to free up memory space, mapping table entries having the third priority may be removed from the shared cache area 101 in preference to those having the second priority, and entries having the second priority may be removed in preference to those having the first priority. In one embodiment, a mapping table entry having the first priority in the shared cache area 101 may also be set as non-removable until it no longer belongs to the active foreground application.
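As a sketch of this eviction order, assume each type maps to a numeric priority rank (1 for the first priority through 3 for the third priority) and the entry with the lowest-ranked type is evicted first. The tables and names are illustrative assumptions.

```python
# Sketch of priority-ordered eviction: third priority leaves before second, second before first.
PRIORITY = {"active_foreground": 1, "inactive_foreground": 2, "background": 3}

def next_victim(cached_entries: dict) -> str:
    # cached_entries: entry name -> entry type. Evict the entry whose type has
    # the numerically largest rank, i.e. the lowest priority.
    return max(cached_entries, key=lambda e: PRIORITY[cached_entries[e]])

cache = {"e1": "active_foreground", "e2": "background", "e3": "inactive_foreground"}
assert next_victim(cache) == "e2"   # the background entry is removed first
```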
In an embodiment, the processor 111 may, as far as possible, preferentially retain mapping table entries related to foreground applications (e.g., active and/or inactive foreground applications) in the shared cache area 101 and/or preferentially remove mapping table entries unrelated to foreground applications from the shared cache area 101. Thus, even though the capacity of the shared cache area 101 is limited (or even reduced), the performance of the host system 11 and the storage device 12 when executing foreground applications may be maintained (or even improved).
In one embodiment, the processor 111 may also configure multiple regions in the shared cache area 101 to store different types of mapping table entries in a sorted manner. This, too, can effectively improve the performance of the host system 11 and the storage device 12.
In one embodiment, the processor 111 may detect the data storage amount of at least one of the memory 112 and the shared cache area 101. For example, the data storage amount may reflect how much data is currently stored in at least one of the memory 112 and the shared cache area 101. In one embodiment, the processor 111 may detect whether the data storage amount reaches a critical value. When the data storage amount reaches the critical value, the processor 111 may retain the target mapping table entry in the shared cache area 101 or remove it from the shared cache area 101 according to the priority. However, if the data storage amount does not reach the critical value, the processor 111 may refrain from the above operation of removing mapping table entries from the shared cache area 101.
In one embodiment, when the data storage amount of at least one of the memory 112 and the shared cache area 101 is relatively large (e.g., reaches the critical value), the data that has the least influence on the operating performance of the host system 11 and the storage device 12 is removed first according to the priority, so that additional memory space can be freed for the host system 11 (and the storage device 12). Thus, the performance of the host system 11 and the storage device 12 can be improved.
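A minimal sketch of the threshold check follows, under assumed numbers: priority-based retention/removal runs only once the data storage amount reaches the critical value; below it, nothing is removed.

```python
# Sketch of threshold-triggered eviction (the numbers are illustrative assumptions).
THRESHOLD = 90   # critical value of the data storage amount (arbitrary units)

def maybe_evict(stored_amount: int, evict) -> bool:
    if stored_amount >= THRESHOLD:  # data storage amount reached the critical value:
        evict()                     # retain/remove entries according to priority
        return True
    return False                    # below the critical value: nothing is removed

assert maybe_evict(95, lambda: None) is True
assert maybe_evict(50, lambda: None) is False
```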
It should be noted that, in an embodiment, the types of the at least one mapping table entry cached in the shared cache area 101 may be configured or adjusted according to actual requirements, and are not limited to the active foreground application, inactive foreground application, and/or background application. In addition, in one embodiment, the priorities and sorting of different types of mapping table entries can also be adjusted according to actual requirements; the invention is not limited in this regard.
In one embodiment, when multiple mapping table entries of the same type and/or with the same priority in the shared cache area 101 are to be removed, the processor 111 may randomly remove at least one of them from the shared cache area 101. In one embodiment, by preferentially removing mapping table entries of a particular type and/or with a particular priority, the performance of the host system 11 and the storage device 12 may be maintained or even improved.
In one embodiment, when multiple mapping table entries of the same type and/or with the same priority in the shared cache area 101 are to be removed, the processor 111 may determine which of them to remove from the shared cache area 101 according to the frequency, the number of times, or the points in time at which they are respectively accessed (e.g., read, queried, or used). For example, the processor 111 may compare the frequency, number of times, or points in time at which the mapping table entries are accessed to obtain a comparison result. The processor 111 may then determine, based on the comparison result, which of the mapping table entries to remove from the shared cache area 101 first. For example, the processor 111 may preferentially remove the mapping table entries that, as shown by the comparison result, are accessed relatively infrequently and/or were last accessed at a point in time relatively far from the current system time. These evaluation factors may be considered individually or in combination when selecting which mapping table entries to remove first. This helps ensure that the performance of the host system 11 and the storage device 12 can be maintained or even improved after some mapping table entries are removed from the shared cache area 101. In one embodiment, the processor 111 may also use other sorting algorithms to select, from the mapping table entries of the same type and/or with the same priority, one or more entries to remove first, depending on actual requirements.
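A sketch of this tie-break follows: among entries of the same type and priority, the access count and last-access time are compared, and the coldest entry is evicted first. The field names and the exact scoring rule are illustrative assumptions, not the patent's algorithm.

```python
# Sketch of tie-breaking among same-priority entries by access frequency and recency.
from dataclasses import dataclass

@dataclass
class EntryStats:
    access_count: int    # how many times the entry has been accessed
    last_access: float   # time point of the most recent access

def coldest(candidates: dict) -> str:
    # Prefer removing the entry accessed the fewest times; break remaining
    # ties by the oldest last-access time.
    return min(candidates, key=lambda e: (candidates[e].access_count,
                                          candidates[e].last_access))

same_priority = {
    "e1": EntryStats(access_count=12, last_access=100.0),
    "e2": EntryStats(access_count=3,  last_access=250.0),
    "e3": EntryStats(access_count=3,  last_access=80.0),
}
assert coldest(same_priority) == "e3"  # fewest accesses and least recently used
```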
FIG. 4 is a flow chart of a shared cache data management method based on type tracking according to an embodiment of the present invention. Referring to FIG. 4, in step S401, a shared cache area is configured in a memory of a host system, wherein a storage device performs a preset operation based on data cached in the shared cache area. In step S402, the type of at least one mapping table entry cached in the shared cache area is tracked to obtain a tracking result. In step S403, the priority of a target mapping table entry among the at least one mapping table entry is determined according to the tracking result. In step S404, the target mapping table entry is retained in the shared cache area or removed from the shared cache area according to the priority.
FIG. 5 is a flow chart of a shared cache data management method based on type tracking according to an embodiment of the present invention. Referring to FIG. 5, in step S501, the type of a target mapping table entry currently cached in the shared cache area is determined. In step S502, it is determined whether the target mapping table entry belongs to an active foreground application. If so, in step S503, the priority of the target mapping table entry is determined to be the first priority. If not, in step S504, it is determined whether the target mapping table entry belongs to an inactive foreground application. If so, in step S505, the priority of the target mapping table entry is determined to be the second priority. Otherwise, in step S506, the priority of the target mapping table entry is determined to be the third priority.
The steps in FIG. 4 and FIG. 5 have been described in detail above and are not repeated here. It should be noted that each step in FIG. 4 and FIG. 5 may be implemented as a plurality of program codes or as circuits; the invention is not limited in this regard. In addition, the methods of FIG. 4 and FIG. 5 may be used together with the above exemplary embodiments or used alone; the invention is not limited in this regard either.
In summary, the shared cache data management method based on type tracking and the storage system provided by the embodiments of the invention can properly manage (e.g., sort, add, and/or remove) the mapping table entries cached in the shared cache area while the shared cache area configured in the host system for the storage device keeps a small capacity (the capacity may even be reduced). Therefore, the performance of the storage device or of the whole storage system can be effectively improved while affecting the operating performance of the host system as little as possible.
In one embodiment, the management and/or sorting mechanism described above for mapping table entries may also be used to manage and/or sort other types of data. For example, in one embodiment, the mapping table entries may be replaced by various instructions, valid count management data, wear leveling management data, bad block management data, or other types of custom data; the details are not repeated here.
It should be noted that the above embodiments are merely intended to illustrate the technical solution of the invention, not to limit it. Although the invention has been described in detail with reference to the above embodiments, those skilled in the art should understand that the technical solutions described in the above embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the invention.

Claims (16)

1. A shared cache data management method based on type tracking, for use in a storage system, the storage system comprising a host system and a storage device, the host system being connected to the storage device, the method comprising:
configuring a shared cache area in a memory of the host system, wherein the storage device performs a preset operation based on data cached in the shared cache area;
tracking a type of at least one mapping table entry cached in the shared cache area to obtain a tracking result;
determining, according to the tracking result, a priority of a target mapping table entry among the at least one mapping table entry; and
according to the priority, retaining the target mapping table entry in the shared cache area or removing the target mapping table entry from the shared cache area.
2. The shared cache data management method based on type tracking according to claim 1, wherein the step of tracking the type of the at least one mapping table entry cached in the shared cache area comprises:
obtaining identification information corresponding to the target mapping table entry; and
determining the type of the target mapping table entry according to the identification information.
3. The shared cache data management method based on type tracking according to claim 2, wherein the step of determining the type of the target mapping table entry according to the identification information comprises:
querying a management table according to the identification information to obtain a query result, wherein the management table records a plurality of candidate types respectively corresponding to a plurality of pieces of candidate identification information; and
determining the type of the target mapping table entry from the plurality of candidate types according to the query result.
4. The shared cache data management method based on type tracking according to claim 1, wherein the step of determining, according to the tracking result, the priority of the target mapping table entry among the at least one mapping table entry comprises:
if the type of the target mapping table entry is a first type, determining the priority of the target mapping table entry to be a first-class priority; and
if the type of the target mapping table entry is a second type, determining the priority of the target mapping table entry to be a second-class priority, wherein the first-class priority is different from the second-class priority.
5. The shared cache data management method based on type tracking according to claim 4, wherein the target mapping table entry comprises a first mapping table entry and a second mapping table entry, the type of the first mapping table entry is the first type, the type of the second mapping table entry is the second type, and the step of retaining the target mapping table entry in the shared cache area or removing the target mapping table entry from the shared cache area according to the priority comprises:
removing the second mapping table entry having the second-class priority from the shared cache area in preference to the first mapping table entry having the first-class priority.
6. The shared cache data management method based on type tracking according to claim 1, wherein the type of the at least one mapping table entry reflects that the at least one mapping table entry belongs to at least one of an active foreground application, an inactive foreground application, and a background application.
7. The shared cache data management method based on type tracking according to claim 6, wherein the step of determining, according to the tracking result, the priority of the target mapping table entry among the at least one mapping table entry comprises:
if the type of the target mapping table entry reflects that the target mapping table entry belongs to the active foreground application, determining the priority of the target mapping table entry to be a first priority;
if the type of the target mapping table entry reflects that the target mapping table entry belongs to the inactive foreground application, determining the priority of the target mapping table entry to be a second priority; and
if the type of the target mapping table entry reflects that the target mapping table entry belongs to the background application, determining the priority of the target mapping table entry to be a third priority,
wherein when data is to be removed from the shared cache area, a mapping table entry having the third priority is removed from the shared cache area in preference to a mapping table entry having the second priority, and the mapping table entry having the second priority is removed from the shared cache area in preference to a mapping table entry having the first priority.
8. The shared cache data management method based on type tracking according to claim 1, wherein the step of retaining the target mapping table entry in the shared cache area or removing the target mapping table entry from the shared cache area according to the priority comprises:
detecting a data storage amount of at least one of the memory and the shared cache area; and
when the data storage amount reaches a critical value, retaining the target mapping table entry in the shared cache area or removing the target mapping table entry from the shared cache area according to the priority.
9. A storage system, comprising:
a host system; and
a storage device, connected to the host system,
wherein the host system is configured to:
configure a shared cache area in a memory of the host system, wherein the storage device performs a preset operation based on data cached in the shared cache area;
track a type of at least one mapping table entry cached in the shared cache area to obtain a tracking result;
determine, according to the tracking result, a priority of a target mapping table entry among the at least one mapping table entry; and
according to the priority, retain the target mapping table entry in the shared cache area or remove the target mapping table entry from the shared cache area.
10. The storage system according to claim 9, wherein the operation of tracking the type of the at least one mapping table entry cached in the shared cache area comprises:
obtaining identification information corresponding to the target mapping table entry; and
determining the type of the target mapping table entry according to the identification information.
11. The storage system according to claim 10, wherein the operation of determining the type of the target mapping table entry according to the identification information comprises:
querying a management table according to the identification information to obtain a query result, wherein the management table records a plurality of candidate types respectively corresponding to a plurality of pieces of candidate identification information; and
determining the type of the target mapping table entry from the plurality of candidate types according to the query result.
12. The storage system according to claim 9, wherein the operation of determining, according to the tracking result, the priority of the target mapping table entry among the at least one mapping table entry comprises:
if the type of the target mapping table entry is a first type, determining the priority of the target mapping table entry to be a first-class priority; and
if the type of the target mapping table entry is a second type, determining the priority of the target mapping table entry to be a second-class priority, wherein the first-class priority is different from the second-class priority.
13. The storage system according to claim 12, wherein the target mapping table entry comprises a first mapping table entry and a second mapping table entry, the type of the first mapping table entry is the first type, the type of the second mapping table entry is the second type, and the operation of retaining the target mapping table entry in the shared cache area or removing the target mapping table entry from the shared cache area according to the priority comprises:
when data is to be removed from the shared cache area, removing the second mapping table entry having the second-class priority from the shared cache area in preference to the first mapping table entry having the first-class priority.
14. The storage system according to claim 9, wherein the type of the at least one mapping table entry reflects that the at least one mapping table entry belongs to at least one of an active foreground application, an inactive foreground application, and a background application.
15. The storage system according to claim 14, wherein the operation of determining, according to the tracking result, the priority of the target mapping table entry among the at least one mapping table entry comprises:
if the type of the target mapping table entry reflects that the target mapping table entry belongs to the active foreground application, determining the priority of the target mapping table entry to be a first priority;
if the type of the target mapping table entry reflects that the target mapping table entry belongs to the inactive foreground application, determining the priority of the target mapping table entry to be a second priority; and
if the type of the target mapping table entry reflects that the target mapping table entry belongs to the background application, determining the priority of the target mapping table entry to be a third priority,
wherein when data is to be removed from the shared cache area, a mapping table entry having the third priority is removed from the shared cache area in preference to a mapping table entry having the second priority, and the mapping table entry having the second priority is removed from the shared cache area in preference to a mapping table entry having the first priority.
16. The storage system according to claim 9, wherein the operation of retaining the target mapping table entry in the shared cache area or removing the target mapping table entry from the shared cache area according to the priority comprises:
detecting a data storage amount of at least one of the memory and the shared cache area; and
when the data storage amount reaches a critical value, retaining the target mapping table entry in the shared cache area or removing the target mapping table entry from the shared cache area according to the priority.
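Claims 1–5 (and their system counterparts, claims 9–13) describe a retention/eviction policy in which each cached mapping table entry is classified by type — identification information is looked up in a management table of candidate types — and the entry whose type maps to the lower-class priority is evicted first. The following is a minimal sketch of that flow under stated assumptions: the names `ManagementTable`, `SharedCache`, `MappingEntry`, `EntryType`, and the numeric priority values are illustrative inventions, not identifiers from the patent.

```python
from dataclasses import dataclass, field
from enum import IntEnum


class EntryType(IntEnum):
    """Candidate types recorded in the management table (illustrative)."""
    FIRST_TYPE = 1
    SECOND_TYPE = 2


# Lower numeric value = higher retention priority (kept in the cache longer).
PRIORITY_OF_TYPE = {
    EntryType.FIRST_TYPE: 1,   # "first-class priority"  -> retained
    EntryType.SECOND_TYPE: 2,  # "second-class priority" -> evicted first
}


@dataclass
class MappingEntry:
    ident: str  # identification information (e.g., an application or stream id)
    logical_to_physical: dict = field(default_factory=dict)


class ManagementTable:
    """Maps candidate identification information to candidate types (claims 2-3)."""

    def __init__(self, records: dict[str, EntryType]):
        self._records = records

    def lookup(self, ident: str) -> EntryType:
        # Unknown identifiers fall back to the lower-priority type here;
        # the patent does not specify this fallback.
        return self._records.get(ident, EntryType.SECOND_TYPE)


class SharedCache:
    """Shared cache area holding mapping table entries (claim 1)."""

    def __init__(self, table: ManagementTable):
        self._table = table
        self._entries: list[MappingEntry] = []

    def insert(self, entry: MappingEntry) -> None:
        self._entries.append(entry)

    def evict_one(self) -> MappingEntry | None:
        """Remove the entry whose tracked type yields the lowest retention priority."""
        if not self._entries:
            return None
        victim = max(
            self._entries,
            key=lambda e: PRIORITY_OF_TYPE[self._table.lookup(e.ident)],
        )
        self._entries.remove(victim)
        return victim
```

The key design point the claims hinge on is that eviction is keyed to the tracked type rather than to recency alone, so a second-type entry is removed in preference to a first-type entry regardless of access order.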
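Claims 6–7 (and 14–15) refine the type into three application states — active foreground, inactive foreground, and background — with eviction proceeding background first, then inactive foreground, then active foreground. A sketch of that ordering, again with invented names:

```python
from enum import IntEnum


class AppState(IntEnum):
    """Higher value is removed from the shared cache first (claims 6-7)."""
    ACTIVE_FOREGROUND = 1    # first priority  - evicted last
    INACTIVE_FOREGROUND = 2  # second priority
    BACKGROUND = 3           # third priority  - evicted first


def eviction_order(entries: list[tuple[str, AppState]]) -> list[str]:
    """Return entry identifiers in the order they would be removed."""
    ranked = sorted(entries, key=lambda e: e[1], reverse=True)
    return [ident for ident, _state in ranked]


# Example: background entries go first, active-foreground entries last.
order = eviction_order([
    ("camera_app", AppState.ACTIVE_FOREGROUND),
    ("music_app", AppState.INACTIVE_FOREGROUND),
    ("sync_daemon", AppState.BACKGROUND),
])
assert order == ["sync_daemon", "music_app", "camera_app"]
```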
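Claims 8 and 16 gate the retain-or-remove decision on fill level: the host detects the data storage amount of the memory and/or the shared cache area and applies the priority-based removal only once that amount reaches a critical value. A minimal sketch of that trigger; the 90% ratio and the `used_bytes`/`capacity_bytes` parameters are assumptions for illustration, as the patent leaves the critical value unspecified:

```python
THRESHOLD_RATIO = 0.9  # assumed critical value


def on_storage_update(cache, used_bytes: int, capacity_bytes: int) -> None:
    """Check the data storage amount; evict by priority only at the threshold.

    `cache` is expected to expose evict_one(), as in the SharedCache
    sketch above.
    """
    if capacity_bytes > 0 and used_bytes / capacity_bytes >= THRESHOLD_RATIO:
        cache.evict_one()  # lowest-priority entry is removed first
    # Below the critical value, all target mapping table entries are retained.
```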
CN202510045450.0A 2025-01-13 2025-01-13 Shared buffer data management method and storage system based on type tracking Active CN119440427B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510045450.0A CN119440427B (en) 2025-01-13 2025-01-13 Shared buffer data management method and storage system based on type tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510045450.0A CN119440427B (en) 2025-01-13 2025-01-13 Shared buffer data management method and storage system based on type tracking

Publications (2)

Publication Number Publication Date
CN119440427A true CN119440427A (en) 2025-02-14
CN119440427B CN119440427B (en) 2025-06-10

Family

ID=94518332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510045450.0A Active CN119440427B (en) 2025-01-13 2025-01-13 Shared buffer data management method and storage system based on type tracking

Country Status (1)

Country Link
CN (1) CN119440427B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120324196A1 (en) * 2011-06-20 2012-12-20 Microsoft Corporation Memory manager with enhanced application metadata
CN112119384A (en) * 2018-05-07 2020-12-22 苹果公司 Techniques for managing memory allocation within a storage device to improve operation of camera applications
CN114741336A (en) * 2022-06-09 2022-07-12 荣耀终端有限公司 Host side buffer adjustment method in memory, electronic device and chip system
US20240004811A1 (en) * 2022-07-04 2024-01-04 SK Hynix Inc. Memory controller, storage device, and method of operating the same
CN118092807A (en) * 2024-03-12 2024-05-28 合肥开梦科技有限责任公司 Cache space allocation method and memory storage device

Also Published As

Publication number Publication date
CN119440427B (en) 2025-06-10

Similar Documents

Publication Publication Date Title
US11232041B2 (en) Memory addressing
US8380945B2 (en) Data storage device, memory system, and computing system using nonvolatile memory device
US8892814B2 (en) Data storing method, and memory controller and memory storage apparatus using the same
TW201842444A (en) Garbage collection
US20130013853A1 (en) Command executing method, memory controller and memory storage apparatus
US8037236B2 (en) Flash memory writing method and storage system and controller using the same
CN111694510B (en) Data storage device and data processing method
CN108733577B (en) Memory management method, memory control circuit unit, and memory storage device
TWI829363B (en) Data processing method and the associated data storage device
CN112230849B (en) Memory control method, memory storage device and memory controller
CN119440427B (en) Shared buffer data management method and storage system based on type tracking
TWI808011B (en) Data processing method and the associated data storage device
CN119473163B (en) Shared buffer capacity adjustment method and storage system based on type tracking
CN119473164B (en) Shared buffer data management method and storage system for dynamically determining management policy
CN119440428B (en) Shared buffer data management method and storage system based on counting information
CN119473167B (en) Method for adjusting capacity of shared buffer area and storage system
CN119473166B (en) Shared buffer data management method and storage system based on verification information
CN119440429B (en) Shared cache capacity adjustment method and storage system based on counting information
CN118331511B (en) Memory management method and memory controller
TWI814590B (en) Data processing method and the associated data storage device
CN102456401A (en) Block management method, memory controller and memory storage device
CN120010777A (en) Memory management method and memory device
CN118747059A (en) Memory management method and storage device
CN114503086A (en) Adaptive Wear Leveling Method and Algorithm
CN117806534A (en) Data processing method and corresponding data storage device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant