
CN119473167A - Shared cache capacity adjustment method and storage system - Google Patents

Shared cache capacity adjustment method and storage system

Info

Publication number
CN119473167A
Authority
CN
China
Prior art keywords
mapping table
capacity
target
table item
specific
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202510045463.8A
Other languages
Chinese (zh)
Other versions
CN119473167B (en)
Inventor
王智麟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Core Storage Electronic Ltd
Original Assignee
Hefei Core Storage Electronic Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Core Storage Electronic Ltd
Priority to CN202510045463.8A
Publication of CN119473167A
Application granted
Publication of CN119473167B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604 Improving or facilitating administration, e.g. storage management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638 Organizing or formatting or addressing of data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0673 Single storage device
    • G06F 3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract


The present invention provides a shared cache capacity adjustment method and storage system. The method includes: configuring a shared cache in the memory of a host system, where the storage device performs a preset operation based on data cached in the shared cache; detecting whether a specific mapping table item corresponding to a specific application exists in at least one mapping table item currently cached in the shared cache; if the specific mapping table item exists in the at least one mapping table item currently cached in the shared cache, determining a target capacity from multiple candidate capacities; and adjusting the capacity of the shared cache according to the target capacity. In this way, the performance of the storage device or the entire storage system can be effectively improved while affecting the operating performance of the host system itself as little as possible.

Description

Method for adjusting capacity of shared buffer area and storage system
Technical Field
The present invention relates to the field of storage technologies, and in particular, to a method for adjusting a capacity of a shared buffer and a storage system.
Background
NAND flash (flash) is a nonvolatile memory technology that is widely used in various memory devices. It stores charge by using floating gate transistors, each representing a memory cell. NAND flash memory cells typically organize data in pages, each page containing multiple bytes, which in turn make up a block. The data is read and programmed in units of pages, and the erase operation is performed in units of blocks. This organization makes NAND flash very suitable for mass storage and has a high writing speed.
However, as the storage capacity of storage devices keeps increasing, the amount of management data needed to manage the storage device keeps increasing as well, with the result that the small-capacity memory inside the storage device itself is no longer sufficient to hold it.
Disclosure of Invention
The invention provides a method for adjusting the capacity of a shared buffer and a storage system, which can effectively improve the performance of a storage device or an entire storage system while affecting the operating performance of the host system itself as little as possible.
An embodiment of the invention provides a method for adjusting capacity of a shared buffer area, which is used for a storage system, wherein the storage system comprises a host system and a storage device, the host system is connected to the storage device, the method for adjusting the capacity of the shared buffer area comprises the steps of configuring the shared buffer area in a memory of the host system, wherein the storage device is used for executing preset operation based on data cached in the shared buffer area, detecting whether a specific mapping table item corresponding to a specific application program exists in at least one mapping table item currently cached in the shared buffer area, determining target capacity from a plurality of candidate capacities if the specific mapping table item corresponding to the specific application program exists in the at least one mapping table item currently cached in the shared buffer area, and adjusting the capacity of the shared buffer area according to the target capacity.
The embodiment of the invention also provides a storage system, which comprises a host system and a storage device. The storage device is connected to the host system. The host system is used for configuring a shared buffer area in a memory of the host system, wherein the storage device is used for executing preset operation based on data cached in the shared buffer area, detecting whether a specific mapping table item corresponding to a specific application program exists in at least one mapping table item currently cached in the shared buffer area, determining a target capacity from a plurality of candidate capacities if the specific mapping table item corresponding to the specific application program exists in at least one mapping table item currently cached in the shared buffer area, and adjusting the capacity of the shared buffer area according to the target capacity.
Based on the above, a shared buffer may be configured in the memory of the host system, and the storage device may perform a preset operation based on the data buffered in the shared buffer. When the storage system is in operation, it can be detected whether a specific mapping table entry corresponding to a specific application program exists in the at least one mapping table entry currently cached in the shared buffer. In particular, if such a specific mapping table entry exists in the at least one mapping table entry currently cached in the shared buffer, a target capacity may be determined from a plurality of candidate capacities, and the capacity of the shared buffer may be automatically adjusted according to the target capacity. Therefore, the performance of the storage device or the entire storage system can be effectively improved while affecting the operating performance of the host system itself as little as possible.
Drawings
FIG. 1 is a schematic diagram of a storage system shown in accordance with an embodiment of the present invention;
FIG. 2 is a schematic diagram of a memory controller shown according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a managed memory module shown in accordance with an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a method for adjusting the capacity of a shared buffer according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments of the present invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings and the description to refer to the same or like parts.
FIG. 1 is a schematic diagram of a memory system according to an embodiment of the present invention. Referring to fig. 1, a storage system (also referred to as a data storage system) 10 includes a host system 11 and a storage device 12. The storage device 12 may be connected to the host system 11 and may be used to store data from the host system 11. For example, the host system 11 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, an industrial computer, a game machine, a server, or a computer system provided in a specific carrier (e.g., a vehicle, an aircraft, or a ship), or the like, and the type of the host system 11 is not limited thereto. Further, the storage device 12 may include a solid state disk, a USB flash drive, a memory card, or other type of non-volatile storage device.
Host system 11 includes a processor 111 and a memory 112. The processor 111 is responsible for the overall or partial operation of the host system 11. For example, the processor 111 may include a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU) or other programmable general purpose or special purpose microprocessor, a digital signal processor (Digital Signal Processor, DSP), a programmable controller, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic device (Programmable Logic Device, PLD), or other similar device or combination of devices.
The memory 112 is connected to the processor 111 and is used for buffering data. For example, the memory 112 may include random access memory (Random Access Memory, RAM) or similar volatile storage. It should be noted that the memory 112 is disposed in the host system 11 (for example, disposed on a motherboard of the host system 11 or directly disposed in the processor 111), and is not disposed in the storage device 12.
The memory device 12 includes a connection interface 121, a memory module 122, and a memory controller 123. The connection interface 121 is used to connect the storage device 12 to the host system 11. For example, connection interface 121 may support embedded multimedia card (embedded Multi-Media Card, eMMC), universal flash storage (Universal Flash Storage, UFS), peripheral component interconnect express (Peripheral Component Interconnect Express, PCI Express), non-volatile memory express (Non-Volatile Memory Express, NVM Express), serial advanced technology attachment (Serial Advanced Technology Attachment, SATA), universal serial bus (Universal Serial Bus, USB), or other types of connection interface standards. Accordingly, storage device 12 may communicate (e.g., exchange signals, instructions, and/or data) with host system 11 via connection interface 121.
The memory module 122 is used for storing data. For example, the memory module 122 may include one or more rewritable non-volatile memory modules. Each of the rewritable non-volatile memory modules may include one or more memory cell arrays. Memory cells in a memory cell array store data in the form of voltages (also referred to as threshold voltages). For example, the memory module 122 may include a single-level cell (Single Level Cell, SLC) NAND-type flash memory module, a multi-level cell (Multi Level Cell, MLC) NAND-type flash memory module, a triple-level cell (Triple Level Cell, TLC) NAND-type flash memory module, a quad-level cell (Quad Level Cell, QLC) NAND-type flash memory module, and/or other memory modules having the same or similar characteristics.
The memory controller 123 is connected to the connection interface 121 and the memory module 122. Memory controller 123 may be considered the control core of storage device 12 and may be used to control storage device 12. For example, the memory controller 123 may be used to control or manage the operation of the storage device 12 in whole or in part. For example, the memory controller 123 may include a central processing unit (Central Processing Unit, CPU), or other programmable general purpose or special purpose microprocessor, a digital signal processor (Digital Signal Processor, DSP), a programmable controller, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic device (Programmable Logic Device, PLD), or other similar device or combination of devices. In an embodiment, the memory controller 123 may comprise a flash memory controller.
Memory controller 123 may send a sequence of instructions to memory module 122 to access memory module 122. For example, memory controller 123 may send a sequence of write instructions to memory module 122 to instruct memory module 122 to store data in a particular memory location. For example, memory controller 123 can send a sequence of read instructions to memory module 122 to instruct memory module 122 to read data from a particular memory location. For example, memory controller 123 can send a sequence of erase instructions to memory module 122 to instruct memory module 122 to erase data stored in a particular memory cell. In addition, memory controller 123 may send other types of instruction sequences to memory module 122 to instruct memory module 122 to perform other types of operations, as the invention is not limited. The memory module 122 may receive a sequence of instructions from the memory controller 123 and access memory locations within the memory module 122 according to the sequence of instructions.
FIG. 2 is a schematic diagram of a memory controller according to an embodiment of the invention. Referring to fig. 1 and 2, the memory controller 123 includes a host interface 21, a memory interface 22, and a memory control circuit 23. The host interface 21 is used to connect to the host system 11 through the connection interface 121 to communicate with the host system 11. The memory interface 22 is used to connect to the memory module 122 to access the memory module 122.
The memory control circuit 23 is connected to the host interface 21 and the memory interface 22. The memory control circuit 23 may be used to control or manage the operation of the memory controller 123 in whole or in part. For example, the memory control circuit 23 may communicate with the host system 11 through the host interface 21 and access the memory module 122 through the memory interface 22. For example, the memory control circuit 23 may include a control circuit such as an embedded controller or a microcontroller. In the following embodiment, the explanation of the memory control circuit 23 is equivalent to the explanation of the memory controller 123.
In one embodiment, memory controller 123 may also include buffer memory 24. The buffer memory 24 is connected to the memory control circuit 23 and is used for buffering data. For example, buffer memory 24 may be used to buffer instructions from host system 11, data from host system 11, and/or data from memory module 122.
In an embodiment, the memory controller 123 may also include a decoding circuit 25. The decoding circuit 25 is connected to the memory control circuit 23 and performs encoding and decoding of data to ensure the correctness of the data. For example, the decoding circuit 25 may support various encoding/decoding algorithms such as low-density parity-check (LDPC) codes, BCH codes, Reed-Solomon (RS) codes, and exclusive-OR (XOR) codes. In one embodiment, the memory controller 123 may also include various circuit modules of other types (e.g., power management circuits, etc.), which are not limiting.
FIG. 3 is a schematic diagram illustrating management of the memory module according to an embodiment of the invention. Referring to fig. 1 to 3, the memory module 122 includes a plurality of physical units 301 (1) to 301 (B). Each physical unit comprises a plurality of memory cells and is used to store data in a non-volatile manner.
In one embodiment, a physical unit may include one or more physical erase units. Furthermore, one physical unit may comprise a plurality of sub-physical units. For example, a sub-physical unit may include one or more physical programming units.
In one embodiment, a physical programming unit may include a plurality of physical sectors (sectors). For example, a physical sector may have a data size of 512 bytes, and a physical programming unit may include 32 physical sectors. However, the data capacity of one physical sector and/or the total number of physical sectors included in one physical programming unit can be adjusted according to practical requirements, and the present invention is not limited thereto. In one embodiment, a physical programming unit may be considered a physical page. For example, the storage capacity of one physical programming unit may be 16 kilobytes, and the present invention is not limited thereto.
In one embodiment, one physical programming unit is the minimum unit for writing data synchronously in the memory module 122. For example, when performing a programming operation (also referred to as a write operation) on a physical programming unit to write data into the physical programming unit, a plurality of memory cells in the physical programming unit may be programmed synchronously to store corresponding data. For example, when programming a physical programming unit, a write voltage may be applied to the physical programming unit to change the threshold voltage of at least some of the memory cells in the physical programming unit. For example, the threshold voltage of a memory cell may reflect the bit data stored by the memory cell.
In one embodiment, a physical erase unit may include a plurality of physical programming units. The multiple physical programming units in a physical erase unit can be erased simultaneously. For example, when performing an erase operation on a physical erase unit, an erase voltage may be applied to the plurality of physical programming units in the physical erase unit to change the threshold voltages of at least some of the memory cells in those physical programming units. By performing an erase operation on a physical erase unit, the data stored in the physical erase unit may be erased. In one embodiment, a physical erase unit may be considered a physical block.
In one embodiment, the memory control circuit 23 can logically associate the physical units 301 (1) to 301 (A) and 301 (A+1) to 301 (B) to the data area 31 and the idle area 32, respectively. The physical units 301 (1) to 301 (A) in the data area 31 all store data (also referred to as user data) from the host system 11. For example, any physical unit in the data area 31 may store valid (valid) data and/or invalid (invalid) data. In addition, none of the physical units 301 (A+1) to 301 (B) in the idle area 32 stores data (e.g., valid data).
In one embodiment, if a certain physical unit does not store valid data, the physical unit may be associated with the idle area 32. In addition, the physical units in the idle area 32 may be erased to erase the data in those physical units. In one embodiment, the physical units in the idle area 32 are also referred to as idle physical units. In one embodiment, the idle area 32 is also referred to as a free pool (free pool).
In one embodiment, when data is to be stored, the memory control circuit 23 may select one or more physical units from the idle area 32 and instruct the memory module 122 to store the data in the selected physical units. After storing the data in the physical unit, the physical unit may be associated with the data area 31. In other words, one or more physical units may be used alternately between the data area 31 and the idle area 32.
In one embodiment, the memory control circuit 23 may configure a plurality of logic units 302 (1) -302 (C) to map physical units (i.e., physical units 301 (1) -301 (A)) in the data area 31. For example, a logical unit may correspond to a logical block address (Logical Block Address, LBA) or other logical management unit. One logical unit may be mapped to one or more physical units.
In one embodiment, if a physical unit is currently mapped by any logical unit, the memory control circuit 23 may determine that the data currently stored in the physical unit includes valid data. Conversely, if a physical unit is not currently mapped by any logical unit, the memory control circuit 23 may determine that the physical unit does not currently store any valid data.
In one embodiment, the memory control circuit 23 may record the mapping relationship between the logical unit and the physical unit in at least one management table (also referred to as a logical-to-physical mapping table). In one embodiment, the memory control circuit 23 instructs the memory module 122 to perform data reading, writing or erasing operations according to the information in the management table (i.e. the logical-to-physical mapping table).
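For illustration only, the following sketch shows how a logical-to-physical mapping table of the kind just described can be pictured; it is an assumed example, and names such as l2p_table are invented rather than taken from the embodiment.

```python
# Illustrative only: a logical-to-physical mapping table keyed by logical unit.
# Names such as l2p_table are invented and not part of the embodiment.

l2p_table = {}          # logical unit -> physical unit (absent means unmapped)

def write(logical_unit, physical_unit):
    # Storing data re-maps the logical unit to the newly programmed physical unit.
    l2p_table[logical_unit] = physical_unit

def read(logical_unit):
    # A read first queries the mapping table to find the physical unit to access.
    return l2p_table.get(logical_unit)           # None -> no valid data mapped

def holds_valid_data(physical_unit):
    # A physical unit holds valid data only if some logical unit still maps to it.
    return physical_unit in l2p_table.values()

write(0, 17)             # logical unit 0 now maps to physical unit 17
print(read(0), holds_valid_data(17), holds_valid_data(3))   # 17 True False
```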
In one embodiment, the processor 111 may configure the shared cache 101 in the memory 112. By utilizing the memory resources of host system 11 (e.g., shared cache area 101), processor 111 provides an efficient temporary data exchange space for storage device 12, which not only reduces the latency associated with frequent accesses to flash memory, but also significantly improves the performance of random read and write operations.
Specifically, based on the data in the shared cache region 101, the storage device 12 is capable of performing a preset operation associated with the memory module 122, such as at least one of the following. (1) Command scheduling optimization: the memory control circuit 23 may temporarily store multiple I/O requests from the host system 11 in the shared cache region 101 in advance, thereby optimizing the execution order of the requests, reducing unnecessary addressing time and page switching, and improving the overall response speed. (2) Mapping table caching: the shared cache region 101 can be used for storing and updating a logical-to-physical address mapping table (L2P Mapping Table), which speeds up the lookup of data locations and greatly improves efficiency, particularly for random read operations; a sorting operation may also be performed on the plurality of mapping table entries cached in the shared cache region 101 according to a sorting algorithm to obtain, for example, an ordered queue. (3) Data caching: when data is read from the memory module 122, if the data has already been loaded into the shared cache region 101, it can be retrieved directly from there, avoiding re-access to the slower memory module 122 and thereby speeding up data reading. Likewise, when new data is written, the data may be temporarily stored in the shared cache region 101, and it is then decided when and how to persist it to the memory module 122 according to a background optimization strategy. (4) Metadata processing: in addition to user data, the shared cache region 101 may also be used to cache key metadata such as file allocation tables (FAT) and wear leveling information, so that efficient data management and access patterns are maintained even under high load conditions.
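The command scheduling and entry sorting operations listed above can be pictured with the following assumed sketch; the request fields and the choice of sort key are illustrative only and are not details of the embodiment.

```python
# Illustrative sketch of two preset operations: reordering pending I/O requests
# and keeping cached mapping entries sorted. All field names are assumptions.

pending_requests = [
    {"op": "read", "lba": 900},
    {"op": "read", "lba": 12},
    {"op": "write", "lba": 13},
]

# (1) Command scheduling optimization: serve requests in logical-address order
# so that nearby addresses are handled together, reducing addressing overhead.
schedule = sorted(pending_requests, key=lambda r: r["lba"])

# (2) Keep the cached mapping entries ordered (an "ordered queue") so that the
# entry for a given logical address can be located quickly.
cached_entries = [(900, 41), (12, 7), (13, 8)]   # (logical unit, physical unit)
cached_entries.sort(key=lambda e: e[0])

print([r["lba"] for r in schedule])   # [12, 13, 900]
print(cached_entries)                 # [(12, 7), (13, 8), (900, 41)]
```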
In one embodiment, the memory control circuit 23 may establish a connection between the host system 11 and the storage device 12. For example, the memory control circuit 23 may perform a handshake (handshake) operation with the host system 11. The handshake operation is used to exchange information related to the establishment of a connection between the host system 11 and the storage 12, such as clock (clock) information and/or voltage information, etc.
In one embodiment, the memory control circuit 23 may establish a connection between the host system 11 and the storage device 12 according to the execution result of the handshake operation. The memory control circuit 23 can then access the shared buffer 101 through the connection.
In one embodiment, the memory control circuit 23 may store management data in the shared buffer 101. During access to the memory module 122, the memory control circuit 23 may query or update (i.e., modify) the management data in the shared cache area 101, respectively. For example, the management data may include part of the data (also referred to as mapping entries) in a logical-to-entity mapping table. The mapping table entry may carry mapping information (e.g., logical-to-entity mapping information). The mapping information may reflect a mapping relationship between at least one logical unit and at least one physical unit.
In one embodiment, when the host system 11 wants to read data belonging to a certain logical unit (also referred to as a first logical unit) from the storage device 12, the processor 111 can store the read information corresponding to the first logical unit in the shared buffer 101. For example, the read information may include a read instruction that instructs to read data belonging to the first logical unit. For example, the read instructions may include random read instructions and/or continuous read instructions. The random read instruction is used to instruct the reading of data from a single logical unit or a plurality of discrete logical units. The sequential read instruction is used for indicating to read data from a plurality of sequential logic units. The memory control circuit 23 can access the shared buffer 101 to obtain the read information.
After the read information is obtained, the memory control circuit 23 can determine whether the mapping table entry related to the first logical unit is cached in the shared buffer 101. If the mapping table entry associated with the first logical unit is not cached in the shared cache 101, the memory control circuit 23 may load the mapping table entry from the memory module 122 into the shared cache 101.
If the mapping table entry associated with the first logical unit is already cached in the shared cache 101 (or after the mapping table entry has been loaded into the shared cache 101 from the memory module 122), the memory control circuit 23 may query the mapping table entry in the shared cache 101 to obtain a mapping relationship between the first logical unit and a physical unit (also referred to as a first physical unit) in the memory module 122. The memory control circuit 23 may then read data (also referred to as first data) from the first physical unit in the memory module 122 according to the mapping relationship. The memory control circuit 23 may then transmit the read first data back to the host system 11 in response to the read information.
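A purely illustrative sketch of the read path just described follows; load_entry_from_flash, read_from_physical_unit, and the cache structure are hypothetical placeholders rather than the embodiment's actual interfaces.

```python
# Hypothetical read-path sketch: check whether the mapping entry for the first
# logical unit is cached in the shared buffer, load it from the memory module
# if not, then resolve the physical unit and read the data.

shared_cache = {}                     # logical unit -> physical unit (cached entries)

def load_entry_from_flash(logical_unit):
    # Placeholder for loading a mapping entry of the logical-to-physical
    # mapping table stored in the memory module.
    return {"logical": logical_unit, "physical": 42}

def read_from_physical_unit(physical_unit):
    # Placeholder for the actual flash read.
    return f"data@{physical_unit}"

def handle_read(logical_unit):
    if logical_unit not in shared_cache:                 # entry not cached yet
        entry = load_entry_from_flash(logical_unit)
        shared_cache[logical_unit] = entry["physical"]   # cache it in the shared buffer
    physical_unit = shared_cache[logical_unit]           # query the cached entry
    return read_from_physical_unit(physical_unit)        # data returned to the host

print(handle_read(5))   # loads the entry on first access, then reads the data
```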
In one embodiment, when the host system 11 wants to store data (also referred to as second data) belonging to a certain logical unit (also referred to as second logical unit) in the storage device 12, the processor 111 can store the write information corresponding to the second logical unit in the shared buffer 101. For example, the write information may include a write instruction indicating that data belonging to the second logical unit is updated. For example, the write instructions may include random write instructions and/or sequential write instructions. The random write instruction is used for indicating to update the data belonging to a single logic unit or a plurality of discontinuous logic units. The sequential write instruction is used for indicating to update data belonging to a plurality of sequential logic units. The memory control circuit 23 can access the shared buffer 101 to obtain the writing information.
After retrieving the write information, the memory control circuit 23 may store the second data into a physical unit (also referred to as a second physical unit) in the memory module 122 according to the write information. On the other hand, the memory control circuit 23 can determine whether the mapping table entry related to the second logical unit is cached in the shared buffer 101. If the mapping table entry associated with the second logical unit is not cached in the shared cache 101, the memory control circuit 23 may load the mapping table entry from the memory module 122 into the shared cache 101.
If the mapping table entry related to the second logical unit is already cached in the shared buffer 101 (or after the mapping table entry has been loaded into the shared buffer 101 from the memory module 122), the memory control circuit 23 may update the mapping table entry in the shared buffer 101 to establish the mapping relationship between the second logical unit and the second physical unit. The memory control circuit 23 may then inform the host system 11 of the completion of the write operation in response to the write information.
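For completeness, the write path differs from the read sketch above mainly in that the cached mapping entry is updated rather than only queried; the following is again a hypothetical sketch with invented names.

```python
# Hypothetical write-path sketch: data is programmed into a newly selected
# physical unit, and the cached mapping entry is updated to point at it.

shared_cache = {5: 42}                # cached mapping entries: logical -> physical

def program(physical_unit, data):
    # Placeholder for programming the second physical unit in the memory module.
    print(f"programmed {data!r} into physical unit {physical_unit}")

def handle_write(logical_unit, data, new_physical_unit):
    program(new_physical_unit, data)
    # Update (or create) the mapping entry in the shared buffer so that the
    # logical unit now points at the newly written physical unit.
    shared_cache[logical_unit] = new_physical_unit

handle_write(5, "hello", 77)
print(shared_cache)                   # {5: 77}
```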
In one embodiment, the memory control circuit 23 may also store other types of management data in the shared cache 101 (e.g., load them from the memory module 122 into the shared cache 101). For example, the management data may include valid count management data, wear leveling management data, bad block management data, or the like, and the present invention is not limited thereto. The valid count management data is used to manage the valid data storage status of at least some of the physical units in the memory module 122. For example, the valid count management data may include a valid count corresponding to at least one physical unit in the memory module 122. The wear-leveling management data is used to manage the wear status of at least some of the physical units in the memory module 122. For example, wear-leveling management data may include read counts, write counts, and/or erase counts corresponding to at least one physical unit in memory module 122. Bad block management data is used to manage defective physical units (also known as bad blocks) in the memory module 122. For example, bad block management data may be used to mark at least one physical unit in the memory module 122 as a bad block. The memory control circuit 23 may then access or manage the memory module 122 according to the management data in the shared buffer 101.
It should be noted that the shared buffer 101 disposed in the memory 112 occupies a portion of the storage space of the memory 112, thereby reducing the capacity of the memory 112 that can be used by the host system 11 itself. For example, after the shared buffer 101 is set, the remaining capacity of the memory 112 after the shared buffer 101 is deducted is the memory space available for the host system 11 itself. Therefore, if the capacity of the shared buffer 101 is larger, the operation performance of the host system 11 may be reduced (due to the reduced memory space available for the host system 11). However, if the capacity of the shared buffer 101 is too small, the performance of the storage device 12 may be reduced (due to the reduced memory space available for the storage device 12).
According to the technical solution provided by the embodiments of the present invention, however, the capacity of the shared buffer 101 can be dynamically adjusted according to the current data storage status of the shared buffer 101. Therefore, the performance of the storage device 12 or the whole storage system 10 can be effectively improved while affecting the operating performance of the host system 11 itself as little as possible.
Specifically, the technical solution provided by the embodiments of the present invention can implement fine-grained management of the mapping entries residing in the shared buffer 101, including but not limited to sorting, adding, and removing operations, while keeping the shared buffer 101 at a smaller capacity configuration (including but not limited to reducing its capacity requirement). In this way, on the basis of ensuring that the operating efficiency of the host system 11 is not interfered with, the performance of the storage device 12 and even the whole storage system 10 is further enhanced, an optimal balance between the processing capability of the host system 11 and the response speed of the storage device 12 is achieved, and the two can operate in an ideal state without affecting each other.
This solution is particularly suitable for scenarios aiming to improve random read-write performance. Conventionally, to maximize the performance of random reads and writes, all mapping entries should theoretically be loaded into the buffer memory 24. However, under the limitations of current product designs, the buffer memory 24 typically has a fixed capacity of, for example, 256 KB, while the mapping entries may amount to 1 GB or more of data (for example, for a storage device with a capacity of 1 TB). This makes it impossible to load all mapping entries at once, resulting in frequent batch loading of mapping entries, which increases latency and affects the read speed of the storage device 12. By employing the shared cache region 101 to efficiently manage these mapping entries, not only is this challenge solved, but independent and undisturbed optimization of host system 11 and storage device 12 performance is achieved.
In particular, this approach allows the storage system 10 to maintain high data access speeds and low latency while handling large numbers of mapping entries, thereby ensuring the random read and write performance metrics. At the same time, it also avoids problems such as increased cost from additional hardware resources, increased system complexity, and reduced stability. In summary, this innovative solution provides a viable and efficient way to address the challenges of high-performance storage systems.
In one embodiment, one or more mapping entries may be cached in the shared cache 101. After the memory control circuit 23 caches at least one mapping table entry in the shared buffer 101, the processor 111 can detect whether a mapping table entry corresponding to a specific application (also referred to as a specific mapping table entry) exists in the at least one mapping table entry currently cached in the shared buffer 101. If the specific mapping table entry exists in the at least one mapping table entry currently cached in the shared cache area 101, the processor 111 may determine a capacity (also referred to as a target capacity) from a plurality of candidate capacities. The processor 111 may then automatically adjust (e.g., increase or decrease) the capacity of the shared buffer 101 based on the target capacity. Thus, the performance of the host system 11 and the storage device 12 when executing the specific application program can be effectively improved. However, if the specific mapping table entry corresponding to the specific application does not exist in the at least one mapping table entry currently cached in the shared cache 101, the processor 111 may not adjust the capacity of the shared cache 101.
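Putting the steps of this paragraph together, one possible and purely illustrative realization is sketched below; the candidate capacities and application identifiers are invented example values, not values disclosed by the embodiment.

```python
# Illustrative sketch of the capacity-adjustment decision. The candidate
# capacities and application identifiers are made-up example values.

CANDIDATE_CAPACITIES = {                     # application type -> capacity (bytes)
    "maps_app": 64 * 1024 * 1024,
    "video_app": 32 * 1024 * 1024,
}

def adjust_shared_cache(cached_entries, current_capacity):
    # cached_entries: list of (entry_id, owning_application) tuples.
    specific = [app for _, app in cached_entries if app in CANDIDATE_CAPACITIES]
    if not specific:
        return current_capacity                # no specific entry -> no adjustment
    target_capacity = CANDIDATE_CAPACITIES[specific[0]]   # pick per application type
    return target_capacity                     # shared buffer adjusted to the target

entries = [(1, "background_sync"), (2, "maps_app")]
print(adjust_shared_cache(entries, 16 * 1024 * 1024))     # 67108864
```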
In one embodiment, after the memory control circuit 23 buffers at least one mapping table entry in the shared buffer 101, the processor 111 can track the type of the at least one mapping table entry buffered in the shared buffer 101 to obtain a tracking result. In other words, the tracking result may reflect the type of the at least one mapping table entry currently cached in the shared cache 101. Processor 111 may then determine, based on the tracking result, whether the specific mapping table entry exists in the at least one mapping table entry currently cached in shared cache area 101.
In one embodiment, the processor 111 may obtain identification information corresponding to a mapping entry (also referred to as a target mapping entry) in the shared cache 101. The identification information may reflect the type of the target mapping entry. Processor 111 may then determine, based on the identification information, whether the target mapping entry belongs to the specific mapping entry.
In one embodiment, when the target mapping table entry is cached in the shared cache area 101, identification information may be associated with the target mapping table entry according to the application program corresponding to the target mapping table entry. For example, assuming that the application corresponding to the target mapping table item is application A, identification information A corresponding to application A may be associated with the target mapping table item. Or, assuming that the application corresponding to the target mapping table item is application B, identification information B corresponding to application B may be associated with the target mapping table item.
In one embodiment, the processor 111 may query the management table according to the identification information corresponding to the target mapping table item to obtain a query result. The management table may record a plurality of types (also referred to as candidate types) corresponding to a plurality of pieces of identification information (also referred to as candidate identification information), respectively. Processor 111 may then determine, based on the query result, whether the target mapping entry belongs to the specific mapping entry. For example, if the query result reflects that the type of the target mapping table item is consistent with (e.g., the same as) the type of the specific mapping table item, processor 111 may determine that the target mapping table item belongs to the specific mapping table item. However, if the query result reflects that the type of the target mapping table item is not consistent with (e.g., is different from) the type of the specific mapping table item, processor 111 may determine that the target mapping table item does not belong to the specific mapping table item.
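A minimal sketch of this identification-information lookup, assuming a management table that maps identification values to application types (the table contents are invented), is given below.

```python
# Hypothetical sketch: each cached mapping entry carries identification
# information, and a management table resolves that information to a type.

MANAGEMENT_TABLE = {"id_A": "maps_app", "id_B": "background_sync"}   # assumed contents

def belongs_to_specific(entry_identification, specific_type):
    queried_type = MANAGEMENT_TABLE.get(entry_identification)   # the query result
    return queried_type == specific_type       # consistent type -> specific entry

print(belongs_to_specific("id_A", "maps_app"))          # True
print(belongs_to_specific("id_B", "maps_app"))          # False
```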
In one embodiment, after determining that the target mapping table entry currently cached in the shared cache area 101 belongs to the specific mapping table entry, the processor 111 may determine the type of the application program (i.e. the specific application program) corresponding to the target mapping table entry that belongs to the specific mapping table entry. Processor 111 may then determine one of the plurality of candidate capacities as the target capacity based on the type of the specific application program.
In one embodiment, if the application corresponding to the target mapping table (i.e., the specific application) is a certain application (also referred to as a first application), the processor 111 may determine a certain capacity (also referred to as a first capacity) of the plurality of candidate capacities as the target capacity. In one embodiment, if the application corresponding to the target mapping table (i.e., the specific application) is another application (also referred to as a second application), the processor 111 may determine another capacity (also referred to as a second capacity) of the plurality of candidate capacities as the target capacity. The first application is different from the second application. The first capacity is different from the second capacity.
In one embodiment, after determining the target capacity, the processor 111 may dynamically adjust the capacity of the shared buffer 101 to be consistent (e.g., identical) with the target capacity. In one embodiment, after determining the target capacity, the processor 111 may add or subtract the target capacity to or from the current capacity of the shared buffer 101.
In one embodiment, after determining the target capacity, the processor 111 may obtain a capacity adjustment parameter according to the target capacity. The processor 111 may then adjust (e.g., increase or decrease) the capacity of the shared buffer 101 based on the capacity adjustment parameter. For example, the processor 111 may input the target capacity to a calculation or a lookup table and obtain the capacity adjustment parameter according to the output of the calculation or the lookup table.
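The "calculation or lookup table" mentioned above could, for example, reduce to computing a delta between the current capacity and the target capacity; the following sketch assumes that trivial form and is not the embodiment's actual parameter derivation.

```python
# Hypothetical sketch: derive a capacity-adjustment parameter from the target
# capacity; here the parameter is simply the delta (in MB) to apply.

def capacity_adjustment_parameter(current_mb, target_mb):
    return target_mb - current_mb              # trivial stand-in for the calculation

def apply_adjustment(current_mb, target_mb):
    return current_mb + capacity_adjustment_parameter(current_mb, target_mb)

print(apply_adjustment(32, 64))   # the shared buffer grows from 32 MB to 64 MB
```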
For example, when running a map program (belonging to the specific application program), the processor 111 may dynamically increase the capacity of the shared buffer 101 according to the mapping entries currently cached in the shared buffer 101 and belonging to the map program. The enlarged shared buffer 101 may be more suitable for buffering the large amounts of map information that need to be loaded while the map program is running. It should be noted that how much capacity is increased (or decreased) may be determined according to actual requirements. In addition, the specific application program may also include other types of applications.
Thus, during execution of a particular application by host system 11, processor 111 can dynamically adjust the capacity of shared cache 101 to an appropriate size corresponding to the particular application. Thus, the performance of the host system 11 and the storage device 12 in executing the specific application program can be improved.
In one embodiment, the type of a mapping entry may reflect that the mapping entry belongs to at least one of an active foreground (foreground) application, an inactive foreground application, and a background (background) application. In one embodiment, assume that the current processor 111 is running applications A and B. The application program a runs in the foreground of an Operating System (OS), and the application program B runs in the background of the Operating System. At this point, the processor 111 may classify application a as an active foreground application and application B as a background application. It should be noted that those skilled in the art should know how to run different applications in the foreground and background of the operating system, and detailed descriptions are omitted here.
In one embodiment, assume that another application C is switched to the foreground run of the operating system. In response to application C being switched to the foreground run of the operating system, processor 111 may classify application C as an active foreground application. Meanwhile, the processor 111 may reclassify (e.g., demote) the application program a that originally belongs to the active foreground application as an inactive foreground application and maintain the application program B as a background application.
In an embodiment, the identification information corresponding to the target mapping table entry may reflect that the target mapping table entry belongs to an active foreground application, an inactive foreground application, or a background application. In addition, when the type of the target mapping table item changes, the processor 111 may correspondingly update the identification information of the target mapping table item. The updated identification information may reflect the current type of the target mapping entry. For example, when the target mapping table entry switches from belonging to an inactive foreground application to belonging to an active foreground application, the processor 111 may correspondingly update the identification information of the target mapping table entry, such that the updated identification information reflects that the target mapping table entry currently belongs to an active foreground application.
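The reclassification that happens when another application is switched to the foreground can be sketched as follows; the class labels and dictionary layout are assumptions made for the example.

```python
# Illustrative sketch of tracking application classes and demoting the previous
# foreground application when another application is switched to the foreground.

app_class = {"A": "active_foreground", "B": "background"}   # assumed initial state

def switch_to_foreground(app):
    for name, cls in app_class.items():
        if cls == "active_foreground":
            app_class[name] = "inactive_foreground"          # demote previous app
    app_class[app] = "active_foreground"

switch_to_foreground("C")
print(app_class)
# {'A': 'inactive_foreground', 'B': 'background', 'C': 'active_foreground'}
```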
In an embodiment, the specific mapping table entry must belong to an active foreground application or an inactive foreground application. That is, if a target mapping table item of the at least one mapping table item belongs to an active foreground application or an inactive foreground application, the processor 111 may allow the target mapping table item to be determined as a specific mapping table item when the type of the target mapping table item is consistent with (e.g., the same as) the type of the specific mapping table item. However, if the target mapping table item does not belong to an active foreground application or an inactive foreground application, processor 111 may not allow (or may prohibit) the target mapping table item to be determined as a specific mapping table item.
In an embodiment, if the target mapping table item in the at least one mapping table item belongs to a background application, the processor 111 may determine that the target mapping table item does not belong to the specific mapping table item. In other words, in the case where the target mapping table item belongs to a background application, the processor 111 may determine that the target mapping table item does not belong to the specific mapping table item even if the type of the target mapping table item is consistent with (e.g., the same as) the type of the specific mapping table item.
In one embodiment, when data is to be removed from the shared cache 101, the processor 111 may preferentially remove the remaining mapping entries in the shared cache 101 that do not belong to the specific mapping entry from the shared cache 101. In one embodiment, when data is to be removed from the shared cache 101, the processor 111 may prioritize or force the specific mapping table to be retained in the shared cache 101 (i.e., the specific mapping table in the shared cache 101 is not removed).
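The preferential-removal rule of this paragraph can be sketched like this, assuming each cached entry is tagged with whether it belongs to the specific mapping entries (the tagging scheme is invented for illustration).

```python
# Hypothetical eviction sketch: entries that do not belong to the specific
# mapping entries are removed first; specific entries are retained if possible.

def evict(entries, count):
    # entries: list of dicts such as {"id": ..., "specific": bool}
    non_specific = [e for e in entries if not e["specific"]]
    victims = non_specific[:count]             # remove non-specific entries first
    if len(victims) < count:                   # only then touch specific entries
        specific = [e for e in entries if e["specific"]]
        victims += specific[:count - len(victims)]
    victim_ids = {v["id"] for v in victims}
    return [e for e in entries if e["id"] not in victim_ids]

cache = [{"id": 1, "specific": True},
         {"id": 2, "specific": False},
         {"id": 3, "specific": False}]
print(evict(cache, 2))   # only the specific entry (id 1) remains
```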
In one embodiment, the performance of the host system 11 and the storage device 12 in executing the specific application program can be improved by keeping the specific mapping table entry in the shared buffer 101 as much as possible. On the other hand, by removing the remaining mapping entries not belonging to the specific mapping entry from the shared cache 101, the performance of the host system 11 and the storage device 12 when executing the specific application program can be effectively maintained (or even improved) even if the capacity of the shared cache 101 is reduced.
In an embodiment, in an extreme case, after dynamically adjusting the capacity of the shared buffer 101, only the specific mapping table entry may be stored in the shared buffer 101, and the remaining mapping table entries may be removed. Thus, the operation performance of the host system 11 and the storage device 12 can be maintained (or even improved) while the capacity of the shared buffer 101 is reduced as much as possible.
In one embodiment, the processor 111 may also configure multiple regions in the shared buffer 101 to store different types of mapping entries in a sorted manner. Thus, the performance of the host system 11 and the storage device 12 can be effectively improved.
In one embodiment, the processor 111 may detect the data storage amount of at least one of the memory 112 and the shared buffer 101. For example, the data storage amount may reflect how much data has currently been stored in at least one of the memory 112 and the shared buffer 101. In one embodiment, the processor 111 may detect whether the data storage amount reaches a threshold value. When the data storage amount reaches the threshold value, the processor 111 may perform the aforementioned operation of removing mapping table entries from the shared buffer 101. However, if the data storage amount does not reach the threshold value, the processor 111 may not perform the aforementioned operation of removing mapping table entries from the shared buffer 101.
In one embodiment, when the data storage amount of at least one of the memory 112 and the shared buffer 101 is relatively large (e.g., reaches the threshold value), the data that has little influence on the operation performance of the host system 11 (and the storage device 12) is preferentially removed (e.g., the remaining mapping entries not belonging to the specific mapping entry), so that additional memory space is released for the host system 11 (and the storage device 12). Thus, the performance of the host system 11 (and the storage device 12) can be improved.
It should be noted that, in an embodiment, the type of at least one mapping entry in the shared buffer 101 may be configured or adjusted according to the actual requirement, and is not limited to the above. In addition, in an embodiment, the specific application program may include a movie playing program, a music playing program, or other suitable application programs, which are not described herein.
In an embodiment, if multiple types of specific mapping table entries exist in the shared buffer 101 at the same time, the processor 111 may determine another target capacity (also referred to as a second target capacity) according to the target capacities (also referred to as first target capacities) corresponding to the multiple types of specific mapping table entries, respectively. For example, the processor 111 may average the determined plurality of first target capacities to obtain the second target capacity. Thus, even if there are multiple types of specific mapping entries in the shared buffer 101 at the same time, the capacity of the shared buffer 101 can be maintained as large as possible (e.g., near the average of the first target capacities) to achieve a better balance between controlling (e.g., reducing) the capacity of the shared buffer 101 and maintaining the system performance as much as possible.
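The averaging of several first target capacities into a second target capacity can be written directly; the capacities below are example numbers only.

```python
# Illustrative sketch: with several types of specific mapping entries present,
# the second target capacity is the average of their first target capacities.

first_target_capacities = [64, 32, 48]        # per-type first target capacities (MB)
second_target_capacity = sum(first_target_capacities) / len(first_target_capacities)
print(second_target_capacity)                 # 48.0
```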
In one embodiment, when a plurality of mapping entries belonging to the same type and/or having the same priority in the shared cache 101 are to be removed, the processor 111 may randomly remove at least one of the plurality of mapping entries from the shared cache 101. In one embodiment, by preferentially removing mapping entries belonging to a particular type and/or having a particular priority, the performance of host system 11 and storage device 12 may be maintained or even increased even though the capacity occupied by the shared buffer 101 in the memory 112 is reduced.
In one embodiment, when multiple mapping entries belonging to the same type and/or having the same priority in the shared cache 101 are to be removed, the processor 111 may determine which mapping entry of the multiple mapping entries is to be removed from the shared cache 101 according to the frequency, the number of times, or the point in time at which the multiple mapping entries are respectively accessed (e.g., accessed, queried, or used). For example, the processor 111 may compare the frequency, number of times, or point in time at which the plurality of mapping entries are respectively accessed (e.g., accessed, queried, or used) to obtain a comparison result. Processor 111 may then determine, based on the comparison result, which of the plurality of mapping entries to remove from the shared cache 101 preferentially. For example, the processor 111 may preferentially remove the mapping entries that, as exhibited by the comparison result, are accessed relatively infrequently and/or whose access time points are relatively far from the current system time. These evaluation factors may be taken into account individually or in combination when selecting the mapping entries to be removed preferentially. In this way, it can be ensured that the performance of the host system 11 and the storage device 12 is maintained or even improved after some of the mapping entries are removed from the shared buffer 101. In one embodiment, the processor 111 may also select one or more mapping entries that need to be removed preferentially from the plurality of mapping entries of the same type and/or with the same priority by using other sorting algorithms, depending on the actual requirements.
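Reading this paragraph as a least-recently-used / least-frequently-used style tiebreak (one plausible interpretation, not a statement of the embodiment's exact policy), a sketch could look like this; the entry fields are invented.

```python
# Hypothetical tiebreak among same-type, same-priority entries: the entry that
# was accessed least often and longest ago is removed first. Field names are
# invented; this is only one plausible reading of the paragraph above.

import time

now = time.time()
entries = [
    {"id": 1, "access_count": 30, "last_access": now - 5},
    {"id": 2, "access_count": 2,  "last_access": now - 900},
    {"id": 3, "access_count": 2,  "last_access": now - 60},
]

def removal_order(entry):
    # Lower access count first; among equals, the older last-access time first.
    return (entry["access_count"], entry["last_access"])

victim = min(entries, key=removal_order)
print(victim["id"])   # 2: accessed rarely and longest ago
```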
Fig. 4 is a flowchart illustrating a method for adjusting the capacity of a shared buffer according to an embodiment of the present invention. Referring to fig. 4, in step S401, a shared buffer is configured in a memory of a host system, wherein a storage device is configured to perform a preset operation based on data buffered in the shared buffer. In step S402, it is detected whether a specific mapping table corresponding to a specific application exists in at least one mapping table currently cached in the shared cache region. In step S403, it is determined whether the specific mapping table exists in at least one mapping table currently cached in the shared cache region. If the specific mapping table entry exists in at least one mapping table entry currently cached in the shared cache region, in step S404, a target capacity is determined from a plurality of candidate capacities. In step S405, the capacity of the shared buffer is adjusted according to the target capacity. However, if the specific mapping table is not present in the at least one mapping table currently cached in the shared cache, in step S406, the capacity of the shared cache is not adjusted.
The steps in FIG. 4 have been described in detail above and will not be repeated here. It should be noted that each step in FIG. 4 may be implemented as a plurality of program codes or circuits, and the present invention is not limited thereto. In addition, the method of FIG. 4 may be used in combination with the above exemplary embodiments, or may be used alone, and the present invention is not limited thereto.
In summary, the method for adjusting the capacity of the shared buffer and the storage system according to the embodiments of the present invention can dynamically adjust the capacity of the shared buffer according to the current data storage status of the shared buffer. Therefore, the performance of the storage device or the whole storage system can be effectively improved while affecting the operating performance of the host system as little as possible.
In one embodiment, the management and/or ordering mechanism described above for mapping entries may also be used to manage and/or order other types of data. For example, in one embodiment, the mapping table entries may be replaced by various instructions, valid count management data, wear-leveling management data, bad block management data, or other types of custom data, which are not repeated herein.
It should be noted that the above embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those skilled in the art that the technical solution described in the above embodiments may be modified or some or all of the technical features may be equivalently replaced, and these modifications or substitutions do not make the essence of the corresponding technical solution deviate from the scope of the technical solution of the embodiments of the present invention.

Claims (16)

1. A shared buffer capacity adjustment method, adapted to a storage system, wherein the storage system comprises a host system and a storage device, the host system is connected to the storage device, and the shared buffer capacity adjustment method comprises:
configuring a shared buffer in a memory of the host system, wherein the storage device is configured to perform a preset operation based on data cached in the shared buffer;
detecting whether a specific mapping table entry corresponding to a specific application exists among at least one mapping table entry currently cached in the shared buffer;
if the specific mapping table entry corresponding to the specific application exists among the at least one mapping table entry currently cached in the shared buffer, determining a target capacity from a plurality of candidate capacities; and
adjusting the capacity of the shared buffer according to the target capacity.
2. The shared buffer capacity adjustment method according to claim 1, wherein the at least one mapping table entry comprises a target mapping table entry, and the step of detecting whether the specific mapping table entry corresponding to the specific application exists among the at least one mapping table entry currently cached in the shared buffer comprises:
obtaining identification information corresponding to the target mapping table entry; and
determining whether the target mapping table entry belongs to the specific mapping table entry according to the identification information.
3. The shared buffer capacity adjustment method according to claim 2, wherein the step of determining whether the target mapping table entry belongs to the specific mapping table entry according to the identification information comprises:
querying a management table according to the identification information to obtain a query result, wherein the query result reflects a type of the target mapping table entry; and
determining whether the target mapping table entry belongs to the specific mapping table entry according to the query result.
4. The shared buffer capacity adjustment method according to claim 1, wherein the step of determining the target capacity from the plurality of candidate capacities comprises:
determining a type of the specific application corresponding to the specific mapping table entry; and
determining one of the plurality of candidate capacities as the target capacity according to the type of the specific application.
5. The shared buffer capacity adjustment method according to claim 4, wherein the step of determining the one of the plurality of candidate capacities as the target capacity according to the type of the specific application comprises:
if the specific application is a first application, determining a first capacity among the plurality of candidate capacities as the target capacity; and
if the specific application is a second application, determining a second capacity among the plurality of candidate capacities as the target capacity, wherein the first capacity is different from the second capacity.
6. The shared buffer capacity adjustment method according to claim 1, wherein the specific mapping table entry belongs to an active foreground application or an inactive foreground application.
7. The shared buffer capacity adjustment method according to claim 6, further comprising:
if a target mapping table entry among the at least one mapping table entry belongs to a background application, determining that the target mapping table entry does not belong to the specific mapping table entry.
8. The shared buffer capacity adjustment method according to claim 1, further comprising:
when data is to be removed from the shared buffer, preferentially removing, from the shared buffer, the remaining mapping table entries among the at least one mapping table entry that do not belong to the specific mapping table entry.
9. A storage system, comprising:
a host system; and
a storage device connected to the host system,
wherein the host system is configured to:
configure a shared buffer in a memory of the host system, wherein the storage device is configured to perform a preset operation based on data cached in the shared buffer;
detect whether a specific mapping table entry corresponding to a specific application exists among at least one mapping table entry currently cached in the shared buffer;
if the specific mapping table entry corresponding to the specific application exists among the at least one mapping table entry currently cached in the shared buffer, determine a target capacity from a plurality of candidate capacities; and
adjust the capacity of the shared buffer according to the target capacity.
10. The storage system according to claim 9, wherein the at least one mapping table entry comprises a target mapping table entry, and the operation of detecting whether the specific mapping table entry corresponding to the specific application exists among the at least one mapping table entry currently cached in the shared buffer comprises:
obtaining identification information corresponding to the target mapping table entry; and
determining whether the target mapping table entry belongs to the specific mapping table entry according to the identification information.
11. The storage system according to claim 10, wherein the operation of determining whether the target mapping table entry belongs to the specific mapping table entry according to the identification information comprises:
querying a management table according to the identification information to obtain a query result, wherein the query result reflects a type of the target mapping table entry; and
determining whether the target mapping table entry belongs to the specific mapping table entry according to the query result.
12. The storage system according to claim 9, wherein the operation of determining the target capacity from the plurality of candidate capacities comprises:
determining a type of the specific application corresponding to the specific mapping table entry; and
determining one of the plurality of candidate capacities as the target capacity according to the type of the specific application.
13. The storage system according to claim 12, wherein the operation of determining the one of the plurality of candidate capacities as the target capacity according to the type of the specific application comprises:
if the specific application is a first application, determining a first capacity among the plurality of candidate capacities as the target capacity; and
if the specific application is a second application, determining a second capacity among the plurality of candidate capacities as the target capacity, wherein the first capacity is different from the second capacity.
14. The storage system according to claim 9, wherein the specific mapping table entry belongs to an active foreground application or an inactive foreground application.
15. The storage system according to claim 14, wherein the host system is further configured to:
if a target mapping table entry among the at least one mapping table entry belongs to a background application, determine that the target mapping table entry does not belong to the specific mapping table entry.
16. The storage system according to claim 9, wherein the host system is further configured to:
when data is to be removed from the shared buffer, preferentially remove, from the shared buffer, the remaining mapping table entries among the at least one mapping table entry that do not belong to the specific mapping table entry.
CN202510045463.8A 2025-01-13 2025-01-13 Method for adjusting capacity of shared buffer area and storage system Active CN119473167B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510045463.8A CN119473167B (en) 2025-01-13 2025-01-13 Method for adjusting capacity of shared buffer area and storage system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510045463.8A CN119473167B (en) 2025-01-13 2025-01-13 Method for adjusting capacity of shared buffer area and storage system

Publications (2)

Publication Number Publication Date
CN119473167A true CN119473167A (en) 2025-02-18
CN119473167B CN119473167B (en) 2025-05-06

Family

ID=94587603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510045463.8A Active CN119473167B (en) 2025-01-13 2025-01-13 Method for adjusting capacity of shared buffer area and storage system

Country Status (1)

Country Link
CN (1) CN119473167B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106033397A (en) * 2015-03-17 2016-10-19 小米科技有限责任公司 Method and device for adjusting memory buffer, and terminal
CN113504880A (en) * 2021-07-27 2021-10-15 群联电子股份有限公司 Memory buffer management method, memory control circuit unit and storage device
CN114741336A (en) * 2022-06-09 2022-07-12 荣耀终端有限公司 Host side buffer adjustment method in memory, electronic device and chip system
CN117707639A (en) * 2023-08-30 2024-03-15 荣耀终端有限公司 Application startup acceleration method, electronic device and storage medium

Also Published As

Publication number Publication date
CN119473167B (en) 2025-05-06

Similar Documents

Publication Publication Date Title
US8819358B2 (en) Data storage device, memory system, and computing system using nonvolatile memory device
US8103820B2 (en) Wear leveling method and controller using the same
US9098395B2 (en) Logical block management method for a flash memory and control circuit storage system using the same
US9128618B2 (en) Non-volatile memory controller processing new request before completing current operation, system including same, and method
TW201842444A (en) Garbage collection
CN111400201B (en) Data sorting method of flash memory, storage device and control circuit unit
CN113015965A (en) Performing mixed wear leveling operations based on a subtotal write counter
US20130013853A1 (en) Command executing method, memory controller and memory storage apparatus
CN110998550A (en) Memory addressing
US8037236B2 (en) Flash memory writing method and storage system and controller using the same
TWI698749B (en) A data storage device and a data processing method
CN101957797A (en) Flash memory logic block management method and its control circuit and storage system
CN113885692A (en) Memory efficiency optimization method, memory control circuit unit and memory device
TWI829363B (en) Data processing method and the associated data storage device
CN112230849B (en) Memory control method, memory storage device and memory controller
CN119473167B (en) Method for adjusting capacity of shared buffer area and storage system
CN119473163B (en) Shared buffer capacity adjustment method and storage system based on type tracking
CN119440427B (en) Shared buffer data management method and storage system based on type tracking
CN119440428B (en) Shared buffer data management method and storage system based on counting information
CN119473164B (en) Shared buffer data management method and storage system for dynamically determining management policy
CN119440429B (en) Shared cache capacity adjustment method and storage system based on counting information
CN119473166B (en) Shared buffer data management method and storage system based on verification information
CN117806533A (en) Data processing method and corresponding data storage device
CN118331511B (en) Memory management method and memory controller
TWI814590B (en) Data processing method and the associated data storage device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant