CN115994101A - Flash memory device and data management method thereof - Google Patents
Flash memory device and data management method thereof
- Publication number
- CN115994101A (application number CN202211521508.7A)
- Authority
- CN
- China
- Prior art keywords
- data
- address
- physical address
- logical
- cache module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/06—Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0877—Cache access modes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/10—Address translation
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The embodiment of the application relates to the field of storage device applications, and discloses a flash memory device and a data management method thereof, wherein the method comprises the following steps: establishing a mapping table from logical addresses to physical addresses, wherein the addressing granularity of the physical address = N × the addressing granularity of the logical address, N is a positive integer and N is greater than or equal to 2; dividing the cache module into a data cache module to be flushed and a data cache module to be combined; acquiring a write request sent by a host, and dividing the write data into the data cache module to be flushed and/or the data cache module to be combined; and flushing the data in the data cache module to be flushed to the flash memory medium, or combining N minimum write units in the data cache module to be combined into a flush unit, and flushing the flush unit to the data cache module to be flushed or the flash memory medium. By making the addressing granularity of the physical address N times the addressing granularity of the logical address, the performance of the flash memory device in random write scenarios can be improved, and the service life of the flash memory device can be prolonged.
Description
Technical Field
The present disclosure relates to the field of storage device applications, and in particular, to a flash memory device and a data management method thereof.
Background
Flash memory devices, for example: a solid state disk (Solid State Drive, SSD) is a hard disk made of a solid-state electronic memory chip array, and the solid state disk comprises a control unit and a memory unit (FLASH memory chip or DRAM memory chip).
Because the flash memory device has special data read-write characteristics, the operating system cannot manage it directly; a flash translation layer (Flash Translation Layer, FTL) needs to be arranged between the operating system and the flash memory device to complete the mapping from the host (or user) logical address space to the flash (Flash) physical address space. However, ever-larger flash memory capacities make the mapping table larger, while the memory space of the flash memory device does not expand in proportion to the flash memory capacity.
The prior art reduces the size of the mapping table by increasing the management granularity of the flash solid state disk, but when the overall read-write length of the workload is smaller than the management granularity, write amplification increases.
Disclosure of Invention
The embodiment of the application provides a flash memory device and a data management method thereof, which can improve the performance of the flash memory device in a random writing scene and prolong the service life of the flash memory device.
The embodiment of the application provides the following technical scheme:
in a first aspect, an embodiment of the present application provides a data management method of a flash memory device, where the flash memory device includes a flash memory medium and a cache module, the method includes:
establishing a mapping table from a logical address to a physical address, wherein the addressing granularity of the physical address = N × the addressing granularity of the logical address, N is a positive integer and N is greater than or equal to 2;
dividing the cache module into a data cache module to be flushed and a data cache module to be combined, wherein the data granularity of the data cache module to be combined is equal to the addressing granularity of the logical address, and the data granularity of the data cache module to be flushed is equal to the addressing granularity of the physical address;
acquiring a write request sent by a host, wherein the write request comprises write data of the host, and the write data comprises a plurality of minimum write units;
dividing the written data into a data cache module to be flushed and/or a data cache module to be combined according to the size of the written data;
and flushing the data in the data cache module to be flushed to the flash memory medium, or combining N minimum write units in the data cache module to be combined into a flush unit, and flushing the flush unit to the data cache module to be flushed or the flash memory medium, wherein the size of the flush unit is equal to the addressing granularity of the physical address.
In a second aspect, embodiments of the present application provide a flash memory device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a data management method of a flash memory device as in the first aspect.
In a third aspect, embodiments of the present application also provide a non-volatile computer-readable storage medium storing computer-executable instructions for enabling a flash memory device to perform a data management method of a flash memory device as in the first aspect.
The beneficial effects of the embodiment of the application are as follows: different from the prior art, the data management method of a flash memory device provided in the embodiment of the present application is applied to a flash memory device including a flash memory medium and a cache module, and the method includes: establishing a mapping table from a logical address to a physical address, wherein the addressing granularity of the physical address = N × the addressing granularity of the logical address, N is a positive integer and N is greater than or equal to 2; dividing the cache module into a data cache module to be flushed and a data cache module to be combined, wherein the data granularity of the data cache module to be combined is equal to the addressing granularity of the logical address, and the data granularity of the data cache module to be flushed is equal to the addressing granularity of the physical address; acquiring a write request sent by a host, wherein the write request comprises write data of the host, and the write data comprises a plurality of minimum write units; dividing the write data into the data cache module to be flushed and/or the data cache module to be combined according to the size of the write data; and flushing the data in the data cache module to be flushed to the flash memory medium, or combining N minimum write units in the data cache module to be combined into a flush unit, and flushing the flush unit to the data cache module to be flushed or the flash memory medium, wherein the size of the flush unit is equal to the addressing granularity of the physical address.
The performance of the flash memory device in random write scenarios can be improved, and because data no longer needs to be read back at logical-address granularity before being written, no extra write amplification is introduced, so the service life of the flash memory device can be prolonged.
Drawings
One or more embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals indicate similar elements; the figures are not to be taken in a limiting sense unless otherwise indicated.
Fig. 1 is a schematic structural diagram of a flash memory device according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a mapping table from logical addresses to physical addresses of a flash memory device according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a data distribution of a logical address according to an embodiment of the present disclosure;
Fig. 4 is a flowchart of a data management method of a flash memory device according to an embodiment of the present application;
FIG. 5 is a schematic diagram of another mapping table of logical addresses to physical addresses provided by embodiments of the present application;
FIG. 6 is a schematic diagram of a mapping table of logical addresses to physical addresses provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a cache module according to an embodiment of the present application;
fig. 8 is a detailed flowchart of step S404 in fig. 4;
fig. 9 is a detailed flowchart of step S443 in fig. 8;
FIG. 10 is a schematic diagram of a data cache set according to an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of writing data according to an embodiment of the present application;
FIG. 12 is a schematic diagram of dividing write data according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a flow of reading data according to an embodiment of the present disclosure;
fig. 14 is a schematic flow chart of garbage collection provided in an embodiment of the present application;
FIG. 15 is a schematic diagram of a mapping table of physical addresses to logical addresses provided in an embodiment of the present application;
FIG. 16 is a schematic diagram of another mapping table of physical addresses to logical addresses provided by embodiments of the present application;
FIG. 17 is a schematic diagram of a mapping table of physical addresses to logical addresses provided by an embodiment of the present application;
Fig. 18 is a schematic diagram of a refinement flow of step S142 of fig. 14;
FIG. 19 is a detailed flowchart of searching for valid data according to an embodiment of the present application;
FIG. 20 is a schematic diagram of a P2L table and an L2P table according to an embodiment of the present disclosure;
FIG. 21 is a schematic view of a garbage collection provided in an embodiment of the present application;
FIG. 22 is a schematic diagram of another data cache set provided by an embodiment of the present application;
fig. 23 is a schematic structural diagram of a flash memory device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that, if not conflicting, the various features in the embodiments of the present application may be combined with each other, which is within the protection scope of the present application. In addition, while functional block division is performed in a device diagram and logical order is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart. Moreover, the words "first," "second," "third," and the like as used herein do not limit the data and order of execution, but merely distinguish between identical or similar items that have substantially the same function and effect.
The following specifically describes the technical scheme of the present application with reference to the drawings of the specification:
referring to fig. 1, fig. 1 is a schematic structural diagram of a flash memory device according to an embodiment of the present application;
as shown in fig. 1, the flash memory device 100 includes a flash memory medium 110 and a controller 120 connected to the flash memory medium 110. The flash memory device 100 is in communication connection with the host 200 through a wired or wireless manner, so as to implement data interaction.
The flash memory medium 110, as the storage medium of the flash memory device 100, is also called a flash memory, Flash memory, or Flash granule. It is a type of storage device and a nonvolatile memory that can retain data for a long time even without power supply; its storage characteristics are comparable to those of a hard disk, which makes the flash memory medium 110 the basis of the storage media of various portable digital devices.
The controller 120 includes a data converter 121, a processor 122, a buffer 123, a flash memory controller 124, and an interface 125.
The data converter 121 is connected to the processor 122 and the flash memory controller 124, respectively, and is used for converting binary data into hexadecimal data and vice versa. Specifically, when the flash memory controller 124 writes data to the flash memory medium 110, the binary data to be written is converted into hexadecimal data by the data converter 121 and then written to the flash memory medium 110. When the flash memory controller 124 reads data from the flash memory medium 110, the hexadecimal data stored in the flash memory medium 110 is converted into binary data by the data converter 121, and the converted data is then read from the binary data page register. The data converter 121 may include a binary data register and a hexadecimal data register, where the binary data register may be used to hold data converted from hexadecimal to binary, and the hexadecimal data register may be used to hold data converted from binary to hexadecimal.
The processor 122 is connected to the data converter 121, the buffer 123, the flash memory controller 124 and the interface 125 through a bus or in other manners, and is configured to execute the nonvolatile software programs, instructions and modules stored in the buffer 123, so as to implement any of the method embodiments of the present application.
The buffer 123 is mainly used for buffering the read/write command sent by the host 200 and the read data or write data obtained from the flash memory medium 110 according to the read/write command sent by the host 200.
The flash memory controller 124 is connected to the flash memory medium 110, the data converter 121, the processor 122 and the buffer 123. It is used for accessing the flash memory medium 110 at the back end and managing various parameters and data I/O of the flash memory medium 110; for providing the access interface and protocol, implementing the corresponding SAS/SATA target protocol end or NVMe protocol end, obtaining the I/O commands sent by the host 200, decoding them into internal private data structures, and queuing them for execution; and for handling the core processing of the flash translation layer (Flash Translation Layer, FTL).
The interface 125 is connected to the host 200 as well as the data converter 121, the processor 122 and the buffer 123, and is configured to receive data sent by the host 200 or data sent by the processor 122, so as to implement data transmission between the host 200 and the processor 122. The interface 125 may be a SATA-2 interface, a SATA-3 interface, a SAS interface, an mSATA interface, a PCI-E interface, an NGFF interface, a CFast interface, an SFF-8639 interface, or an M.2 NVMe/SATA interface.
Currently, in most enterprise-level flash memory devices, the mapping table of logical addresses to physical addresses (Logical to Physical, L2P), i.e. the L2P table, is cached in its entirety in a dynamic random access memory (Dynamic Random Access Memory, DRAM), typically using 4 bytes as the size of each mapping unit, i.e. physical address, with one mapping unit corresponding to 4K of host (Host) data (flash page size 4K). If the flash memory capacity is 1TB, the L2P table requires a DRAM capacity of 1GB; similarly, if the flash memory capacity is 4TB/8TB, the L2P table requires a DRAM capacity of about 4GB/8GB.
Referring to fig. 2, fig. 2 is a schematic diagram of a mapping table from logical addresses to physical addresses of a flash memory device according to an embodiment of the present application;
As shown in fig. 2, in the L2P table, the address number of one logical address (Logical Mapping Address, LMA) corresponds to the address number of one physical address (Physical Mapping Address, PMA), and the address number of the physical address (PMA) can be queried from the L2P table according to the address number of the logical address (LMA).
Currently, for flash memory devices with a capacity of 8TB and below, the logical address (LMA) generally uses 4K as its minimum addressing granularity, the physical address (PMA) also uses 4K as its minimum addressing granularity, and the physical address (PMA) is represented by 4 bytes. The maximum representable address range is then 4K × 2^32 (the number of values a 4-byte entry can hold) = 16TB; however, because 1 bit of the 32 bits in the 4-byte entry is reserved in the firmware design, the remaining 31 bits can identify at most 8TB of flash memory capacity. In practice, therefore, the maximum physical address range that 4 bytes can represent is generally only suitable for flash memory devices with a capacity of 8TB and below. When the flash memory capacity exceeds 8TB, 4 bytes are insufficient to represent the full physical address range.
Currently, enterprise-level flash memory devices typically employ large granularity mapping to solve the above problems, such as: setting the addressing granularity of both logical addresses (LMA) and physical addresses (PMA) to 8K, then 4Byte can be used to represent the physical address range of a flash memory device with a flash memory capacity of 16 TB; similarly, setting the addressing granularity of both logical addresses (LMA) and physical addresses (PMA) to 16K, then 4Byte can be used to represent the physical address range of a flash memory device with a flash capacity of 32 TB.
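To make the capacity arithmetic above concrete, the following standalone C sketch (an illustration added here, not code from the patent) computes the maximum addressable capacity of a 4-byte PMA entry with 1 of its 32 bits reserved, for 4K/8K/16K mapping granularities; it reproduces the 8TB/16TB/32TB figures quoted above.

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative arithmetic only: with a 4-byte L2P entry in which 1 of the
 * 32 bits is reserved by firmware, 31 bits remain to number physical mapping
 * units, so the addressable capacity is 2^31 * mapping granularity. */
int main(void) {
    const uint64_t usable_entries = 1ULL << 31;              /* 31 usable bits */
    const uint64_t granularity[] = { 4096, 8192, 16384 };    /* 4K, 8K, 16K */

    for (unsigned i = 0; i < 3; i++) {
        uint64_t capacity = usable_entries * granularity[i];
        printf("granularity %2lluK -> max capacity %2lluTB\n",
               (unsigned long long)(granularity[i] / 1024),
               (unsigned long long)(capacity >> 40));
    }
    return 0;
}
```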
It should be noted that, the addressing granularity of the logical address (LMA) in the present application may be 8K, 16K, 32K or other sizes, and the specific scheme of the large granularity mapping is described below with the addressing granularity of the logical address (LMA) being 8K:
When the addressing granularity of the logical addresses (LMA) of the firmware is 8K, the logical address space is in fact split at a granularity of 8K, which requires that the host data within each logical address (LMA) must be stored together contiguously. The host typically uses 4K (or 512B) as its minimum write unit.
When data is read, the address number corresponding to the logical block address (Logical Block Address, LBA) is first converted into the address number of a logical address (LMA), the physical address (PMA) is then queried through the L2P table, and a remainder operation is then performed on the address number corresponding to the Logical Block Address (LBA) to determine the position of the data corresponding to the logical address (LMA) within the PMA.
If the capacity of the flash memory device is an integer multiple of 4K, then if the sector format (i.e., the minimum write unit) of the host (Host) is 4K, the address numbers of the Logical Block Addresses (LBA) are 1 to N; if the sector format (i.e., the minimum write unit) of the host (Host) is 512B, the address numbers of the Logical Block Addresses (LBA) are 0 to 8*N. If the addressing granularity of the Logical Block Address (LBA) is 4K, the address number of the logical address (LMA) = the address number corresponding to the Logical Block Address (LBA); if the addressing granularity of the Logical Block Address (LBA) is 512B, the address number of the logical address (LMA) = the address number corresponding to the Logical Block Address (LBA) / 8.
For example: if the addressing granularity of the Logical Block Address (LBA) is 4K and the host reads the data with address number 11 of the Logical Block Address (LBA), then the address number of the logical address (LMA) = the address number 11 of the Logical Block Address (LBA); the physical address (PMA) corresponding to LMA11 is then queried through the L2P table, and the address number 11 of the Logical Block Address (LBA) is taken modulo 2 (2 = addressing granularity 8K of the logical address (LMA) / addressing granularity 4K of the Logical Block Address (LBA)), giving a remainder of 1, where 1 indicates the second position, within the physical address (PMA), of the data corresponding to the logical address (LMA). It will be appreciated that taking any number modulo 2 has only two possible results, 0 and 1: 0 represents the first position, within the physical address (PMA), of the data corresponding to the logical address (LMA), and 1 represents the second position.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating a data distribution situation of a logical address according to an embodiment of the present application;
in the embodiment of the present application, the host takes 4K (or 512B) as the minimum writing unit, and fig. 3 takes the host as the minimum writing unit as an example.
As shown in fig. 3, when the addressing granularity of the logical address (LMA) is 8K and the host uses 4K as the minimum write unit, the host may write the 4K data of a logical address discretely: both logical address 0 (LMA0) and logical address 1 (LMA1) have data written only in their first 4K position and no data in their second 4K position. At this point, the data of LMA0 and LMA1 do not form the required contiguous 8K and cannot be written.
Therefore, the data originally corresponding to LMA0 and LMA1 must be read out from the flash memory medium, the original data and the new data must be combined into a new 8K and written to the flash memory medium, and the L2P table must be updated. However, this approach introduces an extra read for each write inside the firmware in random write scenarios, which reduces write performance and increases write amplification (more 4K data is written than the host actually writes), thereby shortening the lifetime of the flash memory device.
Based on this, the embodiment of the application provides a data management method of a flash memory device, so as to improve the performance of the flash memory device in a random write scene, and write data after reading a logical address is not needed, so that no extra write amplification is introduced, and the service life of the flash memory device can be prolonged.
Referring to fig. 4, fig. 4 is a flow chart of a data management method of a flash memory device according to an embodiment of the present application;
the data management method of the flash memory device is applied to the flash memory device, and the flash memory device comprises a flash memory medium and a cache module.
As shown in fig. 4, the data management method of the flash memory device includes:
step S401: establishing a mapping table from a logical address to a physical address;
Specifically, the addressing granularity of the physical address = N × the addressing granularity of the logical address, where N is a positive integer and N is greater than or equal to 2. The addressing granularity of logical addresses (LMA) is 4K, and the addressing granularity of physical addresses (PMA) = N × 4K, for example 8K, 16K, and so on.
In this embodiment of the present application, the mapping table from the logical address to the physical address includes a mapping relationship between an address number of the logical address and an address number of the physical address, where the address number of the logical address is, for example: LMA0, LMA1, etc., the address number of the physical address, for example: PMA0, PMA1, etc.
Referring to fig. 5 again, fig. 5 is a schematic diagram of another mapping table from logical address to physical address according to the embodiment of the present application;
in the embodiment of the application, when the addressing granularity of the logical addresses (LMA) is 4K and the addressing granularity of the physical addresses (PMA) is 8K, in the mapping table from logical addresses to physical addresses, one physical address (PMA) has 2 logical addresses (LMA) corresponding to it.
As shown in fig. 5, LMA0 corresponds to PMA0 with LMA3, LMA1 corresponds to PMA2 with LMA4, and LMA2 corresponds to PMA1 with LMA 5.
Referring to fig. 6 again, fig. 6 is a schematic diagram of a mapping table from logical address to physical address according to another embodiment of the present application;
In the embodiment of the application, when the addressing granularity of the logical addresses (LMA) is 4K and the addressing granularity of the physical addresses (PMA) is 12K, in the mapping table from logical addresses to physical addresses, one physical address (PMA) has 3 logical addresses (LMA) corresponding to it.
As shown in fig. 6, LMA0, LMA2, and LMA4 correspond to PMA0, and LMA1, LMA3, and LMA5 correspond to PMA1.
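To make Figs. 5 and 6 concrete, the following sketch (an assumed flat-array representation, not taken from the patent) encodes the L2P table of Fig. 5 with N = 2: entries holding the same value express that two logical addresses share one 8K physical mapping unit, and the 4K slot inside that unit is always LMA modulo N.

```c
#include <stdint.h>

#define N 2u   /* assumed: PMA granularity = N x LMA granularity, as in Fig. 5 */

/* L2P table indexed by LMA number; each 4-byte entry holds a PMA number.
 * The values reproduce Fig. 5: LMA0 and LMA3 share PMA0, LMA1 and LMA4 share
 * PMA2, LMA2 and LMA5 share PMA1. */
static const uint32_t l2p_fig5[6] = {
    /* LMA0 */ 0,
    /* LMA1 */ 2,
    /* LMA2 */ 1,
    /* LMA3 */ 0,
    /* LMA4 */ 2,
    /* LMA5 */ 1,
};

/* The 4K slot inside the shared PMA is not stored anywhere: it is always
 * LMA % N, e.g. LMA3 % 2 = 1, so LMA3 occupies the second 4K of PMA0. */
static inline uint32_t pma_of(uint32_t lma)  { return l2p_fig5[lma]; }
static inline uint32_t slot_of(uint32_t lma) { return lma % N; }
```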
Step S402: dividing the cache module into a data cache module to be flushed and a data cache module to be combined;
specifically, the buffer module is used for buffering the write-in data of the host, the data granularity of the data buffer module to be combined is equal to the addressing granularity of the logical address, and the data granularity of the data buffer module to be flushed is equal to the addressing granularity of the physical address. For example: if the addressing granularity of the logic address (LMA) is 4K, the data granularity of the data cache module to be combined is equal to 4K; if the addressing granularity of the physical address (PMA) is 8K, the data granularity of the data cache module to be flushed is equal to 8K; if the addressing granularity of the physical address (PMA) is 16K, the data granularity of the data cache module to be flushed is equal to 16K.
Referring to fig. 7 again, fig. 7 is a schematic diagram of a buffer module according to an embodiment of the present application;
as shown in fig. 7, the buffer module includes a data buffer module to be flushed and a data buffer module to be combined. In an embodiment of the present application, the cache module includes a cache memory (cache).
Step S403: acquiring a write request sent by a host;
Specifically, the write request sent by the host through the non-volatile memory host controller interface specification (Non-Volatile Memory Express, NVMe) is obtained. NVMe is a logical device interface specification and a bus transmission protocol specification based on the device logical interface (corresponding to the application layer in the communication protocol), used to access nonvolatile memory media (such as solid state drives using flash memory) attached through the PCI Express (PCIe) bus. The write request includes the write data of the host, the write data includes a plurality of minimum write units, and the data granularity of the minimum write unit is 4K or 512B.
Step S404: dividing the written data into a data cache module to be flushed and/or a data cache module to be combined according to the size of the written data;
Specifically, the size of the write data = M × sector, where M is a positive integer and M is greater than or equal to 1, and the sector is the data granularity of the host sector format, i.e. the minimum write unit, which is 4K or 512B. It will be appreciated that the data granularity (sector) of the minimum write unit may also take other values, such as 1K, 2K, etc.
In this embodiment of the present application, the write data is stored on a storage medium such as a DRAM or an SRAM in the firmware system, and dividing the write data into the data cache module to be flushed and/or the data cache module to be combined changes a mapping relationship to which the write data belongs, and does not involve memory copying.
Referring to fig. 8 again, fig. 8 is a detailed flowchart of step S404 in fig. 4;
as shown in fig. 8, this step S404: dividing the writing data into a data caching module to be flushed and/or a data caching module to be combined according to the size of the writing data, wherein the data caching module comprises:
step S441: if the size of the written data is larger than the addressing granularity of the physical address, dividing the continuous N minimum writing units in the written data into a data cache module to be flushed, and dividing the rest data in the written data into a data cache module to be combined;
Specifically, the addressing granularity of the physical address = N × the addressing granularity of the logical address, where N is a positive integer and N is greater than or equal to 2; the data granularity of the data cache module to be combined is equal to the addressing granularity of the logical address, and the data granularity of the data cache module to be flushed is equal to the addressing granularity of the physical address.
For example, the data granularity of the minimum write unit is 4K, the addressing granularity of the logical address (LMA) is 4K, N = 2, the addressing granularity of the physical address (PMA) is 8K, the data granularity of the data cache module to be combined is 4K, and the data granularity of the data cache module to be flushed is 8K. If the size of the write data is greater than the addressing granularity 8K of the physical address (PMA), the 8K of data formed by 2 consecutive minimum write units in the write data is divided into the data cache module to be flushed, and the remaining data is divided into the data cache module to be combined.
Or, the data granularity of the minimum write unit is 4K, the addressing granularity of the logical address (LMA) is 4K, N = 3, the addressing granularity of the physical address (PMA) is 12K, the data granularity of the data cache module to be combined is 4K, and the data granularity of the data cache module to be flushed is 12K. If the size of the write data is greater than the addressing granularity 12K of the physical address (PMA), the 12K of data formed by 3 consecutive minimum write units in the write data is divided into the data cache module to be flushed, and the remaining data is divided into the data cache module to be combined.
It should be noted that, in the embodiment of the present application, the addressing granularity of the physical address may be 16K, 32K or other values, and the processing manner is similar to the above manner, which is not repeated herein.
It can be understood that, because the data granularity corresponding to the data cache module to be flushed is N × the data granularity of the minimum write unit, dividing the consecutive N minimum write units in the write data into the data cache module to be flushed in this embodiment of the present application includes: according to the data order of the write data, taking N minimum write units as a unit, dividing groups of N minimum write units in the write data into the data cache module to be flushed, so that the size of each piece of data divided into the data cache module to be flushed equals the data granularity of N minimum write units, wherein the data order of the write data is characterized by the order of the logical block sequence numbers.
Step S442: if the size of the written data is equal to the addressing granularity of the physical address, dividing the written data into a data cache module to be flushed;
specifically, the data granularity of the data cache module to be flushed is equal to the addressing granularity of the N logical addresses, and if the size of the written data is equal to the addressing granularity of the N logical addresses, N is a positive integer and N is greater than or equal to 2, the written data is partitioned into the data cache module to be flushed.
For example, the addressing granularity of the logical address (LMA) is 4K, n=2, the addressing granularity of the physical address (PMA) is 8K, the data granularity of the data cache module to be flushed is 8K, and if the size of the write data is equal to the addressing granularity of the physical address (PMA) 8K, the write data is partitioned into the data cache module to be flushed.
Or the addressing granularity of the logic address (LMA) is 4K, N=3, the addressing granularity of the physical address (PMA) is 12K, the data granularity of the data cache module to be flushed is 12K, and if the size of the written data is equal to the addressing granularity of the physical address (PMA) of 12K, the written data is divided into the data cache module to be flushed.
Step S443: and if the size of the written data is smaller than the addressing granularity of the physical address, dividing the written data into the data cache modules to be combined.
For example, the addressing granularity of the logical address (LMA) is 4K, the data granularity of the data cache module to be combined is 4K, N = 2, and the addressing granularity of the physical address (PMA) is 8K; if the size of the write data is smaller than the addressing granularity 8K of the physical address (PMA), the write data is divided into the data cache module to be combined.
Or the addressing granularity of the logic address (LMA) is 4K, the data granularity of the data cache module to be combined is 4K, N=3, the addressing granularity of the physical address (PMA) is 12K, and if the size of the written data is smaller than the addressing granularity of the physical address (PMA) by 12K, the written data is divided into the data cache module to be combined.
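The size-based routing of steps S441 to S443 can be sketched in C as follows; the function and helper names (route_write, flush_cache_append, combine_cache_append) and the fixed N are illustrative assumptions, not part of the patent.

```c
#include <stdint.h>

#define N 2u   /* assumed: PMA granularity = N x LMA granularity */

/* Assumed hooks into the two cache modules (not named in the patent). */
void flush_cache_append(uint64_t first_lma, uint32_t unit_count); /* N units at once */
void combine_cache_append(uint64_t lma);                          /* one 4K unit    */

/* Route a write of `units` consecutive 4K minimum write units starting at
 * logical address `first_lma`, following steps S441-S443:
 *  - every full group of N consecutive units goes to the data cache module
 *    to be flushed (covers the "greater than" and "equal to" cases);
 *  - whatever is left (fewer than N units) goes to the data cache module
 *    to be combined. */
void route_write(uint64_t first_lma, uint32_t units)
{
    uint32_t i = 0;

    while (units - i >= N) {                 /* steps S441/S442 */
        flush_cache_append(first_lma + i, N);
        i += N;
    }
    for (; i < units; i++)                   /* step S443 */
        combine_cache_append(first_lma + i);
}
```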
Specifically, referring to fig. 9 again, fig. 9 is a detailed flowchart of step S443 in fig. 8;
In this embodiment of the present application, the data cache module to be combined includes N data cache sets, each of which corresponds one-to-one to a set number.
As shown in fig. 9, this step S443: if the size of the written data is smaller than the addressing granularity of the physical address, dividing the written data into a data cache module to be combined, wherein the data cache module comprises:
Step S4431: if the size of the write data is smaller than the addressing granularity of the physical address, performing a remainder operation on the address number corresponding to each minimum write unit in the write data to obtain a remainder result for the address number corresponding to each write unit;
Specifically, the addressing granularity of the physical address = N × the addressing granularity of the logical address, where N is a positive integer and N is greater than or equal to 2, and performing a remainder operation on the address number corresponding to each minimum write unit in the write data to obtain a remainder result for the address number corresponding to each write unit includes:
taking the address number corresponding to each minimum write unit in the write data modulo N to obtain a remainder result for the address number corresponding to each write unit, wherein the range of the remainder results is [0, N-1].
For example: if the size of the write data is smaller than N × the addressing granularity of the logical address, the address number corresponding to each minimum write unit in the write data is taken modulo N to obtain the remainder result for the address number corresponding to each write unit.
For example, the addressing granularity of the logical address (LMA) is 4K, N = 2, the addressing granularity of the physical address (PMA) is 8K, and the size of the write data is smaller than the addressing granularity 8K of the physical address (PMA): if the address number corresponding to a minimum write unit in the write data is LMA = 1, then 1 mod 2 gives a remainder result of 1 for that write unit; if the address number corresponding to a minimum write unit in the write data is LMA = 4, then 4 mod 2 gives a remainder result of 0.
Or, the addressing granularity of the logical address (LMA) is 4K, N = 4, the addressing granularity of the physical address (PMA) is 16K, and the size of the write data is smaller than the addressing granularity 16K of the physical address (PMA): if the address number corresponding to a minimum write unit in the write data is LMA = 1, then 1 mod 4 gives a remainder result of 1; if the address number is LMA = 6, then 6 mod 4 gives a remainder result of 2; if the address number is LMA = 2, then 2 mod 4 gives a remainder result of 2.
Step S4432: and dividing each minimum writing unit in the writing data into a data cache set one by one according to the remainder result.
Specifically, the remainder result of the address number of each minimum write unit divided into a data cache set is equal to that set's number, and the number of data cache sets is N = the addressing granularity of the physical address / the addressing granularity of the logical address, where N is a positive integer and N is greater than or equal to 2.
For example: n=2, the data cache set includes 2 sets: set 0 and set 1, if the remainder result is 1, dividing the minimum writing unit into set 1 of the data cache set; if the remainder result is 0, the minimum writing unit is divided into a set 0 of the data cache set.
Alternatively, n=3, the data cache set includes 3 sets: set 0, set 1 and set 2, if the remainder result is 2, dividing the minimum writing unit into set 2 of the data cache set; if the remainder result is 1, dividing the minimum writing unit into a set 1 of the data cache set; if the remainder result is 0, the minimum writing unit is divided into a set 0 of the data cache set.
For example: n=4, the data cache set includes 4 sets: set 0, set 1, set 2 and set 3, if the remainder result is 3, dividing the minimum writing unit into set 3 of the data cache set; if the remainder result is 2, dividing the minimum writing unit into a set 2 of the data cache set; if the remainder result is 1, dividing the minimum writing unit into a set 1 of the data cache set; if the remainder result is 0, the minimum writing unit is divided into a set 0 of the data cache set.
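The set assignment of steps S4431 and S4432 amounts to indexing by LMA modulo N. Below is a minimal C sketch, assuming a per-set linked list and a helper lma_list_push, neither of which is specified by the patent.

```c
#include <stdint.h>

#define N 4u   /* number of data cache sets = PMA granularity / LMA granularity */

struct lma_list;                                        /* e.g. a linked list */
void lma_list_push(struct lma_list *set, uint64_t lma); /* assumed helper     */

/* The data cache module to be combined: one set per remainder value 0..N-1. */
struct combine_cache {
    struct lma_list *set[N];
};

/* Steps S4431/S4432: a minimum write unit whose logical address number is
 * `lma` is placed into the set whose number equals lma % N. */
void combine_cache_insert(struct combine_cache *cc, uint64_t lma)
{
    uint32_t set_no = (uint32_t)(lma % N);   /* remainder result in [0, N-1] */
    lma_list_push(cc->set[set_no], lma);
}
```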
Referring to fig. 10 again, fig. 10 is a schematic diagram of a data cache set according to an embodiment of the present application;
in the embodiment of the present application, the data cache set may be a linked list or any other distinguishable software data structure.
The process of dividing the smallest write unit in the write data into the data cache set is described below taking n=4 as an example.
If N = 4, the data granularity of the minimum write unit is 4K, the addressing granularity of the logical address (LMA) is 4K, the addressing granularity of the physical address (PMA) is 16K, and the size of the write data is smaller than the addressing granularity 16K of the physical address (PMA): if the address number corresponding to a minimum write unit in the write data is LMA = 1, then 1 mod 4 gives a remainder result of 1, and the minimum write unit is divided into set 1 of the data cache sets, i.e. the address number LMA1 corresponding to the minimum write unit is divided into set 1 of the data cache sets;
if the address number corresponding to a minimum write unit in the write data is LMA = 2, then 2 mod 4 gives a remainder result of 2, and the minimum write unit is divided into set 2 of the data cache sets, i.e. the address number LMA2 corresponding to the minimum write unit is divided into set 2;
if the address number corresponding to a minimum write unit in the write data is LMA = 3, then 3 mod 4 gives a remainder result of 3, and the minimum write unit is divided into set 3 of the data cache sets, i.e. the address number LMA3 corresponding to the minimum write unit is divided into set 3;
if the address number corresponding to a minimum write unit in the write data is LMA = 4, then 4 mod 4 gives a remainder result of 0, and the minimum write unit is divided into set 0 of the data cache sets, i.e. the address number LMA4 corresponding to the minimum write unit is divided into set 0;
the process of dividing the minimum writing units corresponding to different address numbers into the data cache sets is the same as the above method, and will not be described here again.
As shown in fig. 10, when n=4, the data cache set includes 4 sets: set 0, set 1, set 2 and set 3, LMA4 and LMA8 belong together to set 0, LMA1 and LMA5 belong together to set 1, LMA2 and LMA6 belong together to set 2, LMA3 and LMA7 belong together to set 3.
Referring to fig. 11, fig. 11 is a schematic diagram of writing data according to an embodiment of the present application;
As shown in fig. 11, when the size of the write data is greater than N × the addressing granularity of the logical address, and the address number range corresponding to all the minimum write units of the write data is LMA_X to LMA_Y (X and Y are positive integers and X < Y), i.e. logical address X to logical address Y, the write data may be divided into two parts: one part consisting of an integer multiple of N consecutive minimum write units, and the other part consisting of the remaining data.
For example: the data granularity of the minimum write unit is 4K, the addressing granularity of the logical address (LMA) is 4K, N = 2, the addressing granularity of the physical address (PMA) is 8K, the data granularity of the data cache module to be combined is 4K, and the data granularity of the data cache module to be flushed is 8K. If the address number range corresponding to all the minimum write units of the write data is LMA110 to LMA120, then within the minimum write units with address numbers LMA110 to LMA119, every two minimum write units form 8K of data, which matches the data granularity of the data cache module to be flushed; therefore, the minimum write units with address numbers LMA110 to LMA119 can be divided into the data cache module to be flushed, and the remaining minimum write unit with address number LMA120 is divided into the data cache module to be combined. Because the address number LMA = 120 taken modulo N = 2 gives 0, this remaining minimum write unit is divided into set 0 of the data cache module to be combined.
Referring to fig. 12, fig. 12 is a schematic diagram of dividing write data according to an embodiment of the present application;
As shown in fig. 12, the write data is divided in the cache module into data to be flushed and data to be combined. The data to be flushed and the data to be combined are identified by node data structures, which record the position and data length of each data area into which the write data is divided. The nodes are organized in the form of a linked list, and each node points, through a pointer, to the DRAM address of the write data assigned to its data area.
Similarly, dividing each minimum write unit in the write data one by one into the data cache sets of the data cache module to be combined is also realized through the node data structures.
It can be understood that dividing the write data into the data cache module to be flushed and the data cache module to be combined in the cache module, and dividing each minimum write unit in the write data one by one into the data cache sets of the data cache module to be combined, does not involve any memory copy: only the mapping relationship of the write data is changed, and flushing or combining data is realized by looking up the information recorded in the nodes.
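Below is a minimal sketch of such a node descriptor, assuming a particular field layout that the patent does not specify; the point it illustrates is that re-dividing data only relinks nodes and never copies the payload in DRAM.

```c
#include <stdint.h>
#include <stddef.h>

/* Assumed node layout: the patent only states that a node records the position
 * and length of a data area and points to the write data already in DRAM, and
 * that nodes are linked into lists. */
struct cache_node {
    uint64_t           start_lma; /* first logical address covered by this area  */
    uint32_t           length;    /* length in minimum write units               */
    void              *dram_addr; /* pointer to the write data resident in DRAM  */
    struct cache_node *next;      /* next node in the owning cache module's list */
};

/* Re-dividing data between the data cache module to be flushed and a data
 * cache set only relinks nodes; the payload behind dram_addr is never copied. */
static void node_move(struct cache_node **from, struct cache_node **to)
{
    struct cache_node *n = *from;
    if (n == NULL)
        return;
    *from   = n->next;   /* detach from the source list head    */
    n->next = *to;       /* push onto the destination list head */
    *to     = n;
}
```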
Step S405: flushing the data in the data cache module to be flushed to the flash memory medium, or combining N minimum write units in the data cache module to be combined into a flush unit, and flushing the flush unit to the data cache module to be flushed or the flash memory medium.
Specifically, the size of the flush unit is equal to the addressing granularity of the physical address, and the data granularity of the data cache module to be flushed is also equal to the addressing granularity of the physical address. The data in the data cache module to be flushed is flushed to the flash memory medium, or N minimum write units belonging to different data cache sets in the data cache module to be combined are combined into one flush unit, and the flush unit is flushed to the data cache module to be flushed or the flash memory medium.
For example: the data granularity of the minimum write unit is 4K, the addressing granularity of the logical address (LMA) is 4K, N = 2, the addressing granularity of the physical address (PMA) is 8K, the data granularity of the data cache module to be combined is 4K, the data granularity of the data cache module to be flushed is 8K, and the data cache module to be combined includes set 0 and set 1. The 8K data in the data cache module to be flushed is flushed to the flash memory medium, or one minimum write unit is taken from each of set 0 and set 1 in the data cache module to be combined to form a flush unit, the data granularity of the flush unit being 8K, and the flush unit is flushed to the data cache module to be flushed or the flash memory medium.
Or: the data granularity of the minimum write unit is 4K, the addressing granularity of the logical address (LMA) is 4K, N = 4, the addressing granularity of the physical address (PMA) is 16K, the data granularity of the data cache module to be combined is 4K, the data granularity of the data cache module to be flushed is 16K, and the data cache module to be combined includes set 0, set 1, set 2 and set 3. The 16K data in the data cache module to be flushed is flushed to the flash memory medium, or one minimum write unit is taken from each of set 0, set 1, set 2 and set 3 in the data cache module to be combined to form a flush unit, the data granularity of the flush unit being 16K, and the flush unit is flushed to the data cache module to be flushed or the flash memory medium.
It is understood that after the flush unit is flushed to the data cache module to be flushed, it is then flushed from the data cache module to be flushed to the flash memory medium.
In this embodiment of the present application, combining N minimum write units in the data cache module to be combined into a flush unit, and flushing the flush unit to the data cache module to be flushed or the flash memory medium, further includes:
dividing the flush unit into the data cache module to be flushed, and having the data cache module to be flushed assemble flush units into the minimum data amount corresponding to the flash memory medium, so as to flush the flush units to the flash memory medium.
As shown in fig. 7, in the data cache module to be combined, N minimum write units are combined into one flush unit, the flush unit is divided into the data cache module to be flushed, and the data cache module to be flushed flushes the flush unit to the flash memory medium.
Specifically, the minimum data amount corresponding to the flash memory medium is related to the type of the flash memory medium. For example, if the flash memory medium is TLC flash memory, the minimum data amount corresponding to the flash memory medium is 3 data pages; in this case, multiple flush units need to be assembled into the minimum data amount so that data of that size can be flushed to the flash memory medium.
It will be appreciated that some flash memories, for example TLC flash, generally require multiple data pages to be written together; therefore, when data is to be flushed to the flash memory medium, the data cache module to be flushed needs to accumulate the amount of data required by the flash memory medium, that is, the amount of data corresponding to multiple data pages.
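The combine-and-flush path of step S405, together with the page-assembly note above, might look like the following sketch; the helpers set_pop, set_empty and flush_cache_push_unit are assumed for illustration and are not named in the patent.

```c
#include <stdint.h>
#include <stdbool.h>

#define N 2u   /* assumed: flush unit = N x 4K minimum write units */

struct cache_node;                                        /* node as sketched above   */
bool  set_empty(uint32_t set_no);                         /* assumed helper           */
struct cache_node *set_pop(uint32_t set_no);              /* take one unit from set i */
void  flush_cache_push_unit(struct cache_node *parts[N]); /* hand over a flush unit   */

/* Step S405, combine path: take one minimum write unit from each of the N
 * sets; the unit taken from set i fills slot i of the flush unit, so the slot
 * again equals LMA % N.  The flush unit is handed to the data cache module to
 * be flushed, which accumulates the minimum data amount required by the flash
 * medium (e.g. several data pages for TLC) before programming. */
bool build_flush_unit(void)
{
    struct cache_node *parts[N];

    for (uint32_t i = 0; i < N; i++) {
        if (set_empty(i))
            return false;          /* a full N x 4K flush unit cannot be formed yet */
        parts[i] = set_pop(i);
    }
    flush_cache_push_unit(parts);
    return true;
}
```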
In an embodiment of the present application, the method further includes:
updating the mapping table from logical addresses to physical addresses when the write data of the host is written into the flash memory medium;
wherein, N logical addresses in the mapping table from logical address to physical address correspond to one physical address.
It can be understood that, since the addressing granularity of the physical address=n×the addressing granularity of the logical address, where N is a positive integer and N is not less than 2, N logical addresses in the mapping table of the logical address to the physical address correspond to one physical address.
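A sketch of this L2P update, under the same assumed flat-array representation used in the earlier sketches: after a flush unit is programmed, all N logical addresses it carries receive the same PMA number.

```c
#include <stdint.h>

#define N 2u   /* assumed ratio of PMA to LMA addressing granularity */

extern uint32_t l2p[];   /* L2P table: one 4-byte PMA number per LMA, as in Fig. 5/6 */

/* After a flush unit has been programmed to the flash medium at physical
 * mapping unit `pma`, all N logical addresses it carries point to that same
 * PMA; the slot is not stored because it is always lma % N. */
void l2p_update_after_flush(const uint64_t lma_of_slot[], uint32_t pma)
{
    for (uint32_t slot = 0; slot < N; slot++)
        l2p[lma_of_slot[slot]] = pma;   /* N LMAs now share one PMA entry */
}
```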
Referring to fig. 13 again, fig. 13 is a schematic flow chart of reading data according to an embodiment of the present application;
as shown in fig. 13, the process of reading data includes:
step S1301: acquiring a read request sent by a host;
Specifically, the read request includes the address number corresponding to a logical block address (Logical Block Address, LBA). If the capacity of the flash memory device is N × 4K (N is a positive integer and N is greater than or equal to 2), then if the sector format (i.e., the minimum write unit) is 4K, the address numbers of the Logical Block Addresses (LBA) are 1 to N; if the sector format (i.e., the minimum write unit) is 512B, the address numbers of the Logical Block Addresses (LBA) are 0 to 8*N.
Step S1302: determining the address number of the logical address corresponding to the logical block address according to the address number corresponding to the logical block address;
Specifically, when the addressing granularity of the logical address (LMA) is 4K: if the addressing granularity of the Logical Block Address (LBA) is 4K, the address number of the logical address (LMA) = the address number corresponding to the Logical Block Address (LBA); if the addressing granularity of the Logical Block Address (LBA) is 512B, the address number of the logical address (LMA) = the address number corresponding to the Logical Block Address (LBA) / 8.
For example: when the addressing granularity of the logical address (LMA) is 4K, the addressing granularity of the Logical Block Address (LBA) is 4K, and the address number corresponding to the Logical Block Address (LBA) is 11, the address number of the logical address (LMA) =the address number corresponding to the Logical Block Address (LBA) =11.
Step S1303: inquiring a mapping table from the logical address to the physical address according to the address number of the logical address, and determining the physical address corresponding to the logical address;
specifically, the mapping table from the logical address to the physical address includes a mapping relationship between an address number of the logical address and an address number of the corresponding physical address.
For example: the mapping table from the logical address to the physical address is queried to determine the address number of the physical address (PMA) corresponding to the logical address (LMA) with address number 11, where the addressing granularity of the physical address (PMA) = N × the addressing granularity of the logical address (LMA), N is a positive integer and N is greater than or equal to 2. It will be appreciated that the address numbers of N logical addresses (LMA) correspond to the address number of one physical address (PMA).
Step S1304: performing redundancy operation on the address number of the logical address, and determining a redundancy result corresponding to the address number of the logical address;
Specifically, N redundancy operations are performed on address numbers of the logical addresses (LMA), and redundancy results corresponding to the address numbers of the logical addresses are determined, wherein N types of redundancy results corresponding to the address numbers of the logical addresses (LMA) are provided, n=addressing granularity of physical addresses (PMA)/addressing granularity of the logical addresses (LMA), N is a positive integer, and N is greater than or equal to 2.
For example: the addressing granularity of the logical address (LMA) is 4K, n=2, the addressing granularity of the physical address (PMA) is 8K, and the remainder has 2 results: 0 and 1. If the address number of the logical address (LMA) is lma=11, performing a remainder taking operation on 2 by using 11, and obtaining a remainder taking result of 1; if the address number of the logical address (LMA) is lma=4, a remainder operation is performed with 4 to 2, resulting in a remainder result of 0.
Alternatively, the addressing granularity of the logical address (LMA) is 4K, n=3, the addressing granularity of the physical address (PMA) is 12K, and the remainder results are 3: 0. 1 and 2. If the address number of the logical address (LMA) is lma=10, performing a remainder taking operation on 3 by using 10, and obtaining a remainder taking result of 1; if the address number of the logic address (LMA) is LMA=6, performing a remainder taking operation on 3 by 6 to obtain a remainder taking result of 0; if the address number of the logical address (LMA) is lma=2, a remainder operation is performed with 2 to 3, resulting in a remainder result of 2.
Alternatively, the addressing granularity of the logical address (LMA) is 4K, N=4, the addressing granularity of the physical address (PMA) is 16K, and there are 4 possible remainder results: 0, 1, 2 and 3. If the address number of the logical address is LMA=11, taking 11 modulo 4 gives a remainder result of 3; if LMA=10, taking 10 modulo 4 gives 2; if LMA=9, taking 9 modulo 4 gives 1; if LMA=8, taking 8 modulo 4 gives 0.
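A small sketch of this remainder step, for the three values of N used above (illustrative code, not taken from the patent):

```python
# Illustrative sketch: the remainder of the LMA address number modulo N selects
# which of the N 4K slots inside a physical address (PMA) the data occupies.
def slot_index(lma_number: int, n: int) -> int:
    return lma_number % n

# N = 2 (8K physical addresses): LMA 11 -> slot 1, LMA 4 -> slot 0
assert slot_index(11, 2) == 1 and slot_index(4, 2) == 0
# N = 3 (12K physical addresses): LMA 10 -> 1, LMA 6 -> 0, LMA 2 -> 2
assert slot_index(10, 3) == 1 and slot_index(6, 3) == 0 and slot_index(2, 3) == 2
# N = 4 (16K physical addresses): LMA 11 -> 3, LMA 10 -> 2, LMA 9 -> 1, LMA 8 -> 0
assert [slot_index(x, 4) for x in (11, 10, 9, 8)] == [3, 2, 1, 0]
```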
Step S1305: and determining the position of the data corresponding to the logical address in the physical address according to the remainder result corresponding to the address number of the logical address so as to read the data corresponding to the logical address.
Specifically, since the addressing granularity of the physical address = N × the addressing granularity of the logical address, where N is a positive integer and N is greater than or equal to 2, one physical address corresponds to N logical addresses. The order of the position of the data corresponding to a logical address within the physical address is the same as the order of the remainder result corresponding to that logical address's address number among all possible remainder results; for example, each position of the physical address holds one 4K unit of data.
For example: when the addressing granularity of the logical address (LMA) is 4K, N=2, the addressing granularity of the physical address (PMA) is 8K, and the 2 possible remainder results are 0 and 1, the remainder result corresponding to address number LMA=4 is 0, which is first among all remainder results, so the data corresponding to LMA=4 is at the first 4K position in the physical address; the remainder result corresponding to LMA=11 is 1, which is second among all remainder results, so the data corresponding to LMA=11 is at the second position in the physical address.
Alternatively, when the addressing granularity of the logical address (LMA) is 4K, N=3, the addressing granularity of the physical address (PMA) is 12K, and the 3 possible remainder results are 0, 1 and 2: the remainder result corresponding to address number LMA=6 is 0, which is first among all remainder results, so the data corresponding to LMA=6 is at the first position in the physical address; the remainder result corresponding to LMA=10 is 1, which is second, so the data corresponding to LMA=10 is at the second position in the physical address; and the remainder result corresponding to LMA=2 is 2, which is third, so the data corresponding to LMA=2 is at the third position in the physical address.
Alternatively, when the addressing granularity of the logical address (LMA) is 4K, N=4, the addressing granularity of the physical address (PMA) is 16K, and the 4 possible remainder results are 0, 1, 2 and 3: the remainder result corresponding to address number LMA=8 is 0, which is first among all remainder results, so the data corresponding to LMA=8 is at the first position in the physical address; the remainder result corresponding to LMA=9 is 1, which is second, so the data corresponding to LMA=9 is at the second position; the remainder result corresponding to LMA=10 is 2, which is third, so the data corresponding to LMA=10 is at the third position; and the remainder result corresponding to LMA=11 is 3, which is fourth, so the data corresponding to LMA=11 is at the fourth position in the physical address.
Further, after determining the location of the data corresponding to the logical address in the physical address, the data in the location of the physical address is read.
In the embodiment of the application, the mapping table from the logical address to the physical address is queried according to the address number of the logical address, the physical address corresponding to the logical address is determined, and the position of the data corresponding to the logical address in the physical address is determined according to the remainder result corresponding to the address number of the logical address.
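Putting steps S1302–S1305 together, the read-side address resolution can be sketched as follows; this is an illustrative model only (the L2P mapping table is represented as a plain dictionary, and all names and the example table contents are assumptions rather than the patent's implementation):

```python
# Illustrative sketch of the read path: resolve an LMA number to (PMA number, byte offset).
LMA_GRANULARITY = 4096  # 4K

def resolve_read(l2p: dict, lma_number: int, n: int):
    """Return (PMA address number, byte offset inside that PMA) for a logical address."""
    pma_number = l2p[lma_number]       # step S1303: query the L2P mapping table
    slot = lma_number % n              # step S1304: remainder operation on the LMA number
    offset = slot * LMA_GRANULARITY    # step S1305: position of the data inside the PMA
    return pma_number, offset

# Example with N = 2 (8K physical addresses): LMA 4 and LMA 11 happen to share PMA 5.
l2p_table = {4: 5, 11: 5}
assert resolve_read(l2p_table, 4, 2) == (5, 0)       # first 4K position
assert resolve_read(l2p_table, 11, 2) == (5, 4096)   # second 4K position
```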
Referring to fig. 14 again, fig. 14 is a schematic flow chart of garbage recycling according to an embodiment of the present application;
in the embodiment of the application, since flash memory cannot be overwritten in place, when a user updates data the firmware can only write the new data to a new flash memory space, so the data in the original flash memory space becomes outdated and forms garbage; therefore, garbage collection must perform block erasure so that the invalid space can be reused. Garbage collection is the process of reading out and rewriting the effective data on a flash memory block, and then erasing the flash memory block to obtain a new available flash memory block.
As shown in fig. 14, the garbage collection process includes:
step S141: establishing a mapping table from a physical address to a logical address;
specifically, a mapping table from physical address to logical address (Physical to Logical, P2L) is established, in which one physical address (PMA) corresponds to N logical addresses (LMA). The mapping table from physical address to logical address includes the mapping relationship between the address number of the physical address (PMA) and the address numbers of the corresponding logical addresses (LMA), and it is updated together with the mapping table from logical address to physical address at the time of data writing.
In the embodiment of the present application, the mapping table from physical address to logical address is not cached in the DRAM in full, and after one flash block is fully written, the P2L table of the flash block is recorded in the "metadata area" of the flash medium. The "metadata area" may be a space that occupies a part of the flash memory blocks along with the data, or may be a physical area that is specially divided from an area of other flash memory media of the flash memory device.
Referring to fig. 15 again, fig. 15 is a schematic diagram of a mapping table from physical address to logical address according to an embodiment of the present application;
in the embodiment of the present application, when the addressing granularity of the logical addresses (LMA) is 4K, n=2, and the addressing granularity of the physical addresses (PMA) is 8K, one physical address (PMA) in the mapping table of physical addresses to logical addresses corresponds to 2 logical addresses (LMA), that is, the address number of one physical address (PMA) corresponds to the address number of 2 logical addresses (LMA).
As shown in fig. 15, physical address PMA0 with address number 0 corresponds to logical address LMA0 with address number 0 and logical address LMA3 with address number 3, physical address PMA1 with address number 1 corresponds to logical address LMA2 with address number 2 and logical address LMA5 with address number 5, and physical address PMA2 with address number 2 corresponds to logical address LMA6 with address number 6 and logical address LMA9 with address number 9, wherein the data corresponding to LMA0, LMA2, LMA6 are located at the first positions of PMA0, PMA1, PMA2, respectively, and the data corresponding to LMA3, LMA5, LMA9 are located at the second positions of PMA0, PMA1, PMA2, respectively.
Referring to fig. 16 again, fig. 16 is a schematic diagram of a mapping table from physical address to logical address according to another embodiment of the present application;
in the embodiment of the present application, when the addressing granularity of the logical addresses (LMA) is 4K, N=3, and the addressing granularity of the physical addresses (PMA) is 12K, one physical address (PMA) in the mapping table of the physical addresses to the logical addresses (Physical to Logical, P2L) corresponds to 3 logical addresses (LMA), that is, the address number of one physical address (PMA) corresponds to the address numbers of 3 logical addresses (LMA).
As shown in fig. 16, physical address PMA0 with address number 0 corresponds to logical address LMA0 with address number 0, logical address LMA4 with address number 4 and logical address LMA2 with address number 2; physical address PMA1 with address number 1 corresponds to logical address LMA3 with address number 3, logical address LMA7 with address number 7 and logical address LMA5 with address number 5; and physical address PMA2 with address number 2 corresponds to logical address LMA6 with address number 6, logical address LMA10 with address number 10 and logical address LMA11 with address number 11.
Wherein, the data corresponding to LMA0, LMA3, LMA6 are respectively located at the first positions of PMA0, PMA1, PMA2, the data corresponding to LMA4, LMA7, LMA10 are respectively located at the second positions of PMA0, PMA1, PMA2, and the data corresponding to LMA2, LMA5, LMA11 are respectively located at the third positions of PMA0, PMA1, PMA 2.
Referring to fig. 17 again, fig. 17 is a schematic diagram of a mapping table from physical address to logical address according to another embodiment of the present application;
in the embodiment of the present application, when the addressing granularity of the logical addresses (LMA) is 4K, N=4, and the addressing granularity of the physical addresses (PMA) is 16K, one physical address (PMA) in the mapping table of the physical addresses to the logical addresses (Physical to Logical, P2L) corresponds to 4 logical addresses (LMA), that is, the address number of one physical address (PMA) corresponds to the address numbers of 4 logical addresses (LMA).
As shown in fig. 17, physical address PMA0 with address number 0 corresponds to logical address LMA0 with address number 0, logical address LMA1 with address number 1, logical address LMA2 with address number 2 and logical address LMA3 with address number 3, physical address PMA1 with address number 1 corresponds to logical address LMA4 with address number 4, logical address LMA5 with address number 5, logical address LMA6 with address number 6 and logical address LMA7 with address number 7, physical address PMA2 with address number 2 corresponds to logical address LMA8 with address number 8, logical address LMA9 with address number 9, logical address LMA10 with address number 10 and logical address LMA11 with address number 11.
Wherein, the data corresponding to LMA0, LMA4, LMA8 are respectively located at the first positions of PMA0, PMA1, PMA2, the data corresponding to LMA1, LMA5, LMA9 are respectively located at the second positions of PMA0, PMA1, PMA2, the data corresponding to LMA2, LMA6, LMA10 are respectively located at the third positions of PMA0, PMA1, PMA2, and the data corresponding to LMA3, LMA7, LMA11 are respectively located at the fourth positions of PMA0, PMA1, PMA 2.
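The P2L examples of figs. 15 to 17 can be written down as simple tables keyed by the PMA address number, with each entry listing its N LMA address numbers in slot order (remainder 0 to N-1). This is only a sketch of the data structure, not the on-flash layout used by the patent:

```python
# Illustrative sketch: P2L tables keyed by PMA number; each entry holds the N LMA
# numbers stored in that physical address, ordered by slot (LMA number mod N).
p2l_fig15 = {0: [0, 3], 1: [2, 5], 2: [6, 9]}                      # N = 2, 8K physical addresses
p2l_fig16 = {0: [0, 4, 2], 1: [3, 7, 5], 2: [6, 10, 11]}           # N = 3, 12K physical addresses
p2l_fig17 = {0: [0, 1, 2, 3], 1: [4, 5, 6, 7], 2: [8, 9, 10, 11]}  # N = 4, 16K physical addresses

# Sanity check: within each entry, slot i holds an LMA whose remainder modulo N is i.
for table, n in ((p2l_fig15, 2), (p2l_fig16, 3), (p2l_fig17, 4)):
    for lmas in table.values():
        assert [lma % n for lma in lmas] == list(range(n))
```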
Step S142: comparing the mapping table from the logical address to the physical address with the mapping table from the physical address to the logical address, and determining that the data corresponding to the logical address is effective data or invalid data;
it will be appreciated that at the time of garbage collection there may be a scenario in which only part of the 4K data units within one physical address (PMA) are valid. In that case it is necessary to determine which 4K data units in the physical address (PMA) are still valid, i.e. have not been combined with other data and written to another location, by comparing the logical-address-to-physical-address mapping table with the physical-address-to-logical-address mapping table.
Specifically, referring to fig. 18 again, fig. 18 is a schematic diagram of the refinement flow of step S142 in fig. 14;
as shown in fig. 18, step S142: comparing the mapping table from the logical address to the physical address with the mapping table from the physical address to the logical address, determining whether the data corresponding to the logical address is valid data or invalid data, including:
Step S1421: traversing a mapping table from a physical address to a logical address to acquire a first physical address;
specifically, the first physical address corresponds to N logical addresses, where N satisfies: addressing granularity of the physical address = N × the addressing granularity of the logical address, where N is a positive integer and N is greater than or equal to 2.
Step S1422: inquiring each first logic address corresponding to the first physical address according to the mapping table from the physical address to the logic address;
specifically, if the addressing granularity of the logical address (LMA) is 4K, n=2, and the addressing granularity of the physical address (PMA) is 8K, the first physical address PMA0 corresponds to 2 first logical addresses (LMA), and each first logical address corresponding to the first physical address is queried according to the mapping table from the physical address to the logical address.
Or if the addressing granularity of the logical address (LMA) is 4K, n=3, and the addressing granularity of the physical address (PMA) is 12K, the first physical address PMA0 corresponds to 3 first logical addresses (LMA), and each first logical address corresponding to the first physical address is queried according to the mapping table from the physical address to the logical address.
Or if the addressing granularity of the logical address (LMA) is 4K, n=4, and the addressing granularity of the physical address (PMA) is 16K, the first physical address PMA0 corresponds to 4 first logical addresses (LMA), and each first logical address corresponding to the first physical address is queried according to the mapping table from the physical address to the logical address.
Step S1423: inquiring a mapping table from the logical address to the physical address according to each first logical address to obtain a second physical address corresponding to each first logical address;
for example, the addressing granularity of the logical address (LMA) is 4K, n=2, the addressing granularity of the physical address (PMA) is 8K, the first physical address PMA0 corresponds to 2 first logical addresses LMA0 and LMA3, and the mapping table of the logical address to the physical address is queried to obtain a second physical address PMA0 corresponding to LMA0, and a second physical address PMA2 corresponding to LMA 3.
Alternatively, the addressing granularity of the logical address (LMA) is 4K, n=3, the addressing granularity of the physical address (PMA) is 12K, the first physical address PMA0 corresponds to 3 first logical addresses LMA0, LMA4 and LMA2, and the mapping table from the logical address to the physical address is queried to obtain a second physical address PMA0 corresponding to LMA0, a second physical address PMA0 corresponding to LMA4, and a second physical address PMA0 corresponding to LMA 2.
Alternatively, the addressing granularity of the logical address (LMA) is 4K, n=4, the addressing granularity of the physical address (PMA) is 16K, the first physical address PMA0 corresponds to 4 first logical addresses LMA0, LMA1, LMA2 and LMA3, and the mapping table from the logical address to the physical address is queried to obtain a second physical address PMA0 corresponding to LMA0, a second physical address PMA1 corresponding to LMA1, a second physical address PMA2 corresponding to LMA2, and a second physical address PMA3 corresponding to LMA 3.
Step S1424: if the second physical address is the same as the first physical address, determining that the data corresponding to the first logical address is effective data;
for example: the addressing granularity of the logical address (LMA) is 4K, n=2, the addressing granularity of the physical address (PMA) is 8K, the first physical address PMA0 corresponds to 2 first logical addresses LMA0 and LMA3, the second physical address corresponding to LMA0 is PMA0, and the second physical address PMA0 is the same as the first physical address PMA0, and then the data corresponding to the first logical address LMA0 is determined to be valid data.
Alternatively, the addressing granularity of the logical address (LMA) is 4K, n=3, the addressing granularity of the physical address (PMA) is 12K, the first physical address PMA0 corresponds to 3 first logical addresses LMA0, LMA4 and LMA2, the second physical address corresponding to LMA0 is PMA0, the second physical address corresponding to LMA4 is PMA0, and the second physical address corresponding to LMA2 is PMA0, and the second physical address PMA0 corresponding to each logical address is the same as the first physical address PMA0, and then it is determined that the data corresponding to the first logical addresses LMA0, LMA4 and LMA2 are valid data.
Alternatively, the addressing granularity of the logical address (LMA) is 4K, n=4, the addressing granularity of the physical address (PMA) is 16K, the first physical address PMA0 corresponds to 4 first logical addresses LMA0, LMA1, LMA2 and LMA3, the second physical address corresponding to LMA0 is PMA0, and the second physical address PMA0 is the same as the first physical address PMA0, and then the data corresponding to the first logical address LMA0 is determined to be valid data.
Step S1425: if the second physical address is different from the first physical address, determining that the data corresponding to the logical address is invalid data.
For example: the addressing granularity of the logical address (LMA) is 4K, n=2, the addressing granularity of the physical address (PMA) is 8K, the first physical address PMA0 corresponds to 2 first logical addresses LMA0 and LMA3, the second physical address corresponding to LMA3 is PMA2, and if the second physical address PMA2 is different from the first physical address PMA0, the data corresponding to the first logical address LMA3 is determined to be invalid data.
Alternatively, the addressing granularity of the logical address (LMA) is 4K, n=4, the addressing granularity of the physical address (PMA) is 16K, the first physical address PMA0 corresponds to 4 first logical addresses LMA0, LMA1, LMA2 and LMA3, the second physical address corresponding to LMA1 is PMA1, the second physical address corresponding to LMA2 is PMA2, the second physical address corresponding to LMA3 is PMA3, and the second physical addresses PMA1, PMA2 and PMA3 are all different from the first physical address PMA0, and it is determined that the data corresponding to the first logical addresses LMA1, LMA2 and LMA3 are all invalid data.
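A hedged sketch of the validity check in steps S1421–S1425 (the mapping tables are modelled as dictionaries; the function name and table shapes are assumptions made for illustration):

```python
# Illustrative sketch: data held for a logical address on the flash block being collected
# is valid only if the L2P table still points that LMA back to the same physical address.
def is_valid(l2p: dict, first_pma: int, first_lma: int) -> bool:
    second_pma = l2p.get(first_lma)   # step S1423: current PMA recorded for this LMA
    return second_pma == first_pma    # steps S1424/S1425: same -> valid, different -> invalid

# N = 2 example in the style of fig. 20: PMA0 holds LMA3 and LMA0; the L2P table still
# maps LMA3 to PMA0 (valid), but LMA0 has been rewritten to PMA5 (invalid on this block).
l2p = {3: 0, 0: 5}
assert is_valid(l2p, first_pma=0, first_lma=3) is True
assert is_valid(l2p, first_pma=0, first_lma=0) is False
```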
Referring to fig. 19 and fig. 20 together, fig. 19 is a detailed flowchart of searching for valid data according to an embodiment of the present application; FIG. 20 is a schematic diagram of a P2L table and an L2P table according to an embodiment of the present disclosure;
Specifically, fig. 19 is a detailed flowchart of searching for valid data when the addressing granularity of the logical address (LMA) is 4K, N=2, and the addressing granularity of the physical address (PMA) is 8K; X, Y, M, N in the figure are placeholders for address numbers, each denoting a certain address number.
As shown in fig. 19, the detailed flow of searching for valid data includes:
step S1901: traversing a mapping table from a physical address to a logical address to obtain a first physical address PMA_X, namely the first physical address X;
specifically, as shown in fig. 20, the first physical address PMA_X is PMA0.
Step S1902: inquiring first logical addresses LMA_X and LMA_Y corresponding to the first physical address, namely the first logical address X and the first logical address Y according to a mapping table from the physical address to the logical address;
specifically, as shown in the P2L table in fig. 20, the first logical address LMA_X corresponding to the first physical address PMA0 is LMA3, and LMA_Y is LMA0.
Step S1903: inquiring the mapping table from the logical address to the physical address according to LMA_X and LMA_Y to obtain second physical addresses PMA_M and PMA_N corresponding to LMA_X and LMA_Y respectively;
specifically, according to the first logical address X and the first logical address Y, the L2P table is queried to obtain a second physical address M (PMA_M) corresponding to the first logical address X (LMA_X) and a second physical address N (PMA_N) corresponding to the first logical address Y (LMA_Y).
Specifically, as shown in the L2P table in fig. 20, the second physical address PMA_M corresponding to LMA3 is PMA0, and the second physical address PMA_N corresponding to LMA0 is PMA5.
Step S1904: judging whether PMA_M is identical to PMA_X;
specifically, it is determined whether the second physical address M is identical to the first physical address X.
If yes, go to step S1905: determining the data corresponding to the LMA_X as effective data;
if not, the process advances to step S1909: determining the data corresponding to the LMA_X as invalid data;
specifically, the second physical address PMA_M corresponding to LMA3 is PMA0 and the first physical address PMA_X is PMA0; since the two are the same, the process advances to step S1905.
Step S1905: determining the data corresponding to the LMA_X as effective data;
specifically, the data corresponding to the LMA3 is determined to be valid data.
Step S1906: judging whether PMA_N is identical to PMA_X;
specifically, it is determined whether the second physical address N (PMA_N) is the same as the first physical address PMA_X.
If yes, go to step S1907: determining the data corresponding to the LMA_Y as effective data;
if not, go to step S1910: determining the data corresponding to the LMA_Y as invalid data;
specifically, the second physical address PMA_N corresponding to LMA0 is PMA5 and the first physical address PMA_X is PMA0; since the two are different, the process advances to step S1910, where it is determined that the data corresponding to LMA0 is invalid data.
It will be appreciated that when the first physical address is different from the second physical address, this indicates that the data corresponding to the logical address has been written to another location and is no longer on the current flash block.
Step S1907: determining the data corresponding to the LMA_Y as effective data;
step S1908: whether the traversal is finished;
specifically, it is determined whether each physical address and each corresponding logical address in the P2L table of the current flash block has been traversed.
If yes, ending the flow;
if not, the process advances to step S1901: traversing a mapping table from a physical address to a logical address to obtain a first physical address PMA_X;
specifically, the next physical address of the mapping table from the physical address to the logical address is obtained, and similar operations in the above steps are performed until the P2L table of the current flash block is traversed.
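Combining the traversal of fig. 19 with the comparison above, the whole valid-data scan over one flash block's P2L table can be sketched as follows (illustrative only; the firmware works on on-flash metadata rather than Python dictionaries):

```python
# Illustrative sketch: scan one flash block's P2L table and collect the LMAs whose data
# is still valid and therefore has to be relocated during garbage collection.
def scan_valid_lmas(p2l_block: dict, l2p: dict) -> list:
    valid = []
    for first_pma, first_lmas in p2l_block.items():   # step S1901: traverse the P2L table
        for first_lma in first_lmas:                   # step S1902: the N LMAs of this PMA
            if l2p.get(first_lma) == first_pma:        # steps S1903/S1904: compare with L2P
                valid.append(first_lma)                # valid data: keep for relocation
            # otherwise the data was rewritten elsewhere and is skipped (steps S1909/S1910)
    return valid

# Fig. 20 style example (N = 2): only the data of LMA3 in PMA0 is still valid.
assert scan_valid_lmas({0: [3, 0]}, {3: 0, 0: 5}) == [3]
```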
In this embodiment of the present application, the flash memory device further includes a garbage collection cache module, and the method further includes:
dividing the garbage collection cache module into a garbage collection data cache module and a data cache module matched with the mapping granularity according to the addressing granularity of the logical address and the addressing granularity of the physical address, wherein the garbage collection data cache module comprises N data cache sets, and each data cache set corresponds to one set number one by one.
Specifically, the data granularity of the garbage collection data cache module is the same as the addressing granularity of the logical address, the data granularity of the data cache module matching the mapping granularity is the same as the addressing granularity of the physical address, and the number N of data cache sets of the garbage collection data cache module satisfies: addressing granularity of the physical address = N × the addressing granularity of the logical address, where N is a positive integer and N is greater than or equal to 2.
Specifically, in the garbage collection process, the effective data that has been read out is first divided, as discrete data, into the garbage collection data buffer module. The garbage collection data buffer module comprises N data buffer sets, each data buffer set corresponding to one set number; the address number of the logical address (LMA) corresponding to the effective data is taken modulo N, and the remainder result is used as the set number to classify the effective data.
For example: when the addressing granularity of the logical address (LMA) is 4K, n=2, and the addressing granularity of the physical address (PMA) is 8K, the garbage collection data buffer module includes 2 data buffer sets, set 0 and set 1.
Alternatively, when the addressing granularity of the logical address (LMA) is 4K, n=4, and the addressing granularity of the physical address (PMA) is 16K, the garbage collection data buffer module includes 4 data buffer sets, set 0, set 1, set 2, and set 3.
In this embodiment of the present application, the method for dividing the data cache set in the garbage collection data cache module is similar to the method for dividing the data cache set in the data cache module to be combined in the foregoing, and will not be described herein again, and the schematic diagram of the data cache set in the garbage collection data cache module is also similar to the schematic diagram of the data cache set in the data cache module to be combined in fig. 10.
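As a minimal sketch of this classification step, valid data can be bucketed into the N garbage-collection data cache sets by its LMA address number modulo N (the set representation is an assumption for illustration):

```python
# Illustrative sketch: classify valid 4K units into N cache sets using LMA number mod N,
# so that each set number equals the remainder result of the LMAs it contains.
from collections import defaultdict

def classify_valid_data(valid_lmas: list, n: int) -> dict:
    sets = defaultdict(list)
    for lma in valid_lmas:
        sets[lma % n].append(lma)   # the remainder result is used directly as the set number
    return dict(sets)

# N = 2: even LMA numbers land in set 0, odd LMA numbers in set 1.
assert classify_valid_data([4, 1, 8, 3], 2) == {0: [4, 8], 1: [1, 3]}
```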
Step S143: if the data corresponding to the logical address is effective data, carrying out data movement on the effective data;
in the embodiment of the present application, the data movement of the valid data specifically includes:
acquiring a minimum writing unit from each of N data cache sets of the garbage collection data cache module, and combining the N minimum writing units into a lower brushing unit;
and dividing the brushing unit into the data caching module with matching mapping granularity, and brushing the brushing unit down to the flash memory medium by the data caching module with matching mapping granularity.
Specifically, the data granularity of the brushing unit is the same as the data granularity of the data cache module matching the mapping granularity.
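A minimal sketch of the combining step: one minimum writing unit is taken from each of the N sets, in set-number order, so that the combined unit matches the physical-address granularity (names and the queue representation are illustrative assumptions):

```python
# Illustrative sketch: pop one 4K unit from each of the N cache sets, in set-number order,
# producing one combined unit whose size equals N x 4K (the physical address granularity).
def combine_flush_unit(sets: dict, n: int):
    if not all(sets.get(i) for i in range(n)):   # need at least one unit in every set
        return None
    return [sets[i].pop(0) for i in range(n)]    # slot order = set number 0 .. N-1

# N = 2 example from fig. 22: LMA4 from set 0 and LMA1 from set 1 form one combined unit.
sets = {0: [4], 1: [1]}
assert combine_flush_unit(sets, 2) == [4, 1]
```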
Referring to fig. 21 again, fig. 21 is a schematic diagram of garbage recycling according to an embodiment of the present application;
As shown in fig. 21, the garbage collection module includes a read data module, a write data module, and a garbage collection buffer module, where the garbage collection buffer module includes a garbage collection data buffer module and a data buffer module matching a mapping granularity.
The garbage recycling process comprises the following steps:
(1) The data reading module reads data from the flash memory medium and stores the read data in the garbage collection cache module;
(2) The garbage collection data buffer module in the garbage collection buffer module obtains a minimum writing unit from each data buffer set in the N data buffer sets, and combines the N minimum writing units into a lower brushing unit;
(3) The garbage collection data caching module divides the lower brushing unit into the data caching module with matching mapping granularity, and the data caching module with matching mapping granularity forwards the lower brushing unit to the data writing module;
(4) The write data module brushes the lower brushing unit down to the flash memory medium.
Referring to fig. 22 again, fig. 22 is a schematic diagram of another data cache set according to an embodiment of the present disclosure;
assuming that the addressing granularity of the logical address (LMA) is 4K, N=2, and the addressing granularity of the physical address (PMA) is 8K, as shown in fig. 22, the garbage collection data buffer module includes 2 data buffer sets, set 0 and set 1. A minimum writing unit is acquired from each of set 0 and set 1, for example: the minimum writing unit corresponding to logical address LMA4 is acquired from set 0, and the minimum writing unit corresponding to logical address LMA1 is acquired from set 1. The 2 minimum writing units are combined into a lower brushing unit, the lower brushing unit is divided into the data caching module with matching mapping granularity, and the lower brushing unit is brushed down to the flash memory medium by the data caching module with matching mapping granularity while the L2P table is updated at the same time. The above steps are repeated until garbage collection of the entire flash memory block is finally completed.
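The relocation in fig. 22 also has to update both mapping tables once the combined unit is written to its new physical address. A hedged end-to-end sketch, with the tables again modelled as dictionaries and details such as wear levelling and ECC ignored; the new physical address number used below is a made-up example:

```python
# Illustrative sketch: record a relocated unit at a new PMA and update the P2L and L2P tables.
def relocate(flush_unit: list, new_pma: int, l2p: dict, p2l: dict) -> None:
    p2l[new_pma] = list(flush_unit)   # P2L: the new physical address now holds these N LMAs
    for lma in flush_unit:
        l2p[lma] = new_pma            # L2P: each LMA now points at the new physical address

# Fig. 22 style example: LMA4 and LMA1 are combined and rewritten to a fresh physical address 7.
l2p, p2l = {4: 0, 1: 2}, {}
relocate([4, 1], new_pma=7, l2p=l2p, p2l=p2l)
assert l2p == {4: 7, 1: 7} and p2l == {7: [4, 1]}
```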
It should be noted that, the structure or function of the garbage collection data cache module is similar to the structure or function of the data cache module to be combined mentioned in the above embodiment, and the structure or function of the data cache module matching the mapping granularity is similar to the structure or function of the data cache module to be flushed mentioned in the above embodiment, and the specific content may refer to the data cache module to be combined or the data cache module to be flushed mentioned in the above embodiment, which is not repeated here.
In the embodiment of the present application, the specific method for performing data movement on effective data in the garbage collection process is similar to the method for combining the N minimum writing units in the data cache module to be combined into a lower brushing unit and brushing the lower brushing unit to the flash memory medium, which is not described herein.
In the embodiment of the application, the effective data are divided into different sets of the garbage collection data cache module, a minimum writing unit is obtained from each of N data cache sets of the garbage collection data cache module, N minimum writing units are combined into a lower brushing unit, and the lower brushing units are divided into data cache modules with matched mapping granularity, so that the mapping relation to which the effective data belongs is changed, and memory copy is not involved.
Step S144: if the data corresponding to the logical address is invalid data, the invalid data is not moved.
Specifically, if the data corresponding to the logical address is invalid data, this indicates that the data has been written to another location and is no longer on the current flash memory block; in this case the invalid data is not moved.
In an embodiment of the present application, by providing a data management method of a flash memory device, the flash memory device includes a flash memory medium and a cache module, the method includes: establishing a mapping table from a logical address to a physical address, wherein the addressing granularity of the physical address=n×the addressing granularity of the logical address, N is a positive integer and N is greater than or equal to 2; dividing the cache module into a data cache module to be flushed and a data cache module to be combined, wherein the data granularity of the data cache module to be combined is equal to the addressing granularity of the logical address, and the data granularity of the data cache module to be flushed is equal to the addressing granularity of the physical address; acquiring a write request sent by a host, wherein the write request comprises write data of the host, and the write data comprises a plurality of minimum write units; dividing the written data into a data cache module to be flushed and/or a data cache module to be combined according to the size of the written data; and brushing the data in the data cache module to be brushed down to the flash memory medium, or combining N minimum writing units in the data cache module to be combined into a brushing down unit, and brushing the brushing down unit down to the flash memory medium, wherein the size of the brushing down unit is equal to the addressing granularity of the physical address.
The performance of the flash memory device in a random write scenario can be improved, and there is no need to read out the data of the logical address before writing new data, so no extra write amplification is introduced and the service life of the flash memory device can be prolonged.
Referring to fig. 23 again, fig. 23 is a schematic structural diagram of a flash memory device according to an embodiment of the present application;
as shown in fig. 23, the flash memory device 230 includes: one or more processors 231 and a memory 232. In fig. 23, a processor 231 is taken as an example.
The processor 231 and the memory 232 may be connected by a bus or in other ways; connection by a bus is taken as an example in fig. 23.
A processor 231 for providing computing and control capabilities to control the flash memory device 230 to perform corresponding tasks, for example, to control the flash memory device 230 to perform the data management method of the flash memory device in any of the method embodiments described above.
The processor 231 may be a general-purpose processor including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), a hardware chip, or any combination thereof; it may also be a digital signal processor (Digital Signal Processing, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic device (programmable logic device, PLD), or a combination thereof. The PLD may be a complex programmable logic device (complex programmable logic device, CPLD), a field programmable gate array (field-programmable gate array, FPGA), generic array logic (generic array logic, GAL), or any combination thereof.
The memory 232 is used as a non-transitory computer readable storage medium for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as the program instructions/modules corresponding to the data management method of the flash memory device in the embodiments of the present application. The processor 231 may implement the data management method of the flash memory device in any of the above method embodiments by running the non-transitory software programs, instructions, and modules stored in the memory 232. In particular, the memory 232 may include volatile memory (volatile memory, VM), such as random access memory (random access memory, RAM); the memory 232 may also include non-volatile memory (non-volatile memory, NVM), such as read-only memory (read-only memory, ROM), flash memory (flash memory), a hard disk drive (hard disk drive, HDD) or a solid-state drive (solid-state drive, SSD), or other non-transitory solid state storage devices; the memory 232 may also include a combination of the above types of memory.
One or more modules are stored in the memory 232 that, when executed by the one or more processors 231, perform the data management method of the flash memory device in any of the method embodiments described above, for example, performing the various steps shown in fig. 4 described above.
The embodiments of the present application also provide a nonvolatile computer storage medium storing computer executable instructions that are executable by one or more processors, for example, the one or more processors may perform a method for managing data of a flash memory device in any of the method embodiments described above, for example, perform the steps described above.
The apparatus or device embodiments described above are merely illustrative, in which the unit modules illustrated as separate components may or may not be physically separate, and the components shown as unit modules may or may not be physical units, may be located in one place, or may be distributed over multiple network module units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general purpose hardware platform, or may be implemented by hardware. Based on such understanding, the foregoing technical solution may be embodied, in essence or in the part contributing to the related art, in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method of each embodiment or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting thereof; the technical features of the above embodiments or of different embodiments may also be combined under the idea of the present application, the steps may be implemented in any order, and there are many other variations of the different aspects of the present application as described above, which are not provided in detail for the sake of brevity. Although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that the technical schemes described in the foregoing embodiments can still be modified, or some technical features thereof can be replaced by equivalents, and such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.
Claims (11)
1. A method of data management for a flash memory device, the flash memory device comprising a flash memory medium and a cache module, the method comprising:
establishing a mapping table from a logical address to a physical address, wherein the addressing granularity of the physical address=n×the addressing granularity of the logical address, N is a positive integer and N is greater than or equal to 2;
dividing the cache module into a data cache module to be flushed and a data cache module to be combined, wherein the data granularity of the data cache module to be combined is equal to the addressing granularity of the logical address, and the data granularity of the data cache module to be flushed is equal to the addressing granularity of the physical address;
acquiring a write request sent by a host, wherein the write request comprises write data of the host, and the write data comprises a plurality of minimum write units;
dividing the written data into the data caching module to be flushed and/or the data caching module to be combined according to the size of the written data;
and brushing the data in the data cache module to be brushed down to the flash memory medium, or combining N minimum writing units in the data cache module to be combined into a brushing unit, and brushing the brushing unit down to the data cache module to be brushed down or the flash memory medium, wherein the size of the brushing unit is equal to the addressing granularity of the physical address.
2. The method according to claim 1, wherein the partitioning the write data into the data-to-be-swiped and/or combined data-caching modules according to the size of the write data comprises:
if the size of the written data is larger than the addressing granularity of the physical address, dividing N consecutive minimum writing units in the written data into the data cache module to be flushed, and dividing the remaining data in the written data into the data cache module to be combined;
dividing the written data into the data cache module to be flushed if the size of the written data is equal to the addressing granularity of the physical address;
and if the size of the written data is smaller than the addressing granularity of the physical address, dividing the written data into the data caching module to be combined.
3. The method of claim 2, wherein the data cache module to be combined includes N data cache sets, each data cache set having a set number, and wherein dividing the write data into the data cache module to be combined if the size of the write data is smaller than the addressing granularity of the physical address comprises:
if the size of the written data is smaller than the addressing granularity of the physical address, performing a remainder operation on the address number corresponding to each minimum writing unit in the written data to obtain a remainder result of the address number corresponding to each writing unit;
dividing each minimum writing unit in the writing data into the data cache set one by one according to the remainder result, wherein the remainder result of the address number of each minimum writing unit divided into the data cache set is equal to the set number.
4. The method of claim 3, wherein the performing a remainder operation on the address number corresponding to each minimum writing unit in the writing data to obtain a remainder result of the address number corresponding to each writing unit comprises:
and performing a remainder operation modulo N on the address number corresponding to each minimum writing unit in the written data to obtain a remainder result of the address number corresponding to each writing unit, wherein the remainder result has a value range of [0, N-1].
5. The method of claim 1, wherein the combining N minimum writing units in the data cache module to be combined into a brushing unit, and brushing the brushing unit down to the data cache module to be flushed or the flash memory medium, comprises:
dividing the brushing unit into the data cache module to be flushed, and piecing together, by the data cache module to be flushed, a plurality of brushing units into the minimum data volume corresponding to the flash memory medium, so as to brush the plurality of brushing units down to the flash memory medium.
6. The method according to any one of claims 1-5, further comprising:
and updating the mapping table from the logical address to the physical address when the writing data of the host computer is written down to the flash memory medium, wherein N logical addresses in the mapping table from the logical address to the physical address correspond to one physical address.
7. The method of claim 6, wherein the method further comprises:
acquiring a read request sent by a host, wherein the read request comprises an address number corresponding to a logical block address;
determining the address number of the logical address corresponding to the logical block address according to the address number corresponding to the logical block address;
inquiring a mapping table from the logical address to a physical address according to the address number of the logical address, and determining the physical address corresponding to the logical address;
performing a remainder operation on the address number of the logical address, and determining a remainder result corresponding to the address number of the logical address;
And determining the position of the data corresponding to the logical address in the physical address according to the remainder result corresponding to the address number of the logical address so as to read the data corresponding to the logical address.
8. The method according to claim 1, wherein the method further comprises:
establishing a mapping table from physical addresses to logical addresses, wherein one physical address in the mapping table from physical addresses to logical addresses corresponds to N logical addresses;
comparing the mapping table from the logical address to the physical address with the mapping table from the physical address to the logical address, and determining that the data corresponding to the logical address is effective data or invalid data;
if the data corresponding to the logical address is effective data, carrying out data movement on the effective data;
and if the data corresponding to the logical address is invalid data, not carrying out data movement on the invalid data.
9. The method of claim 8, wherein the determining that the data corresponding to the logical address is valid data or invalid data comprises:
traversing the mapping table from the physical address to the logical address to obtain a first physical address;
inquiring each first logical address corresponding to the first physical address according to the mapping table from the physical address to the logical address;
Inquiring a mapping table from the logical address to a physical address according to each first logical address to obtain a second physical address corresponding to each first logical address;
if the second physical address is the same as the first physical address, determining that the data corresponding to the first logical address is effective data;
and if the second physical address is different from the first physical address, determining that the data corresponding to the logical address is invalid data.
10. The method of claim 8 or 9, wherein the flash memory device further comprises a garbage collection cache module, the method further comprising:
dividing the garbage collection cache module into a garbage collection data cache module and a data cache module matched with the mapping granularity according to the addressing granularity of the logical address and the addressing granularity of the physical address, wherein the garbage collection data cache module comprises N data cache sets, and each data cache set corresponds to one set number one by one;
the data moving of the effective data specifically comprises the following steps:
acquiring a minimum writing unit from each of N data cache sets of the garbage collection data cache module, and combining the N minimum writing units into a lower brushing unit;
dividing the brushing unit into the data cache module with the matching mapping granularity, and brushing the brushing unit down to the flash memory medium by the data cache module with the matching mapping granularity.
11. A flash memory device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of data management of a flash memory device according to any one of claims 1-10.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211521508.7A CN115994101A (en) | 2022-11-30 | 2022-11-30 | Flash memory device and data management method thereof |
PCT/CN2023/093990 WO2024113688A1 (en) | 2022-11-30 | 2023-05-12 | Flash memory device and data management method therefor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211521508.7A CN115994101A (en) | 2022-11-30 | 2022-11-30 | Flash memory device and data management method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115994101A true CN115994101A (en) | 2023-04-21 |
Family
ID=85989654
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211521508.7A Pending CN115994101A (en) | 2022-11-30 | 2022-11-30 | Flash memory device and data management method thereof |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115994101A (en) |
WO (1) | WO2024113688A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117311605A (en) * | 2023-08-23 | 2023-12-29 | 深圳华为云计算技术有限公司 | Distributed storage method, data indexing method, device and storage medium |
WO2024113688A1 (en) * | 2022-11-30 | 2024-06-06 | 深圳大普微电子科技有限公司 | Flash memory device and data management method therefor |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI650644B (en) * | 2018-01-05 | 2019-02-11 | 慧榮科技股份有限公司 | Method for managing flash memory module and related flash memory controller and electronic device |
CN114297092A (en) * | 2021-12-24 | 2022-04-08 | 阿里巴巴(中国)有限公司 | Data processing method, system, device, storage system and medium |
CN114546296B (en) * | 2022-04-25 | 2022-07-01 | 武汉麓谷科技有限公司 | ZNS solid state disk-based full flash memory system and address mapping method |
CN115994101A (en) * | 2022-11-30 | 2023-04-21 | 深圳大普微电子科技有限公司 | Flash memory device and data management method thereof |
- 2022: CN application CN202211521508.7A filed 2022-11-30, published as CN115994101A (status: active, Pending)
- 2023: PCT application PCT/CN2023/093990 filed 2023-05-12, published as WO2024113688A1 (status: unknown)
Also Published As
Publication number | Publication date |
---|---|
WO2024113688A1 (en) | 2024-06-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||