US10417124B2 - Storage system that tracks mapping to a memory module to be detached therefrom
- Publication number: US10417124B2 (application US16/112,314)
- Authority: US (United States)
- Prior art keywords: semiconductor memory, memory module, physical, data, logical
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F12/0253—Garbage collection, i.e. reclamation of unreferenced memory
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F12/0246—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
- G06F12/1009—Address translation using page tables, e.g. page table structures
- G06F3/0617—Improving the reliability of storage systems in relation to availability
- G06F3/0619—Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
- G06F3/0635—Configuration or reconfiguration of storage systems by changing the path, e.g. traffic rerouting, path reconfiguration
- G06F3/0647—Migration mechanisms
- G06F3/0688—Non-volatile semiconductor memory arrays
- G06F2212/1044—Space efficiency improvement
- G06F2212/7201—Logical to physical mapping or translation of blocks or pages
- G06F2212/7205—Cleaning, compaction, garbage collection, erase control
Definitions
- the following embodiments relate to a storage system, in particular, a storage system including a plurality of memory modules that are detachably coupled with interface units thereof.
- a storage system of one type includes a plurality of memory modules, each of which includes a storage medium such as non-volatile semiconductor memory.
- the non-volatile semiconductor memories are nanoscaled to increase storage capacity.
- because the nanoscaling of the non-volatile semiconductor memories may shorten the overwriting life thereof, the reliability of the non-volatile semiconductor memories may decrease.
- FIG. 1 shows the configuration of a memory system according to an embodiment.
- FIG. 2 schematically illustrates a module interface in a NAND module included in the memory system according to the embodiment.
- FIG. 3 shows the functional configuration of a CPU in the memory system according to the embodiment.
- FIG. 4 shows an example of an address conversion table for logical block addresses (LBAs) that can be used in the memory system according to the embodiment.
- FIG. 5 shows an example of a key-value-type address conversion table that can be used in the memory system according to the embodiment.
- FIG. 6 schematically shows correspondence between a physical address space and logical blocks in the embodiment.
- FIG. 7 schematically shows the relationship between an LBA conversion table and physical and logical blocks in the embodiment.
- FIG. 8 schematically shows the relationship between a key-value-type address conversion table and physical and logical blocks in the embodiment.
- FIG. 9 is a flowchart showing a flow of a process carried out for NAND module removal in the memory system according to the embodiment.
- FIG. 10 is a flowchart showing a flow of a process carried out after removal of the NAND module according to the embodiment.
- FIG. 11 is a flowchart showing a flow of redundancy recovery processing according to the embodiment.
- FIG. 12 schematically illustrates garbage collection carried out during the redundancy recovery processing in the embodiment.
- FIG. 13 is a flowchart showing a flow of over-provisioning management processing according to the embodiment.
- FIG. 14 schematically illustrates management of an LBA space according to the embodiment.
- FIG. 15 is a flowchart showing a flow of processing to determine data to be deleted according to the embodiment.
- FIG. 16 schematically illustrates another management of an LBA space according to the embodiment.
- FIG. 17 schematically illustrates management of a key-value logical address space according to the embodiment.
- FIG. 18 is a flowchart showing a flow of processing to mount a NAND module according to the embodiment.
- a storage system connectable to a host includes a plurality of interface units, a plurality of semiconductor memory modules, each being detachably coupled with one of the interface units, and a controller configured to maintain an address conversion table indicating mappings between logical addresses and physical addresses of memory locations in the semiconductor memory modules.
- when the controller determines that a first semiconductor memory module needs to be detached, the controller converts physical addresses of the first semiconductor memory module into corresponding logical addresses using the address conversion table, copies valid data stored at the corresponding logical addresses to another semiconductor memory module, and updates the address conversion table to indicate new mappings for the corresponding logical addresses of the valid data.
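- as an illustration of this summary only, the following sketch models the behavior with simplified in-memory structures: a conversion table keyed by logical address, a reverse scan that finds every logical address currently mapped into the module to be detached, a copy of the valid data, and a table update. All names (AddressConversionTable, detach_module, and so on) are hypothetical and not taken from the patent.

```python
class AddressConversionTable:
    """Toy conversion table: logical address -> (module_id, physical_address, valid)."""

    def __init__(self):
        self.entries = {}

    def map(self, logical, module_id, physical, valid=True):
        self.entries[logical] = (module_id, physical, valid)

    def logicals_in_module(self, module_id):
        # "Convert" physical addresses of one module into the logical
        # addresses mapped to them by scanning the table in reverse.
        return [la for la, (m, _, valid) in self.entries.items()
                if m == module_id and valid]


def detach_module(table, modules, target_id, destination_id):
    """Copy valid data out of the target module and update the mappings."""
    for logical in table.logicals_in_module(target_id):
        _, physical, _ = table.entries[logical]
        data = modules[target_id][physical]
        new_physical = len(modules[destination_id])
        modules[destination_id].append(data)               # copy valid data
        table.map(logical, destination_id, new_physical)   # new mapping


# Usage example with two toy modules.
modules = {0: ["a", "b"], 1: []}
table = AddressConversionTable()
table.map(10, 0, 0)
table.map(11, 0, 1)
detach_module(table, modules, target_id=0, destination_id=1)
print(table.entries)   # both logical addresses now point into module 1
```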
- FIG. 1 is a block diagram of a storage system 100 according to an embodiment.
- the storage system 100 of the present embodiment is, for example, a NAND flash array.
- the storage system 100 communicates with a host 200 .
- the storage system 100 and the host 200 perform data communication in accordance with an interface standard, such as ATA (Advanced Technology Attachment) or SATA (Serial ATA), SCSI (Small Computer System Interface), SAS (Serial Attached SCSI), NVM Express (Non-Volatile Memory Express), ZIF (Zero Insertion Force), LIF (Low Insertion Force), USB (Universal Serial Bus), PCI Express, TCP/IP or the like.
- the host 200 designates a logical address in the storage system 100 and transmits a request to store write data (write request) in the storage system 100 . Also, the host 200 designates a logical address in the storage system 100 and transmits a request to read out read data (read request) from the storage system 100 .
- the write request or the read request includes the logical address of data that is the subject for writing and reading.
- Write data stored in the storage system 100 are data that are sent from the host 200 , which corresponds to a user of the storage system 100 .
- write data stored in the storage system 100 in accordance with the write request from the host 200 (or already stored therein) will be called user data.
- the storage system 100 includes a plurality of NAND modules 110 A, 110 B, 110 C, and 110 D, an interface unit 120 , a CPU 130 , a DRAM (dynamic random-access memory) 140 , and a brancher 150 .
- in the description below, unless a NAND module is distinguished from another NAND module, "NAND module 110 " will be used.
- although the storage system 100 of the present embodiment includes four NAND modules 110 , it may include N NAND modules 110 (where N is an arbitrary natural number of 2 or greater).
- the NAND module 110 includes the NAND memories 112 a , 112 b , and 112 c and a NAND controller 114 . In the description below, unless a NAND memory is distinguished from another NAND memory, “NAND memory 112 ” will be used. Although the NAND module 110 of the present embodiment includes three NAND memories 112 , the NAND module 110 may include M NAND memories 112 (where M is an arbitrary natural number of 1 or greater).
- the NAND memory 112 is a NAND-type flash memory that includes a plurality of physical blocks, each including a plurality of memory cells.
- the NAND memory 112 writes write data in accordance with a write request output from the NAND controller 114 . Specifically, the NAND memory 112 writes write data associated with a write request into a location of a physical address corresponding to a logical address included in the write request.
- the NAND memory 112 reads out data in accordance with a read request output from the NAND controller 114 . Specifically, the NAND memory 112 reads data from a location of a physical address corresponding to a logical address included in the read request and outputs the read data to the NAND controller 114 .
- the plurality of NAND memories 112 may be a combination of different types of NAND memories.
- the type of a NAND memory 112 is categorized, for example, as an SLC (single-level cell) NAND memory or an MLC (multi-level cell) NAND memory.
- an SLC NAND memory stores one bit of data per cell, whereas an MLC NAND memory stores multiple bits of data per cell.
- MLC NAND memories of one type include TLC (triple-level cell) NAND memories that store 3 bits of data in a cell.
- a NAND memory differs in the number of write operations and the number of readout operations that can be carried out, depending on the integration level of stored data.
- a single-level cell NAND memory, although having a low integration level, has high durability and allows a higher number of write operations and readout operations than a multi-level cell NAND memory.
- the storage system 100 of the present embodiment may include another type of memory in place of the NAND memory 112 .
- the storage system 100 may include a hard disk, a bit cost scalable (BiCS) memory, a magnetoresistive memory (MRAM), a phase change memory (PCM), a resistance random-access memory (RRAM®), or a combination thereof.
- the NAND controller 114 is connected to a data bus 100 a , and receives a read request or a write request of user data with respect to the NAND memory 112 .
- the NAND controller 114 stores, as a correspondence table (not shown), physical addresses of the corresponding NAND memory 112 connected thereto. If the physical address designated by a write request or a read request is included in the correspondence table thereof, the NAND controller 114 outputs a write request or a read request to the corresponding NAND memory 112 . If the physical address designated by a write request or a read request is not included in the correspondence table thereof, the NAND controller 114 discards (ignores) the write request or the read request.
- when outputting a write request or a read request to the NAND memory 112 , the NAND controller 114 converts the write request or the read request in accordance with a command format that is recognized by the corresponding NAND memory 112 , and outputs the converted write request or read request to the NAND memory 112 . Also, the NAND controller 114 converts data read out by the NAND memory 112 in accordance with a command format used by the CPU 130 , and outputs the converted data to the CPU 130 , via the data bus 100 a and the brancher 150 .
- the brancher 150 is, for example, a switch called a fabric.
- FIG. 2 schematically illustrates a NAND module and a data bus interface according to the present embodiment.
- the NAND module 110 includes a module interface 110 a by which the NAND module 110 is physically and electrically attached to and detached from the data bus 100 a . While the module interface 110 a is attached to the interface 100 b of the data bus 100 a , the module interface 110 a transmits a write request or a read request over the data bus 100 a via the interface 100 b .
- the interface 100 b is implemented, for example, by a connector that can be mechanically and electrically attached to and detached from the module interface 110 a.
- the NAND module 110 includes a capacitor (not shown). For that reason, immediately after the module interface 110 a is detached from the interface 100 b , the NAND module 110 can still write into the NAND memory 112 data stored in volatile memory (not shown) such as RAM.
- the interface unit 120 communicates with the host 200 in accordance with a prescribed interface standard.
- the interface unit 120 for example, communicates with the host 200 in accordance with TCP/IP. If the interface unit 120 receives a write request or a read request transmitted from the host 200 , the interface unit 120 outputs the write request or the read request to the CPU 130 . If the interface unit 120 has received a read response output from the CPU 130 as the result of a read request, the interface unit 120 transmits the read response to the host 200 .
- the CPU 130 controls writing and reading of user data with respect to the NAND module 110 .
- FIG. 3 shows a functional configuration of the CPU 130 in the storage system 100 according to the present embodiment.
- the CPU 130 includes a communication controller 132 , a table manager 134 , and a storage controller 136 .
- when a write request or a read request is received from the interface unit 120 , the communication controller 132 outputs the write request or read request to the storage controller 136 . When a read response is output by the storage controller 136 , the communication controller 132 causes transmission of the read response from the interface unit 120 to the host 200 .
- the table manager 134 manages the address conversion table 142 stored in the DRAM 140 .
- the address conversion table 142 indicates the relationship (mapping) between logical addresses and physical addresses.
- the logical addresses uniquely identify locations of data stored in the NAND module 110 .
- the physical addresses are assigned to each unit storage region of the plurality of NAND modules 110 attached to the plurality of interfaces. Physical addresses are, for example, assigned as a continuous series of information that identifies storage locations in all of the plurality of NAND modules 110 attached to the plurality of interfaces.
- the table manager 134 updates the address conversion table 142 in accordance with a control operation by the storage controller 136 .
- in accordance with a write request or a read request output from the interface unit 120 , the storage controller 136 causes writing or reading of user data with respect to the NAND module 110 .
- the storage controller 136 refers to the address conversion table 142 and converts the logical address included in the write request to a physical address in the NAND memory 112 .
- the storage controller 136 outputs a write request including the physical address to the NAND controller 114 .
- the storage controller 136 refers to the address conversion table 142 and converts the logical address included in the read request to a physical address in the NAND memory 112 .
- the storage controller 136 outputs a read request including the physical address to the NAND controller 114 .
- the storage controller 136 outputs the read data that have been read out by the NAND controller 114 to the interface unit 120 , via the data bus 100 a.
- FIG. 4 shows an example of an LBA (logical block address) conversion table that can be used in the storage system 100 according to the present embodiment.
- An LBA conversion table 142 a indicates correspondence between the logical address (e.g., the LBA) and physical addresses.
- the physical addresses are set to values that uniquely identify physical storage locations in the storage system 100 .
- the logical addresses are set to unique values that map to the physical addresses.
- FIG. 5 shows an example of a key-value address conversion table that can be used in the storage system 100 according to the present embodiment.
- a key-value address conversion table 142 b indicates correspondence between arbitrary key information of stored user data (i.e., identification information or logical address) and physical addresses of the NAND memory 112 in which the corresponding value (i.e., user data corresponding to the key information) is stored.
- the LBA conversion table 142 a and the key-value address conversion table 142 b may include values that indicate whether or not the user data are valid or invalid in association with the set of the logical address and the physical address.
- Valid user data are user data that are stored in a physical address and can be read out using a logical address mapped thereto (associated therewith).
- Invalid user data are user data that are stored in a physical address but are no longer considered to be valid.
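- a minimal sketch of the two table forms and the validity flag described above; the field layout is an assumption chosen for illustration, not the actual structure of the address conversion table 142 .

```python
# LBA conversion table: logical block address -> (physical address, valid flag)
lba_table = {
    0: (0x0100, True),
    1: (0x0104, True),
    2: (0x0108, False),   # invalid: the data were rewritten elsewhere
}

# Key-value-type conversion table: arbitrary key -> (physical address, valid flag)
kv_table = {
    "user:42": (0x0200, True),
    "photo:7": (0x0204, True),
}

def lookup_lba(lba):
    physical, valid = lba_table[lba]
    if not valid:
        raise KeyError(f"LBA {lba} maps to invalidated data")
    return physical

print(hex(lookup_lba(1)))   # 0x104
```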
- FIG. 6 shows the correspondence between a physical address space and logical blocks in the present embodiment.
- the physical address space (PAS) in the storage system 100 contains identification information of each storage region of all NAND modules 110 - 1 , 110 - 2 , . . . , 110 -M (where M is an arbitrary natural number of 1 or greater).
- some storage regions of a NAND module 110 may not be included in the physical address space (PAS).
- a plurality of physical blocks (PBs) is arranged in each NAND module 110 .
- the plurality of physical blocks is arranged, for example, in the form of a matrix, that is, arranged in a first direction and also in a second direction that is orthogonal to the first direction.
- Each physical block includes a plurality of cells that can store data of a prescribed capacity.
- the physical block is, for example, a unit for data erasing in a NAND module 110 .
- the storage system 100 includes a plurality of logical blocks LB- 1 , . . . , LB-N (where N is an arbitrary natural number of 1 or greater).
- Each logical block is one virtual storage region that includes a plurality of physical blocks.
- Each logical block is assigned a prescribed range of logical block numbers (for example, LB- 1 , . . . , LB-N (where N is an arbitrary natural number of 1 or greater) as described above).
- a logical block corresponds to five physical blocks PB- 1 , PB- 2 , PB- 3 , PB- 4 , and PB- 5 .
- the plurality of physical blocks PB- 1 , PB- 2 , PB- 3 , PB- 4 , and PB- 5 is selected from physical blocks included in one or more of NAND modules 110 .
- the plurality of physical blocks PB- 1 , PB- 2 , PB- 3 , PB- 4 , and PB- 5 is arranged in NAND modules 110 that are different from each other.
- the correspondence of physical blocks and logical blocks is managed by a block correspondence table (not shown).
- the DRAM 140 stores (in a table storage unit) an address conversion table 142 that indicates the relationship of physical addresses of a plurality of storage locations (PB- 1 to PB- 5 ) distributed in different NAND modules 110 - 1 , 110 - 2 , . . . , 110 -M connected to the plurality of interfaces 110 b , with respect to one logical block (LB- 1 ) in the NAND modules 110 .
- based on the address conversion table 142 , the CPU 130 reads out data stored in a plurality of storage regions of NAND modules 110 - 1 , 110 - 2 , . . . , 110 -M corresponding to the logical block.
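- the block correspondence table mentioned later is not shown in the figures, so the sketch below only illustrates the idea: each logical block lists the (module, physical block) pairs that back it, spread over different NAND modules. Names and layout are hypothetical.

```python
# Hypothetical block correspondence table for two logical blocks.
block_table = {
    "LB-1": [("NM-1", "PB-11"), ("NM-2", "PB-21"), ("NM-3", "PB-31"),
             ("NM-4", "PB-41"), ("NM-5", "PB-51")],
    "LB-2": [("NM-1", "PB-12"), ("NM-2", "PB-22"), ("NM-3", "PB-32"),
             ("NM-4", "PB-42"), ("NM-5", "PB-52")],
}

def affected_logical_blocks(module_id):
    """Logical blocks that lose one physical block if the module is removed."""
    return [lb for lb, pbs in block_table.items()
            if any(module == module_id for module, _ in pbs)]

print(affected_logical_blocks("NM-5"))   # ['LB-1', 'LB-2']
```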
- FIG. 7 shows the relationship between the LBA conversion table 142 a and the physical and logical blocks in the present embodiment.
- each of the logical blocks LB- 1 , LB- 2 , . . . , LB-N includes the five physical blocks PB- 11 to PB- 1 N, PB- 21 to PB- 2 N, PB- 31 to PB- 3 N, PB- 41 to PB- 4 N, and PB- 51 to PB- 5 N.
- in the description below, unless the physical blocks of a logical block are distinguished from one another, PB- 1 , PB- 2 , PB- 3 , PB- 4 , and PB- 5 will be used.
- the five physical blocks PB- 1 , PB- 2 , PB- 3 , PB- 4 , and PB- 5 are included in different NAND modules 110 (shown as NM- 1 , NM- 2 , NM- 3 , NM- 4 , and NM- 5 in FIG. 7 ), respectively.
- the physical blocks PB- 1 , PB- 2 , PB- 3 , PB- 4 , and PB- 5 store user data (UD).
- Error correction data corresponding to the user data stored in the physical blocks PB- 1 , PB- 2 , PB- 3 , and PB- 4 in the same logical block are written as redundant data in the physical block PB- 5 .
- Examples of error correction data include an error detection code such as a parity code or an error correcting code (ECC) such as a Reed-Solomon (RS) error correcting code.
- the DRAM 140 stores (in a table storage unit) an address conversion table 142 that indicates the relationship of physical addresses of the plurality of storage positions (PB- 11 to PB- 51 ) of different NAND modules 110 (NM- 1 to NM- 5 ) connected to the plurality of interfaces 100 b , with respect to each logical block (LB- 1 ) of the NAND modules 110 .
- based on the address conversion table 142 , the CPU 130 reads out data stored in a plurality of storage regions of the plurality of NAND modules corresponding to the logical block.
- FIG. 8 shows the relationship between a key-value address conversion table 142 b and physical and logical blocks in the present embodiment.
- each of the logical blocks LB- 1 , LB- 2 , . . . , LB-N has the five physical blocks PB- 11 to PB- 1 N, PB- 21 to PB- 2 N, PB- 31 to PB- 3 N, PB- 41 to PB- 4 N, and PB- 51 to PB- 5 N.
- in the description below, unless the physical blocks of a logical block are distinguished from one another, PB- 1 , PB- 2 , PB- 3 , PB- 4 , and PB- 5 will be used.
- the five physical blocks PB- 1 , PB- 2 , PB- 3 , PB- 4 , and PB- 5 are each included in different NAND modules 110 (shown as NM- 1 , NM- 2 , NM- 3 , NM- 4 , and NM- 5 in FIG. 8 ).
- the physical blocks PB- 1 , PB- 2 , PB- 3 , PB- 4 , and PB- 5 store user data (UD).
- User data are write data associated with a key transmitted from the host 200 .
- An error correction code corresponding to the user data of the physical blocks PB- 1 , PB- 2 , PB- 3 , and PB- 4 included in the same logical block is written in the physical block PB- 5 of the same logical block as redundant data.
- Examples of error correction codes include an error detection code such as a parity code or an ECC such as an RS code.
- redundant data represent the error correction code for recovering user data, which is, for example, an error detection code such as a parity code or an ECC such as an RS code.
- the error detection code or error correcting code is not limited to the above examples.
- a ratio of the lower limit of the amount of data required for storage of user data with respect to the amount of redundant data added to the lower limit of the amount of data will be referred to as the redundancy ratio.
- the redundant data may be a copy of user data that were stored in a physical block and that are stored into a physical block corresponding to a logical block that is the same as the physical block in which the user data were stored.
- a logical block is configured by rules that enable recovery of user data even if one NAND module 110 of the plurality of NAND modules 110 attached to the storage system 100 is removed.
- the first rule is that, if redundant data, such as a parity data, that enables recovery of data even if one symbol of user data is lost is included in the logical block, as shown in FIG. 7 and FIG. 8 , the physical blocks corresponding to the logical block are located in different NAND modules 110 , respectively.
- the second rule is that, if an RS code that enables recovery of data even if two symbols of user data among the physical blocks included in a logical block are lost is included in the logical block, no more than two physical blocks belong to the same NAND module 110 .
- a symbol is a group of data, which is a continuity of a prescribed bit length.
- the storage system 100 can recover user data that were stored in the removed NAND module 110 from data stored in the other NAND modules 110 .
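- a minimal sketch of such a recovery, assuming one physical block per logical block holds simple XOR parity; the embodiment may instead use an RS code, which can tolerate more than one lost block.

```python
def make_parity(blocks):
    """Redundant data for PB-5: byte-wise XOR of the user-data blocks."""
    parity = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving_blocks, parity):
    """Rebuild the single missing block from the survivors and the parity."""
    missing = bytearray(parity)
    for blk in surviving_blocks:
        for i, b in enumerate(blk):
            missing[i] ^= b
    return bytes(missing)

pb1, pb2, pb3, pb4 = b"AAAA", b"BBBB", b"CCCC", b"DDDD"
pb5 = make_parity([pb1, pb2, pb3, pb4])       # stored in a different module
print(recover([pb1, pb2, pb4], pb5) == pb3)   # True: PB-3's data restored
```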
- the storage system 100 configured as described above performs garbage collection with respect to a NAND module 110 installed in the storage system 100 .
- Garbage collection is generally an operation to transfer data other than invalid data from a physical block to other physical blocks, so that the physical block can be erased and storage regions in the physical block can be used for (new) data writing.
- garbage collection is performed to obtain a free logical block from one or more logical blocks included in the storage system 100 .
- Garbage collection is carried out in accordance with the following procedure.
- the CPU 130 refers to the address conversion table 142 and the block correspondence table (not shown), selects an arbitrary logical block as a garbage collection source logical block (target logical block, relocation source logical block), and reads valid data stored in physical blocks included in the garbage collection source logical block.
- the CPU 130 selects the garbage collection destination logical block (relocation destination logical block) and copies valid data read out from the garbage collection source logical block into physical blocks corresponding to the garbage collection destination logical block.
- the garbage collection destination logical block is a logical block that is set as a free logical block before the garbage collection, and is associated with arbitrary physical blocks.
- the CPU 130 updates the physical addresses in the address conversion table 142 from the physical addresses corresponding to the garbage collection source logical block to the physical address corresponding to the garbage collection destination logical block.
- the CPU 130 sets the garbage collection source logical block as a free logical block. By performing this operation, the CPU 130 collects the garbage collection source logical block as an unused (available) logical block.
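- the procedure above can be summarized by the following sketch over a toy model in which each logical block is a list of (data, valid) pairs; the structures and names are illustrative only, not the storage system's internal format.

```python
def garbage_collect(blocks, mapping, source, destination):
    """Copy valid data from `source` to `destination`, remap it, free `source`."""
    for offset, (data, valid) in enumerate(blocks[source]):
        if not valid:
            continue
        blocks[destination].append((data, True))
        new_offset = len(blocks[destination]) - 1
        # repoint every logical address that referenced the relocated data
        for logical, (blk, off) in mapping.items():
            if blk == source and off == offset:
                mapping[logical] = (destination, new_offset)
    blocks[source] = []     # the source becomes a free logical block
    return source

blocks = {"LB-1": [("x", True), ("y", False), ("z", True)], "LB-9": []}
mapping = {100: ("LB-1", 0), 101: ("LB-1", 2)}
freed = garbage_collect(blocks, mapping, "LB-1", "LB-9")
print(freed, mapping)   # LB-1 {100: ('LB-9', 0), 101: ('LB-9', 1)}
```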
- FIG. 9 is a flowchart showing a process carried out for NAND module removal in the storage system 100 according to the present embodiment.
- the CPU 130 determines whether or not a NAND module 110 needs to be removed (step S 100 ). In order to make this determination in step S 100 , the CPU 130 detects information regarding the reliability of the NAND module 110 . For this detection, the CPU 130 obtains information regarding the reliability of the NAND module 110 during the operation of the NAND module 110 or periodically monitors the information regarding the reliability of the NAND module 110 .
- the information regarding the reliability of the NAND module 110 is, for example, the number of times overwrite operations have been carried out with respect to the NAND module 110 or the error rate with respect to data read requests and data write requests.
- the CPU 130 monitors the number of overwrites or the number of error detections and determines that the NAND module 110 needs to be removed from the interface 100 b , if the number of overwrites, the error rate, or the number of errors exceeds a predetermined threshold.
- the predetermined threshold may be, for example, an average value of a plurality of pages of the NAND module 110 or a highest value among the plurality of pages. Alternatively, the determination may be made based on a combination of the number of overwrites, the error rate, and the number of errors.
- the information regarding the reliability of the NAND module 110 is not restricted to the above examples.
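- purely as an illustration of such a decision, the sketch below checks an overwrite count and an error rate against thresholds; the threshold values and field names are assumptions, not values from the patent.

```python
OVERWRITE_LIMIT = 3000     # assumed program/erase-cycle limit
ERROR_RATE_LIMIT = 0.01    # assumed error-rate limit

def needs_removal(stats):
    """True if the module's reliability statistics exceed either threshold."""
    return (stats["overwrites"] > OVERWRITE_LIMIT or
            stats["error_rate"] > ERROR_RATE_LIMIT)

print(needs_removal({"overwrites": 3500, "error_rate": 0.002}))   # True
```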
- the CPU 130 recognizes the NAND module 110 that needs to be removed from the data bus 100 a (hereinafter, the removal target NAND module 110 ) (step S 102 ). Then, the CPU 130 recognizes the logical address corresponding to the physical address included in the removal target NAND module 110 , by referring to the address conversion table 142 , and tracks the logical address for future operations. Also, the CPU 130 recognizes data stored in the removal target NAND module 110 corresponding to the recognized physical address and logical address.
- the CPU 130 sends a notification about the removal to the host 200 before removal of the NAND module 110 (step S 104 ).
- the CPU 130 for example, sends a notification that the reliability of the NAND module 110 is low.
- the CPU 130 may send a notification about the removal by lighting an indicator, such as an LED, if the indicator is provided on the NAND module 110 .
- the CPU 130 transmits to the host 200 identification information of the logical address recognized in step S 102 .
- the identification information is, for example, LBA or key object information. This enables the host 200 to recognize user data that are stored in the removal target NAND module 110 .
- the host 200 receives the information transmitted by the storage system 100 .
- the host 200 for example, displays the content of the notification about the removal on a display unit.
- the host 200 causes a display unit to display information notifying the removal of the NAND module 110 , information identifying the data stored in the removal target NAND module 110 , or information regarding the possibility that readout might become impossible.
- the CPU 130 determines whether or not to transfer the data stored in the removal target NAND module 110 (step S 106 ). If a NAND module 110 that does not need to be removed (hereinafter “non-removal target NAND module 110 ”) has sufficient capacity to store the data stored in the removal target NAND module 110 , the CPU 130 determines to transfer the data stored in the removal target NAND module 110 . For example, the CPU 130 selects a non-removal target NAND module and compares a size of data stored in the removal target NAND module and remaining capacity of the non-removal target NAND module. If the remaining capacity is larger than the size of data in the removal target NAND module, the CPU 130 determines to transfer the data.
- the CPU 130 selects another non-removal target NAND module and repeats the same process. If the data stored in the removal target NAND module 110 are determined to be transferred (e.g., if there is a non-removal target NAND module that has sufficient remaining capacity) (Yes in step S 106 ), the process proceeds to step S 108 . If the data stored in the removal target NAND module 110 are determined to be not transferred, the process proceeds to step S 112 . The CPU 130 may determine to transfer the data stored in the removal target NAND module 110 not only to the free area of a NAND module 110 , but also to an external storage device having sufficient free area and capable of storing the data.
- the storage controller 136 transfers the data stored in the removal target NAND module 110 to the non-removal target NAND module 110 (different area) (step S 108 ). In other words, the storage controller 136 copies user data stored in a first area of the NAND module 110 to a second area of the NAND module 110 .
- after the data are transferred to the storage region of the non-removal target NAND module 110 , the table manager 134 updates the correspondence between the physical address and the logical address of the write destination in the address conversion table 142 (step S 110 ). If the data are transferred to an area other than a NAND module 110 , the table manager 134 deletes the correspondence between the physical address and the logical address of the address conversion table 142 corresponding to the data.
- then, the process proceeds to processing carried out after removal of the NAND module 110 (step S 112 ).
- FIG. 10 is a flowchart showing the flow of processing after removal of a NAND module in the present embodiment.
- Redundancy recovery processing is processing to recover the redundancy, after the NAND module 110 has been removed from the interface 100 b , to a preset redundancy or to the redundancy that existed before the NAND module 110 was removed from the interface 100 b .
- the CPU 130 executes the redundancy recovery processing in parallel with step S 122 and thereafter.
- redundancy as described above, is a ratio of the lower limit of the amount of data required for storage of user data with respect to the amount of redundant data added to the lower limit of the amount of data.
- the CPU 130 determines whether or not the redundancy recovery processing has been completed (step S 122 ). For example, if the redundancy after the removal target NAND module 110 was removed has reached a preset redundancy, the CPU 130 may determine that the redundancy recovery processing has been completed. If the redundancy after the removal target NAND module 110 was removed has not reached the preset redundancy, the CPU 130 may determine that the redundancy recovery processing has not been completed.
- the preset redundancy is, for example, a pre-established lower limit of redundancy or the redundancy immediately before the removal target NAND module 110 was removed.
- the preset redundancy may be a redundancy that is determined based on a condition for copying user data stored in the storage system 100 or a condition for generating the error correction code, or the like.
- the case in which the redundancy has not been recovered to the preset redundancy may be a case in which garbage collection has not been completed yet (NO in step S 144 ).
- the process ends if the redundancy recovery processing has been completed (No in step S 122 ), and proceeds to step S 124 if the redundancy recovery processing has not been completed (Yes in step S 122 ).
- the CPU 130 determines whether or not a read request including a logical address of user data stored in the removal target NAND module 110 has been received from the host 200 (step S 124 ). If the read request including the logical address of user data stored in the removal target NAND module 110 has been received (Yes in step S 124 ), the process proceeds to step S 126 . If the read request including the logical address of user data stored in the removal target NAND module 110 was not received (No in step S 124 ), the process ends.
- the CPU 130 determines whether or not the user data corresponding to the logical address included in the received read request can be recovered (restored) (step S 126 ). If the data stored in the removal target NAND module 110 is recoverable (restorable) by a parity code or an RS code or the like in the same logical block as the physical block into which the user data requested by the read request have been stored, the CPU 130 determines that the user data is recoverable.
- if the user data are determined to be recoverable (Yes in step S 126 ), the CPU 130 includes the data recovered by the parity code, the RS code, or the like in a read response, and transmits the recovered user data to the host 200 in accordance with the read request (step S 128 ).
- otherwise, the CPU 130 determines whether or not the data stored in the removal target NAND module 110 have been transferred to another storage region (step S 130 ). If it is determined that the data stored in the removal target NAND module 110 have been transferred to another storage region (Yes in step S 130 ), the CPU 130 reads out the data from the storage region of the second area into which the data were transferred in step S 108 , and returns a read response including the data to the host 200 (step S 132 ).
- if it is determined that the data stored in the removal target NAND module 110 have not been transferred to another storage region (No in step S 130 ), the CPU 130 returns a read response including a read error to the host 200 (step S 134 ).
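- the read path of FIG. 10 can be sketched as follows; the helper callables are hypothetical stand-ins for the recovery, transfer-tracking, and read mechanisms described above.

```python
def read_after_removal(logical, recoverable, recover_fn, transfer_map, read_fn):
    if recoverable(logical):        # steps S126/S128: rebuild from redundant data
        return ("ok", recover_fn(logical))
    if logical in transfer_map:     # steps S130/S132: read the transferred copy
        return ("ok", read_fn(transfer_map[logical]))
    return ("read_error", None)     # step S134: nothing left to read

# Toy usage: LBA 5 was transferred before removal, LBA 6 was not and is lost.
transfer_map = {5: ("NM-2", 0x40)}
status, _ = read_after_removal(
    6,
    recoverable=lambda lba: False,
    recover_fn=lambda lba: b"",
    transfer_map=transfer_map,
    read_fn=lambda location: b"copied",
)
print(status)   # read_error
```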
- FIG. 11 is a flowchart showing the flow of the redundancy recovery processing according to the present embodiment.
- the CPU 130 specifies the logical block that includes the physical block of the NAND module 110 that was removed from the interface 100 b , as a garbage collection source logical block, based on information such as the block correspondence table (not shown) (step S 140 ).
- the CPU 130 performs garbage collection as redundancy recovery processing, with respect to the logical block specified at step S 140 as the garbage collection source logical block (step S 142 ).
- the garbage collection module (The CPU 130 ) for recovery of redundancy specifies, as the garbage collection source logical block, the logical block corresponding to the physical block of the NAND module 110 removed from the interface 100 b . Then, the CPU 130 writes the user data corresponding to the garbage collection source logical block to the physical block corresponding to the garbage collection destination logical block. Further, the CPU 130 releases the garbage collection source logical block. In this case, if it is necessary to read user data that were stored in the physical block of the removed NAND module 110 , it is possible to recover the desired user data by a parity code, an RS code, or the like of that logical block.
- the CPU 130 determines whether or not garbage collection has been completed for all of the logical blocks identified in step S 140 (step S 144 ). If garbage collection has not been completed for all of the logical blocks (No in step S 144 ), the CPU 130 continues garbage collection. If garbage collection has been completed for all of the logical blocks (Yes in step S 144 ), the CPU 130 sets the garbage collection source logical blocks as free logical blocks (step S 146 ). Specifically, the table manager 134 updates the collected garbage collection source logical blocks as free logical blocks, in which no valid user data are stored. More specifically, this is implemented by updating information registered in a logical block state management table (not shown) that manages the state of logical blocks.
- FIG. 12 schematically illustrates garbage collection in redundancy recovery processing of the present embodiment.
- the CPU 130 specifies the logical block corresponding to the physical block PB- 5 as a garbage collection source logical block.
- the CPU 130 copies and writes data corresponding to the garbage collection source logical block (including valid data that had been stored into the physical block PB- 5 ) into the garbage collection destination logical block, and collects the garbage collection source logical block as a free logical block.
- the CPU 130 writes data into the collected free logical block in response to a write request or the like received subsequently.
- the CPU 130 writes data into the collected free logical block, treating it as a defective logical block, and calculates and writes the RS code based on the written data.
- a defective logical block is a logical block that includes a physical block from which data cannot be read out normally.
- the physical block from which data cannot be read out normally may be, not only one in which a logical block no longer exists because of the removal of a NAND module 110 , but also a bad physical block with respect to which reading and writing of data cannot be done correctly.
- a logical block that includes no physical block from which data cannot be read out normally, that is, a non-defective logical block, will be referred to as a full logical block.
- the CPU 130 calculates and writes the RS code as if a prescribed value, for example a data value of 0, is written in the entire storage region of the physical block PB- 5 . That is, even if the physical block PB- 5 corresponding to the removal target NAND module 110 does not actually exist, the CPU 130 allocates a dummy bad physical block (virtual physical block) and stores into the virtual physical block a prescribed value (for example, 0). This enables the CPU 130 to maintain the redundancy with respect to data stored in response to a future write request or the like regarding a collected free logical block.
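- a small sketch of this dummy-block treatment, using XOR parity in place of the RS code: the missing physical block is replaced by a virtual block filled with the prescribed value 0, so the redundant data stay consistent.

```python
BLOCK_SIZE = 4

def parity_with_dummy(present_blocks, missing_count):
    """Parity over the present blocks plus zero-filled virtual blocks."""
    blocks = list(present_blocks) + [bytes(BLOCK_SIZE)] * missing_count
    parity = bytearray(BLOCK_SIZE)
    for blk in blocks:
        for i, b in enumerate(blk):
            parity[i] ^= b
    return bytes(parity)

# Three real data blocks; the fourth belongs to the removed module and is
# treated as a zero-filled virtual physical block.
print(parity_with_dummy(
    [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\x05\x05\x05\x05"],
    missing_count=1).hex())
```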
- the CPU 130 determines whether there are excess physical blocks in the overall physical block space of the storage system 100 (step S 148 ).
- the physical blocks of the NAND modules 110 may include a physical block set as an unused area into which data cannot be stored, because a read error is detected in the physical block at the manufacturing stage. For that reason, there may be a physical block that does not constitute any logical block. Such a physical block is called an excess physical block. If there is an excess physical block (Yes in step S 148 ), the CPU 130 can assign the excess physical block in place of a physical block of the NAND module 110 removed from the interface 100 b , in the logical block collected as a free logical block through the garbage collection in step S 146 , and the process proceeds to step S 150 . If there is no excess physical block (No in step S 148 ), the process ends.
- in step S 150 , the CPU 130 includes the excess physical block into the logical block collected in step S 146 . If the excess physical block is included in the collected logical block, because a physical block actually exists there, there is no need to treat the logical block as a defective logical block when a write request is received subsequently.
- An over-provisioning ratio is the proportion of available storage capacity, that is, unused (free) areas, within the storage capacity of the NAND modules 110 that is used for data storage.
- An unused area is a storage region in which no valid data are stored. More specifically, a storage region in which no valid data are stored may be a storage region in which no data have been written after block erasure processing, or a storage region in which valid data became invalid by updating of the address conversion table 142 .
- the storage system 100 for example, periodically calculates the over-provisioning ratio and manages unused areas so that the over-provisioning ratio does not fall below a prescribed threshold. By carrying out the over-provisioning management processing, the storage system 100 can maintain operational continuity of the NAND modules 110 by reducing the period of time during which free physical blocks cannot be detected when the storage controller 136 writes write data.
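- under one plausible reading of the definition above, the over-provisioning ratio can be computed as the unused share of the usable capacity, as in the sketch below; the 10% threshold is an assumed example within the 5 to 20% range mentioned later.

```python
def over_provisioning_ratio(total_capacity, valid_data_size):
    """Share of the usable capacity that currently holds no valid data."""
    return (total_capacity - valid_data_size) / total_capacity

OP_THRESHOLD = 0.10       # assumed value

total_gb = 1000           # capacity remaining after a module was removed
valid_gb = 930
ratio = over_provisioning_ratio(total_gb, valid_gb)
print(ratio, ratio < OP_THRESHOLD)   # 0.07 True -> free up or delete data
```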
- FIG. 13 is a flowchart showing a process of the over-provisioning management processing according to the present embodiment.
- the over-provisioning management processing is performed, for example, in parallel with the above-described redundancy recovery processing when a removal target NAND module 110 is removed from the data bus 100 a.
- the CPU 130 determines whether or not the current over-provisioning (OP) ratio is lower than a prescribed threshold (step S 160 ).
- the prescribed threshold is set beforehand, and is, for example, approximately 5 to 20%. If a NAND module 110 is removed from the data bus 100 a , the overall storage region of the storage system 100 is reduced by the amount of physical blocks corresponding to the storage region of the removal target NAND module 110 . For this reason, if a removal target NAND module 110 is removed, the over-provisioning ratio may be reduced. If the CPU 130 determines that the current over-provisioning ratio is not below a prescribed threshold (No in step S 160 ), the process ends.
- in step S 162 , the CPU 130 determines whether or not to stop reception of write requests. If the CPU 130 determines to stop reception of write requests (Yes in step S 162 ), reception of write requests is stopped. If the CPU 130 stops the reception of write requests while the above-described redundancy recovery processing is still being carried out, subsequent garbage collection regarding logical blocks that include physical blocks of the removal target NAND module 110 may be stopped.
- then, the CPU 130 determines whether or not a write request has been received (step S 164 ). If a write request has been received (Yes in step S 164 ), the CPU 130 discards the write request (step S 166 ). If a write request has not been received (No in step S 164 ), the process ends.
- the CPU 130 determines data to be deleted from valid data, based on a prescribed rule (step S 168 ).
- the CPU 130 can increase the over-provisioning ratio by the amount of the data to be deleted.
- the “deleting” of data includes, for example, the deletion of the logical address associated with that data from the address conversion table 142 .
- “deleting” of data may include the erasure of a flag that is applied to indicate valid data, or the application of a flag that indicates invalid data.
- the CPU 130 notifies the host 200 that data stored in a NAND module 110 are to be deleted (step S 170 ). It is desirable that the CPU 130 transmit to the host 200 information that identifies the data to be deleted, such as an LBA, a key-value, or the like.
- the CPU 130 deletes the data determined to be deleted in step S 168 (step S 172 ). Before deleting the data that is determined to be deleted in step S 168 , the CPU 130 may copy the data and store the data into a storage region of an external device separate from the storage system 100 . If the CPU 130 knows communication information such as the IP address of the external device into which the data to be deleted can be stored, and the data to be deleted are determined in step S 168 , the CPU 130 may transmit the data to be deleted to the external device. This enables the CPU 130 to store data to be deleted to an external device.
- the CPU 130 reads at least part of the data to be deleted from the NAND module 110 and transmits the part of the data to a storage region of an external device different from the NAND module 110 , before the CPU 130 deletes the data stored in the NAND module 110 .
- in this case, the CPU 130 can obtain the data requested by a read request as the data stored in step S 132 in FIG. 10 . If the CPU 130 receives a read request for deleted data that are not stored anywhere, the CPU 130 returns either a read error or a prescribed value in step S 134 in FIG. 10 .
- in addition to deleting the data to be deleted, the CPU 130 performs processing so that a write request that designates the LBA corresponding to the physical block in which the deleted data have been written is not accepted.
- the processing so that a write request that designates the LBA corresponding to the deleted data is not accepted is, for example, the first through third processing described below.
- the first processing is processing of determining data stored in a physical block corresponding to an LBA deleted from the LBA conversion table 142 a shown in FIG. 4 , as data to be deleted.
- FIG. 14 illustrates the LBA space in the present embodiment.
- the CPU 130 determines the LBAs identified as numbers 2 and 4 as the LBA to be deleted, and outputs these LBAs to the table manager 134 .
- the CPU 130 determines the user data stored in the physical block PB- 42 of the logical block LB- 2 and the physical block PB- 2 N of the logical block LB-N as data to be deleted.
- the table manager 134 treats the LBAs of the numbers 2 and 4 as not existing among the LBAs of the numbers 1 to N.
- if a write request that designates these LBAs is received, the table manager 134 can notify the host 200 of a write error indicating that the LBAs corresponding to the write request do not exist.
- user data corresponding to the LBAs stored in the physical block PB- 42 of the logical block LB- 2 and the physical block PB- 2 N of the logical block LB-N become invalid data, and it is possible to increase the over-provisioning ratio.
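- the first processing can be sketched as follows: deleting selected LBAs from a toy conversion table makes their data invalid, and later writes to those LBAs are rejected. The table layout and helper names are illustrative only.

```python
lba_table = {1: ("LB-1", "PB-11"), 2: ("LB-2", "PB-42"),
             3: ("LB-1", "PB-31"), 4: ("LB-N", "PB-2N")}

def delete_lbas(table, lbas_to_delete):
    for lba in lbas_to_delete:
        table.pop(lba, None)   # data at the mapped physical block become invalid

def write(table, lba, data):
    if lba not in table:
        return "write error: LBA does not exist"
    return "ok"

delete_lbas(lba_table, [2, 4])
print(write(lba_table, 2, b"new"))   # write error: LBA does not exist
```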
- when the CPU 130 determines the data to be deleted, it is desirable that a distributed plurality of LBAs (part of the LBAs) be specified based on the importance level of the user data stored in the physical blocks corresponding to the LBAs.
- the CPU 130 may set the importance level of user data higher as the frequency of readout or the frequency of updating becomes higher.
- the CPU 130 may receive information in which the LBA of user data and the importance level are associated by the host 200 or the storage controller 136 . In this case, the CPU 130 can determine the data to be deleted, based on the received importance level.
- the CPU 130 may set valid data included in the garbage collection source logical block as the data to be deleted.
- FIG. 15 is a flowchart illustrating a process of processing to determine data to be deleted according to the present embodiment.
- the CPU 130 acquires data from a physical block corresponding to a garbage collection source logical block (step S 180 ).
- the CPU 130 determines whether or not the data stored in the physical block corresponding to the garbage collection source logical block is valid data (step S 182 ). Then, the CPU 130 , for example, reads out from the address conversion table 142 a a value indicating whether the data corresponding to the garbage collection source logical block are valid or invalid and determines whether or not the data are valid. If the data stored in the physical block corresponding to the garbage collection source logical block are valid data (Yes in step S 182 ), the CPU 130 determines that the valid data are the data to be deleted (step S 184 ). If the data stored in the physical block corresponding to the garbage collection source logical block is invalid data (No in step S 182 ), the process returns to step S 180 and the CPU 130 acquires the data corresponding to the next garbage collection source logical block.
- The second process reduces the maximum value of the logical address in the logical address (LBA) space of the storage system 100 when the storage system 100 includes the LBA conversion table 142a shown in FIG. 4. That is, the second process reduces the range of values that the logical addresses can take.
- FIG. 16 illustrates another control of the LBA space according to the present embodiment.
- For example, the CPU 130 reduces the maximum LBA in the LBA space to N-3.
- That is, the CPU 130 deletes the LBAs numbered N-2, N-1, and N from the LBA space and determines that the data stored in the physical blocks corresponding to the LBAs numbered N-2, N-1, and N are the data to be deleted. If the CPU 130 receives a read request or a write request designating one of the LBAs numbered N-2, N-1, or N, the CPU 130 notifies the host 200 of an error, because these LBAs are no longer in the address conversion table 142a.
- The user data corresponding to the LBAs numbered N-2, N-1, and N, which are stored in the physical block PB-22 of the logical block LB-2 and the physical blocks PB-2N and PB-4N of the logical block LB-N, become invalid data, and the over-provisioning ratio can be increased as a result.
- The user data managed in the LBA space might have a higher importance level as the number specifying the LBA in the LBA space decreases. For that reason, by reducing the maximum value of the LBA space, the CPU 130 can avoid determining that user data of a high importance level among the user data managed in the LBA space are data to be deleted.
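- A minimal sketch of the second process, under the same simplified-table assumption: the maximum LBA is lowered, requests beyond the new maximum are answered with an error, and the freed physical blocks are treated as invalid data (the class and method names are illustrative).

```python
# Sketch of the second process: shrink the range of valid LBAs from 1..N to
# 1..N-3, so that requests designating the removed LBAs are rejected and the
# corresponding physical blocks become invalid data.

class LbaSpace:
    def __init__(self, max_lba, mapping):
        self.max_lba = max_lba
        self.mapping = mapping            # LBA -> physical block
        self.invalid_blocks = set()

    def shrink(self, new_max):
        for lba in range(new_max + 1, self.max_lba + 1):
            block = self.mapping.pop(lba, None)
            if block is not None:
                self.invalid_blocks.add(block)   # over-provisioning increases
        self.max_lba = new_max

    def access(self, lba):
        if lba < 1 or lba > self.max_lba:
            return "ERROR_LBA_OUT_OF_RANGE"      # notify the host of an error
        return self.mapping.get(lba)

N = 8
space = LbaSpace(N, {lba: f"PB-{lba}" for lba in range(1, N + 1)})
space.shrink(N - 3)                              # LBAs N-2, N-1, N are removed
assert space.access(N) == "ERROR_LBA_OUT_OF_RANGE"
```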
- The third process deletes data corresponding to an arbitrary key in the key-value address conversion table 142b shown in FIG. 5.
- FIG. 17 illustrates the control of a key-value logical address space according to the present embodiment.
- For example, the CPU 130 determines the second and the fourth keys of the address conversion table as part of the keys to be deleted and outputs the determined keys to the table manager 134.
- In this case, the CPU 130 determines that the user data stored in the physical block PB-42 of the logical block LB-2 and the physical block PB-2N of the logical block LB-N are to be deleted.
- The table manager 134 can then treat the data corresponding to the second and the fourth keys in the address conversion table as having been deleted.
- The user data corresponding to these keys, which were stored in the physical block PB-42 of the logical block LB-2 and the physical block PB-2N of the logical block LB-N, become invalid data, and the over-provisioning ratio can be increased as a result.
- If a request to write additional data is subsequently received, the CPU 130 can notify the host 200 of an error indicating that there is insufficient writable capacity, and the over-provisioning ratio can be maintained as a result.
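- A minimal sketch of the third process, assuming an ordinary dictionary stands in for the key-value address conversion table 142b; the `KeyValueTable` class and its capacity check are illustrative assumptions.

```python
# Sketch of the third process: delete arbitrary keys from the key-value
# address conversion table; the physical blocks they pointed to become
# invalid data, and further puts can be refused while capacity is short.

class KeyValueTable:
    def __init__(self, mapping, writable_capacity):
        self.mapping = dict(mapping)       # key -> physical block
        self.invalid_blocks = set()
        self.writable_capacity = writable_capacity

    def delete_keys(self, keys):
        for key in keys:
            block = self.mapping.pop(key, None)
            if block is not None:
                self.invalid_blocks.add(block)

    def put(self, key, size):
        if size > self.writable_capacity:
            # Corresponds to notifying the host that there is
            # insufficient writable capacity.
            return "ERROR_INSUFFICIENT_CAPACITY"
        self.writable_capacity -= size
        self.mapping[key] = f"PB-for-{key}"
        return "OK"

table = KeyValueTable({"k1": "PB-11", "k2": "PB-42", "k3": "PB-13", "k4": "PB-2N"},
                      writable_capacity=0)
table.delete_keys(["k2", "k4"])            # the 2nd and 4th keys are deleted
assert table.put("k5", 4096) == "ERROR_INSUFFICIENT_CAPACITY"
```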
- When the CPU 130 determines the data to be deleted, it is desirable that the keys be specified based on the importance level of the user data stored in the physical blocks corresponding to the keys.
- For example, the CPU 130 sets the importance level higher as the frequency of readout or the frequency of updating of the user data increases.
- Alternatively, the CPU 130 may receive, from the host 200 or the storage controller 136, information in which the keys of user data are associated with importance levels. In that case, the CPU 130 can determine the data to be deleted based on the received importance levels. Alternatively, in order to simplify the processing to determine the data to be deleted, the CPU 130 may determine that valid data included in the garbage collection source logical block are the data to be deleted. Because this is the same type of processing as shown in FIG. 15, its details will not be repeated.
- Alternatively, the CPU 130 may select keys stored at the end of the address conversion table as part of the keys to be deleted. Because the processing is the same as that described above, except for the locations in the address conversion table at which the keys are stored, its details will not be repeated.
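- Selecting the keys stored at the end of the address conversion table can be sketched simply, assuming an insertion-ordered dictionary stands in for the table; the helper name and the count are illustrative.

```python
# Sketch: take the keys stored at the end of the (ordered) address
# conversion table as the deletion candidates.

def keys_at_end(table, count):
    """Return the last `count` keys in table order."""
    keys = list(table.keys())              # dicts preserve insertion order
    return keys[-count:]

table = {"k1": "PB-11", "k2": "PB-42", "k3": "PB-13", "k4": "PB-2N"}
print(keys_at_end(table, 2))               # -> ['k3', 'k4']
```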
- FIG. 18 is a flowchart showing the flow of the process of mounting a NAND module 110 according to the present embodiment.
- The CPU 130 determines whether or not a NAND module 110 has been connected (attached) to the storage system 100 (step S200).
- If the NAND module 110 is electrically connected to the CPU 130 (Yes in step S200), the CPU 130 determines that the NAND module 110 has been mounted to the storage system 100. If the NAND module 110 is not electrically connected to the CPU 130 (No in step S200), the CPU 130 waits.
- The CPU 130 then determines whether or not the proportion of defective logical blocks with respect to the total number of logical blocks of the storage system 100 exceeds a prescribed threshold (step S202).
- The prescribed threshold is a value set beforehand and can be changed by the administrator of the storage system 100.
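- The attach detection and the threshold check of steps S200 and S202 can be sketched as follows; the polling loop, the ratio calculation, and the example threshold value are simplified assumptions.

```python
# Sketch of steps S200-S202: wait until a NAND module is electrically
# connected, then check whether the proportion of defective logical blocks
# exceeds the prescribed threshold.

import time

def wait_for_attach(is_connected, poll_interval=0.1):
    while not is_connected():              # step S200: No -> keep waiting
        time.sleep(poll_interval)

def defective_ratio_exceeds(num_defective, num_total, threshold):
    """Step S202: compare the defective-logical-block ratio to the threshold."""
    return (num_defective / num_total) > threshold

# Example with the threshold set by the administrator to 10%.
print(defective_ratio_exceeds(num_defective=3, num_total=20, threshold=0.10))
# -> True (15% > 10%), so the rebuild of step S204 would be performed
```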
- If the proportion exceeds the prescribed threshold (Yes in step S202), the CPU 130 allocates a physical block of the NAND module 110 newly attached to the storage system 100 in place of a physical block that is included in a defective logical block and is incapable of performing normal data reading. By carrying out this process, the CPU 130 rebuilds a full logical block that does not include any physical block incapable of performing normal data reading (step S204). When a full logical block has been rebuilt, the CPU 130 changes the address conversion table 142 and the block correspondence table (not shown) with respect to the rebuilt full logical block.
- In this case, the CPU 130 need not recalculate the RS code or the like. If a write request is received after the logical block has been collected as a free logical block through garbage collection, the CPU 130 can designate the physical block allocated in place of the physical block incapable of performing normal data reading as the physical block for data writing. In this manner, when a physical block of a NAND module 110 newly attached to the storage system 100 is allocated, an actual physical block for data writing exists, and it is therefore not necessary to treat the corresponding logical block as a defective logical block, as described above, when a write request or the like is subsequently received.
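- The rebuild of step S204 can be sketched as follows, assuming a block correspondence table that maps each logical block to its member physical blocks; the `rebuild_full_logical_block` helper and the block names are assumptions.

```python
# Sketch of step S204: substitute a physical block of the newly attached
# NAND module for the physical block that cannot be read normally, turning a
# defective logical block back into a full logical block.

def rebuild_full_logical_block(block_table, logical_block, bad_physical_block,
                               new_physical_block):
    """block_table: logical block -> list of member physical blocks."""
    members = block_table[logical_block]
    index = members.index(bad_physical_block)
    members[index] = new_physical_block     # allocate the new module's block
    # The address conversion table and block correspondence table entries for
    # the rebuilt full logical block would be updated here (step S206).
    return block_table

block_table = {"LB-2": ["PB-12", "PB-22", "PB-32", "PB-42"]}
rebuild_full_logical_block(block_table, "LB-2", "PB-22", "PB-NEW-1")
print(block_table["LB-2"])   # -> ['PB-12', 'PB-NEW-1', 'PB-32', 'PB-42']
```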
- Alternatively, the CPU 130 may allocate a physical block of a newly attached NAND module 110 in place of a virtual physical block corresponding to a physical block for which garbage collection has not yet been completed during the redundancy recovery processing described with reference to FIG. 11.
- Alternatively, the CPU 130 may implement the redundancy recovery processing by restoring the data that had been stored in a removed physical block and writing the restored data into a physical block of the newly attached NAND module 110.
- In step S206, the CPU 130 updates the address conversion table 142 and the block correspondence table (not shown) with respect to the newly built full logical block.
- The storage system 100 can thus increase the over-provisioning ratio by replacing a virtual physical block of a defective logical block with a physical block of a newly attached NAND module 110, or by establishing a new full logical block. For that reason, the determination of whether or not the over-provisioning ratio exceeds the threshold in step S160 switches to negative, i.e., No. As a result, in the storage system 100 the setting to stop reception of write requests can be released, enabling write data to be written into a NAND module 110 in response to a new write request.
- Thereafter, the writing of write data to the deleted LBAs can be restarted. If, as shown in FIG. 14, a distributed plurality of LBAs has been deleted and a new write request designating a deleted LBA is then received, the CPU 130 can write the write-requested data into the physical block that is the write destination. If, as shown in FIG. 16, the maximum value of the LBA space has been reduced, the maximum value is then returned to the value before the reduction, and a new write request designating one of the restored LBAs is received, the CPU 130 can write the write-requested data into the physical block that is the write destination.
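- Releasing the write stop and resuming writes to the previously deleted LBAs can be sketched as follows; the `WriteGate` class and its method names are illustrative assumptions.

```python
# Sketch: once the over-provisioning ratio has been raised again (e.g., by
# attaching a new NAND module), the maximum value of the LBA space is
# restored and writes to the restored LBAs are accepted again.

class WriteGate:
    def __init__(self, max_lba):
        self.max_lba = max_lba
        self.accept_writes = False          # reception of write requests stopped

    def release(self, restored_max_lba):
        """Restore the LBA space maximum and accept write requests again."""
        self.max_lba = restored_max_lba
        self.accept_writes = True

    def write(self, lba, data):
        if not self.accept_writes or lba > self.max_lba:
            return "WRITE_REJECTED"
        return "OK"                          # data go to the write-destination block

gate = WriteGate(max_lba=5)                  # LBA space was shrunk earlier
gate.release(restored_max_lba=8)             # maximum returned to its old value
assert gate.write(8, b"payload") == "OK"
```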
- If the storage system 100 includes the key-value address conversion table 142b shown in FIG. 5 and a request to write additional data is received from the host 200, it becomes possible to write the data requested to be written into the corresponding physical block, instead of notifying the host 200 that there is insufficient writable capacity.
- As described above, the storage system 100 includes a plurality of interfaces 100b to and from which NAND modules 110 that store data can be attached and detached, and a CPU 130 that changes the correspondence relationship between physical addresses and logical addresses in the address conversion table 142. Even if a NAND module 110 is removed from an interface 100b, it is therefore possible to avoid a situation in which writing and reading of data become impossible.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/112,314 US10417124B2 (en) | 2015-09-30 | 2018-08-24 | Storage system that tracks mapping to a memory module to be detached therefrom |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562234758P | 2015-09-30 | 2015-09-30 | |
US15/063,203 US10095423B2 (en) | 2015-09-30 | 2016-03-07 | Storage system that tracks mapping to a memory module to be detached therefrom |
US16/112,314 US10417124B2 (en) | 2015-09-30 | 2018-08-24 | Storage system that tracks mapping to a memory module to be detached therefrom |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/063,203 Continuation US10095423B2 (en) | 2015-09-30 | 2016-03-07 | Storage system that tracks mapping to a memory module to be detached therefrom |
Publications (2)
Publication Number | Publication Date |
---|---|
US20180364924A1 (en) | 2018-12-20 |
US10417124B2 (en) | 2019-09-17 |
Family
ID=58409334
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/063,203 Active 2036-06-30 US10095423B2 (en) | 2015-09-30 | 2016-03-07 | Storage system that tracks mapping to a memory module to be detached therefrom |
US16/112,314 Active US10417124B2 (en) | 2015-09-30 | 2018-08-24 | Storage system that tracks mapping to a memory module to be detached therefrom |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/063,203 Active 2036-06-30 US10095423B2 (en) | 2015-09-30 | 2016-03-07 | Storage system that tracks mapping to a memory module to be detached therefrom |
Country Status (1)
Country | Link |
---|---|
US (2) | US10095423B2 (en) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102545166B1 (en) * | 2016-07-26 | 2023-06-19 | 삼성전자주식회사 | Host and Storage System securely deleting files and Operating Method of Host |
US10289550B1 (en) | 2016-12-30 | 2019-05-14 | EMC IP Holding Company LLC | Method and system for dynamic write-back cache sizing in solid state memory storage |
US11069418B1 (en) | 2016-12-30 | 2021-07-20 | EMC IP Holding Company LLC | Method and system for offline program/erase count estimation |
US10338983B2 (en) * | 2016-12-30 | 2019-07-02 | EMC IP Holding Company LLC | Method and system for online program/erase count estimation |
US10290331B1 (en) | 2017-04-28 | 2019-05-14 | EMC IP Holding Company LLC | Method and system for modulating read operations to support error correction in solid state memory |
US10403366B1 (en) | 2017-04-28 | 2019-09-03 | EMC IP Holding Company LLC | Method and system for adapting solid state memory write parameters to satisfy performance goals based on degree of read errors |
US11093408B1 (en) | 2018-04-26 | 2021-08-17 | Lightbits Labs Ltd. | System and method for optimizing write amplification of non-volatile memory storage media |
US11074173B1 (en) * | 2018-04-26 | 2021-07-27 | Lightbits Labs Ltd. | Method and system to determine an optimal over-provisioning ratio |
CN110895513B (en) * | 2018-09-12 | 2024-09-17 | 华为技术有限公司 | System garbage recycling method and garbage recycling method in solid state disk |
US11061598B2 (en) * | 2019-03-25 | 2021-07-13 | Western Digital Technologies, Inc. | Optimized handling of multiple copies in storage management |
US11907136B2 (en) * | 2020-03-16 | 2024-02-20 | Intel Corporation | Apparatuses, systems, and methods for invalidating expired memory |
TWI749685B (en) * | 2020-08-05 | 2021-12-11 | 宇瞻科技股份有限公司 | Memory storage device |
US11449419B2 (en) * | 2020-08-17 | 2022-09-20 | Micron Technology, Inc. | Disassociating memory units with a host system |
CN112637327B (en) * | 2020-12-21 | 2022-07-22 | 北京奇艺世纪科技有限公司 | Data processing method, device and system |
US11385798B1 (en) | 2020-12-28 | 2022-07-12 | Lightbits Labs Ltd. | Method and system for application aware, management of write operations on non-volatile storage |
CN117687580B (en) * | 2024-02-02 | 2025-01-14 | 深圳曦华科技有限公司 | A Flash data management system, micro control unit and vehicle |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8117376B2 (en) | 2007-02-06 | 2012-02-14 | Hitachi, Ltd. | Storage system and control method thereof |
US20090063895A1 (en) | 2007-09-04 | 2009-03-05 | Kurt Smith | Scaleable and maintainable solid state drive |
US20110202812A1 (en) | 2010-02-12 | 2011-08-18 | Kabushiki Kaisha Toshiba | Semiconductor memory device |
US20130232391A1 (en) | 2010-02-12 | 2013-09-05 | Kabushiki Kaisha Toshiba | Semiconductor memory device |
US20140310576A1 (en) | 2010-02-12 | 2014-10-16 | Kabushiki Kaisha Toshiba | Semiconductor memory device |
US20140310575A1 (en) | 2010-02-12 | 2014-10-16 | Kabushiki Kaisha Toshiba | Semiconductor memory device |
US20110214033A1 (en) | 2010-03-01 | 2011-09-01 | Kabushiki Kaisha Toshiba | Semiconductor memory device |
US20120246383A1 (en) | 2011-03-24 | 2012-09-27 | Kabushiki Kaisha Toshiba | Memory system and computer program product |
US9013920B2 (en) | 2013-04-03 | 2015-04-21 | Western Digital Technologies, Inc. | Systems and methods of write precompensation to extend life of a solid-state memory |
US9268687B2 (en) | 2013-11-14 | 2016-02-23 | Phison Electronics Corp. | Data writing method, memory control circuit unit and memory storage apparatus |
Also Published As
Publication number | Publication date |
---|---|
US10095423B2 (en) | 2018-10-09 |
US20170090783A1 (en) | 2017-03-30 |
US20180364924A1 (en) | 2018-12-20 |
Legal Events
Code | Title | Description
---|---|---
FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS
STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED
STCF | Information on status: patent grant | Free format text: PATENTED CASE
AS | Assignment | Owner name: K.K. PANGEA, JAPAN; Free format text: MERGER;ASSIGNOR:TOSHIBA MEMORY CORPORATION;REEL/FRAME:055659/0471; Effective date: 20180801. Owner name: TOSHIBA MEMORY CORPORATION, JAPAN; Free format text: CHANGE OF NAME AND ADDRESS;ASSIGNOR:K.K. PANGEA;REEL/FRAME:055669/0401; Effective date: 20180801. Owner name: KIOXIA CORPORATION, JAPAN; Free format text: CHANGE OF NAME AND ADDRESS;ASSIGNOR:TOSHIBA MEMORY CORPORATION;REEL/FRAME:055669/0001; Effective date: 20191001
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 4