CN106776361B - Caching method and system for large-scale non-volatile storage medium
- Publication number: CN106776361B
- Application number: CN201710141173.9A
- Authority: CN (China)
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0893—Caches characterised by their organisation or structure
- G06F12/0897—Caches characterised by their organisation or structure with two or more cache hierarchy levels
Abstract
Description
Technical Field
The present invention relates to the technical field of data storage and retrieval, and in particular to a caching method and system for large-scale non-volatile storage media.
Background
Compared with traditional disks, solid-state drives based on non-volatile memory (NVM) offer many advantages, such as low power consumption, fast read and write speeds, and better shock resistance. In settings with high storage-performance requirements, such as data centers, the proportion of NVM-based devices in use has already exceeded that of disks.
In the era of big data, International Data Corporation predicts that in the five years from 2016 the total amount of data generated worldwide each year will grow by 40%, reaching 44 ZB by 2020. In solid-state disk systems based on NVM media, the drawbacks of traditional data organization and management are becoming apparent. As the amount of stored data grows, on the one hand, the metadata that manages the mapping between logical data and physical data becomes large, and the cache space and storage space it requires grow accordingly; on the other hand, cache and index management built on this metadata becomes harder to implement. In particular, as large-capacity three-dimensional NVM media are adopted in solid-state disk systems, traditional caching and indexing methods can no longer satisfy the demands of high-performance storage access and become one of the bottlenecks of data storage access performance.
In existing solid-state disk architectures, to remain compatible with the block-device access model of traditional host-side file systems, a Non-Volatile Memory Translation Layer (NVMTL) is usually designed in the solid-state disk software stack. It mainly comprises modules for address mapping, wear leveling, and garbage collection, among which address mapping is the core function of the NVMTL: it converts logical addresses from the file system into physical addresses on the NVM device. According to the granularity of the logical-to-physical mapping, NVMTL address mapping can be divided into page mapping, block mapping, and hybrid mapping.
In most NVMTL address mapping schemes, the entries relating logical addresses to physical addresses are organized and managed as a two-dimensional table. In large-capacity storage systems based on NVM media, retrieval performance is low when the data volume is large. In addition, some existing NVM solutions use a two-level mapping approach that caches only a frequently accessed subset of all mapping items; retrieval performance on a cache miss is poor, and storage access management is difficult to implement.
Summary of the Invention
In view of the technical problems in the background art, the present invention provides a caching method and system for large-scale non-volatile storage media.
The caching method for a large-scale non-volatile storage medium proposed by the present invention comprises the following steps:
S1. Determine from a user command request whether it is a target mapping item write command request or a target mapping item read command request; execute S2 when the command request is determined to be a target mapping item write command request, and execute S3 when it is determined to be a target mapping item read command request.
S2. Set the flag bit at the end of the target mapping item to 0, and then store the target mapping item in the H-Cache area according to its logical address prefix.
S3. Obtain the logical address in the target mapping item read command request and determine whether the H-Cache contains the target mapping item corresponding to the logical address. If it does, output the physical address corresponding to the target mapping item; if it does not, search the C-Cache for the target mapping item corresponding to the logical address.
If the C-Cache search succeeds, output the physical address corresponding to the target mapping item; if it fails, search the Mapping area for the target mapping item corresponding to the logical address and output the physical address corresponding to the target mapping item.
Step S2 specifically includes:
S20. Set the flag bit at the end of the target mapping item to 0.
S21. Store the target mapping item in the uncompressed prefix complete binary tree of the H-Cache area according to its logical address prefix, and determine whether the remaining space in the H-Cache area is less than a preset threshold A. If it is, migrate the existing mapping items in the H-Cache area and execute S22; if it is not, take no action.
S22. Migrate the mapping items in the H-Cache area that have not been accessed for the longest time into the compressed prefix complete binary tree of the C-Cache area, and determine whether the remaining space in the C-Cache area is less than a preset threshold B. If it is, sort the Blocks in the C-Cache area by access count and check, item by item, whether the flag bit of each mapping item in the least-accessed Block is 0 or 1: if the flag bit is 0, migrate the mapping item to the Mapping area and then set its flag bit to 1; if the flag bit is 1, delete the mapping item. If the remaining space is not less than the threshold, take no action.
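To illustrate the S20-S22 write path and the two-tier eviction it drives, here is a minimal Python sketch. It is not the patented implementation: an OrderedDict and a plain dict stand in for the uncompressed and compressed prefix complete binary trees, the per-Block access counters are approximated by per-entry counters, and the names and thresholds (`TwoTierMappingCache`, `h_capacity`, `c_capacity`) are illustrative assumptions.

```python
from collections import OrderedDict

FLAG_DIRTY, FLAG_CLEAN = 0, 1    # per the mapping item structure: 0 = new/updated, 1 = not updated

class TwoTierMappingCache:
    """Illustrative sketch of the S20-S22 write path; not the patented prefix trees."""

    def __init__(self, h_capacity, c_capacity, mapping_area):
        self.h_cache = OrderedDict()   # stands in for the H-Cache tree (kept in LRU order)
        self.c_cache = {}              # lpn -> (ppn, flag); stands in for the C-Cache tree
        self.c_hits = {}               # per-entry access counter (proxy for per-Block counts)
        self.h_capacity = h_capacity   # plays the role of threshold A (as an entry count)
        self.c_capacity = c_capacity   # plays the role of threshold B (as an entry count)
        self.mapping = mapping_area    # lpn -> ppn, the persistent Mapping area

    def write_entry(self, lpn, ppn):
        # S20: flag the item as new/updated; S21: place it in the H-Cache.
        self.h_cache[lpn] = (ppn, FLAG_DIRTY)
        self.h_cache.move_to_end(lpn)
        if len(self.h_cache) > self.h_capacity:        # H-Cache space below threshold A
            self._migrate_h_to_c()

    def _migrate_h_to_c(self):
        # S22: move the least recently used H-Cache item into the C-Cache.
        lpn, entry = self.h_cache.popitem(last=False)
        self.c_cache[lpn] = entry
        self.c_hits.setdefault(lpn, 0)
        if len(self.c_cache) > self.c_capacity:        # C-Cache space below threshold B
            self._evict_from_c()

    def _evict_from_c(self):
        # Victim = least-accessed item (the patent sorts whole Blocks by access count).
        victim = min(self.c_hits, key=self.c_hits.get)
        ppn, flag = self.c_cache.pop(victim)
        del self.c_hits[victim]
        if flag == FLAG_DIRTY:
            self.mapping[victim] = ppn                 # write back only new/updated items
        # FLAG_CLEAN items are simply dropped: the Mapping area already holds them

mapping_area = {}
cache = TwoTierMappingCache(h_capacity=2, c_capacity=2, mapping_area=mapping_area)
for lpn, ppn in [(0b0101, 40), (0b0110, 41), (0b0011, 42), (0b0001, 43), (0b0111, 44)]:
    cache.write_entry(lpn, ppn)
print(mapping_area)                                    # items evicted all the way back
```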
In step S21, before the target mapping item is stored in the uncompressed prefix complete binary tree of the H-Cache area according to its logical address prefix, the method further includes: building the uncompressed prefix complete binary tree of the H-Cache area by an uncompressed prefix complete binary tree construction method, which specifically includes:
S200. Create a Block as the root node and set its prefix to 0, where the number of mapping items the Block can store is 1.
S201. Set the first bit of the logical address of each mapping item in the H-Cache area that needs to be stored in the tree to 0, and store the first mapping item in the Block.
S202. When another new mapping item is stored in the Block and the Block is full, split the Block into two child nodes whose prefixes are set to 00 and 01 respectively. Then check, item by item, the first two bits of the logical address of each mapping item in the original Block node whose prefix was 0: transfer mapping items whose first two logical address bits are 00 into the child Block node with prefix 00, and transfer those whose first two bits are 01 into the child Block node with prefix 01. Finally, store the newly added mapping item in the corresponding child Block node according to the first two bits of its logical address.
S203. Repeat step S202 until all mapping items in the H-Cache area that need to be stored in the tree have been stored, and then complete the tree with virtual nodes to obtain an uncompressed prefix complete binary tree.
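The construction procedure S200-S203 can be sketched as follows in Python. The `Node` and `PrefixCompleteBinaryTree` names, the fixed address width, and the use of an in-memory dict per Block are assumptions for illustration; the H-Cache tree uses a Block capacity of 1, and the virtual-node padding of S203 is only noted in a comment because the sketch treats absent Blocks as empty.

```python
class Node:
    """One Block of the prefix complete binary tree; holds at most `capacity` mapping items."""
    def __init__(self, prefix, capacity):
        self.prefix = prefix            # bit-string prefix, e.g. "0", "01", "010"
        self.capacity = capacity        # 1 for the H-Cache tree (M > 1 for compressed trees)
        self.entries = {}               # logical-address bit string -> physical address
        self.left = self.right = None   # children with prefixes prefix+"0" and prefix+"1"

class PrefixCompleteBinaryTree:
    def __init__(self, address_bits, capacity=1):
        self.bits = address_bits
        self.root = Node("0", capacity)             # S200: root Block with prefix 0

    def _key(self, lpn):
        # S201: the stored logical address gets a leading 0 to match the root prefix.
        return "0" + format(lpn, "0{}b".format(self.bits))

    def insert(self, lpn, ppn):
        self._insert(self.root, self._key(lpn), ppn)

    def _insert(self, node, key, ppn):
        if node.left is not None:                   # already split: descend by the next bit
            child = node.left if key[len(node.prefix)] == "0" else node.right
            return self._insert(child, key, ppn)
        node.entries[key] = ppn
        if len(node.entries) <= node.capacity:
            return
        # S202: the Block is full, so split it and redistribute by the next address bit.
        depth = len(node.prefix)
        node.left = Node(node.prefix + "0", node.capacity)
        node.right = Node(node.prefix + "1", node.capacity)
        for k, v in node.entries.items():
            target = node.left if k[depth] == "0" else node.right
            self._insert(target, k, v)
        node.entries = {}
        # S203: padding with virtual nodes is omitted; absent Blocks are treated as empty.

tree = PrefixCompleteBinaryTree(address_bits=4, capacity=1)
for lpn, ppn in [(0b0101, 10), (0b0110, 11), (0b0011, 12)]:
    tree.insert(lpn, ppn)
print(tree.root.left.prefix, tree.root.right.prefix)    # -> 00 01
```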
In step S3, before determining whether the H-Cache contains the target mapping item corresponding to the logical address, the method includes: building the uncompressed prefix complete binary tree of the H-Cache area by the uncompressed prefix complete binary tree construction method, which specifically includes:
S210. Create a Block as the root node and set its prefix to 0, where the number of mapping items the Block can store is 1.
S211. Set the first bit of the logical address of each mapping item in the H-Cache area that needs to be stored in the tree to 0, and store the first mapping item in the Block.
S212. When another new mapping item is stored in the Block and the Block is full, split the Block into two child nodes whose prefixes are set to 00 and 01 respectively. Then check, item by item, the first two bits of the logical address of each mapping item in the original Block node whose prefix was 0: transfer mapping items whose first two logical address bits are 00 into the child Block node with prefix 00, and transfer those whose first two bits are 01 into the child Block node with prefix 01. Finally, store the newly added mapping item in the corresponding child Block node according to the first two bits of its logical address.
S213. Repeat step S212 until all mapping items in the H-Cache area that need to be stored in the tree have been stored, and then complete the tree with virtual nodes to obtain an uncompressed prefix complete binary tree.
In step S3, whether the H-Cache contains the target mapping item corresponding to the logical address is determined by an uncompressed prefix complete binary tree retrieval algorithm, which specifically includes:
S300. Receive the logical address of the target mapping item.
S301. According to the height k of the prefix complete binary tree of the H-Cache area, take the first k bits of the logical address of the target mapping item and use the binary tree Block index fast calculation algorithm to compute the number of the Block in which the target mapping item may reside.
S302. Check whether the Block stores the logical address of the target mapping item. If it does, return the physical address corresponding to the logical address of the target mapping item; if it does not, execute S303.
S303. Let k = k - 1 and repeat S301 and S302 until k = 1, then execute S304.
S304. Using the first bit of the logical address of the target mapping item and the binary tree Block index fast calculation algorithm, compute the number of the Block in which the target mapping item may reside and check whether that Block stores the logical address of the target mapping item. If it does, return the physical address corresponding to the logical address of the target mapping item; if it does not, the logical address of the target mapping item does not exist in the uncompressed prefix complete binary tree.
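A possible reading of the retrieval algorithm S300-S304 and of the binary tree Block index fast calculation is sketched below, assuming the Blocks of the complete binary tree are numbered 1, 2, 3, ... in level order; the exact index formula is not given in the text, so the one used here is an assumption consistent with that numbering.

```python
def block_index(prefix_bits):
    """Level-order index (root = 1) of the Block whose prefix is `prefix_bits`.

    For a complete binary tree whose root prefix is "0", the node with prefix
    b1 b2 ... bk sits at index 2**(k-1) + int(b2...bk, 2), which equals
    int("1" + b2...bk, 2). This is one plausible form of the Block-index fast
    calculation the text refers to, not a formula given in the patent.
    """
    return int("1" + prefix_bits[1:], 2)

def search_uncompressed(blocks, height, logical_addr_bits):
    """S300-S304 sketch: probe the longest prefix first, then shorten it bit by bit.

    `blocks` maps a level-order Block index to a dict {logical address bits:
    physical address}; `height` is the height k of the prefix complete binary tree.
    """
    for k in range(height, 0, -1):              # S301/S303: k, k-1, ..., 1
        idx = block_index(logical_addr_bits[:k])
        block = blocks.get(idx, {})             # virtual (empty) Blocks simply miss
        if logical_addr_bits in block:          # S302/S304: is the item in this Block?
            return block[logical_addr_bits]
    return None                                 # S304 miss: not present in the tree

# Toy tree of height 3 whose populated Blocks hold one mapping item each.
blocks = {
    5: {"00101": 10},    # Block with prefix "001"
    6: {"01000": 11},    # Block with prefix "010"
}
print(search_uncompressed(blocks, 3, "00101"))   # -> 10
print(search_uncompressed(blocks, 3, "01111"))   # -> None
```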
In step S3, before searching the C-Cache for the target mapping item corresponding to the logical address, the method further includes: building the compressed prefix complete binary tree of the C-Cache area by a compressed prefix complete binary tree construction method; and before searching the Mapping area for the target mapping item corresponding to the logical address and outputting the physical address corresponding to the target mapping item, it further includes: building the compressed prefix complete binary tree of the Mapping area by the compressed prefix complete binary tree construction method, which specifically includes:
S310. Create a Block as the root node and set its prefix to 0, where the number of mapping items the Block can store is M, with M > 1.
S311. Set the first bit of the logical address of each mapping item that needs to be stored in the tree to 0, store the first mapping item in the Block, and perform data compression.
S312. Decompress the Block and store new mapping items into it one by one. When the number of mapping items in the Block reaches the set threshold M, split the Block into two child nodes whose prefixes are set to 00 and 01 respectively. Then check, item by item, the first two bits of the logical address of each mapping item in the original Block node whose prefix was 0: transfer mapping items whose first two logical address bits are 00 into the child Block node with prefix 00, and transfer those whose first two bits are 01 into the child Block node with prefix 01. Finally, store the newly added mapping item in the corresponding child Block node according to the first two bits of its logical address.
S313. Repeat step S312 until all mapping items that need to be stored in the tree have been stored, and then complete the tree with virtual nodes to obtain a compressed prefix complete binary tree.
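The compressed-Block behaviour of S310-S313 (keep each Block compressed, decompress to insert, split when the item count reaches the threshold M) might be sketched as follows. The compression algorithm is not named in the text, so zlib over a JSON encoding is used purely as a placeholder, and `CompressedBlock` and the capacity value are illustrative.

```python
import json
import zlib

class CompressedBlock:
    """One Block of the compressed prefix complete binary tree (C-Cache / Mapping area).

    The payload stays compressed and is decompressed only to insert or look up
    mapping items, as in S311-S312. Keys are logical-address bit strings whose
    first bit is already forced to 0.
    """
    def __init__(self, prefix, capacity):
        self.prefix = prefix
        self.capacity = capacity                       # threshold M (> 1)
        self.payload = zlib.compress(json.dumps({}).encode())
        self.left = self.right = None

    def _load(self):
        return json.loads(zlib.decompress(self.payload).decode())

    def _store(self, entries):
        self.payload = zlib.compress(json.dumps(entries).encode())

    def insert(self, key, ppn):
        """Insert one mapping item; split into two child Blocks when over capacity."""
        if self.left is not None:                      # already split: descend by next bit
            child = self.left if key[len(self.prefix)] == "0" else self.right
            return child.insert(key, ppn)
        entries = self._load()                         # decompress, modify, recompress
        entries[key] = ppn
        if len(entries) <= self.capacity:              # (the text can also be read as
            self._store(entries)                       #  splitting exactly at M items)
            return
        # S312: over capacity, split by the next logical-address bit.
        depth = len(self.prefix)
        self.left = CompressedBlock(self.prefix + "0", self.capacity)
        self.right = CompressedBlock(self.prefix + "1", self.capacity)
        for k, v in entries.items():
            (self.left if k[depth] == "0" else self.right).insert(k, v)
        self._store({})                                # the split Block keeps no items

root = CompressedBlock("0", capacity=2)                # S310: root Block, prefix 0, size M=2
for lpn_bits, ppn in [("0010", 7), ("0011", 8), ("0100", 9)]:
    root.insert(lpn_bits, ppn)
```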
In step S3, whether the C-Cache contains the target mapping item corresponding to the logical address is searched by a compressed prefix complete binary tree retrieval algorithm, and whether the Mapping area contains the target mapping item corresponding to the logical address is likewise searched by the compressed prefix complete binary tree retrieval algorithm, which specifically includes:
S320. Receive the logical address of the target mapping item.
S321. According to the height k of the compressed prefix complete binary tree of the C-Cache area or the Mapping area, take the first k bits of the logical address of the target mapping item and use the binary tree Block index fast calculation algorithm to compute the number of the Block in which the target mapping item may reside.
S322. Check whether the Bloom filter of the Block contains the storage mark of the target mapping item. If it does, decompress the Block, return the physical address corresponding to the logical address of the target mapping item, and then recompress the Block; if it does not, execute S323.
S323. Let k = k - 1 and repeat S321 and S322 until k = 1, then execute S324.
S324. Using the first bit of the logical address of the target mapping item and the binary tree Block index fast calculation algorithm, compute the number of the Block in which the target mapping item may reside and check whether the Bloom filter of that Block contains the storage mark of the target mapping item. If it does, return the physical address corresponding to the logical address of the target mapping item; if it does not, the logical address of the target mapping item does not exist in the compressed prefix complete binary tree.
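Step S322's Bloom-filter gate (consult the per-Block Bloom filter before paying the decompression cost) could look like the sketch below. The filter size, the number of hash functions, and the re-check after decompression (needed because Bloom filters admit false positives) are assumptions not spelled out in the text.

```python
import hashlib
import json
import zlib

class BlockBloomFilter:
    """Minimal per-Block Bloom filter; the size and hash count are illustrative choices."""
    def __init__(self, size_bits=256, hashes=3):
        self.size = size_bits
        self.hashes = hashes
        self.bitmap = 0

    def _positions(self, key):
        for i in range(self.hashes):
            digest = hashlib.sha256("{}:{}".format(i, key).encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bitmap |= 1 << pos

    def may_contain(self, key):
        return all(self.bitmap >> pos & 1 for pos in self._positions(key))

def lookup_compressed_block(block_payload, bloom, key):
    """S322 sketch: consult the Bloom filter first; decompress only on a possible hit.

    `block_payload` is the zlib-compressed JSON dict of one Block. Because Bloom
    filters can report false positives, the decompressed Block is still checked.
    """
    if not bloom.may_contain(key):          # definite miss: skip decompression entirely
        return None
    entries = json.loads(zlib.decompress(block_payload).decode())
    return entries.get(key)                 # may be None on a false positive

entries = {"00101": 10, "00110": 11}
payload = zlib.compress(json.dumps(entries).encode())
bloom = BlockBloomFilter()
for k in entries:
    bloom.add(k)
print(lookup_compressed_block(payload, bloom, "00101"))   # -> 10
print(lookup_compressed_block(payload, bloom, "01111"))   # miss without decompressing
```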
In step S3, when the C-Cache is searched for the target mapping item corresponding to the logical address and the search succeeds, the process of outputting the physical address corresponding to the target mapping item specifically includes: recording the time of this search and, when the target mapping item corresponding to the logical address is retrieved again within a preset time, transferring the target mapping item to the H-Cache area.
In step S3, the process of searching the Mapping area for the target mapping item corresponding to the logical address and outputting the physical address corresponding to the target mapping item further includes: locating the Block in the Mapping area that contains the target mapping item corresponding to the logical address, backing up the target mapping item in that Block to the H-Cache area, and backing up the other mapping items in that Block to the C-Cache area.
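The Mapping-area hit path just described (back up the target mapping item to the H-Cache and its Block neighbours to the C-Cache) can be sketched as follows; plain dicts again stand in for the prefix trees, and the flag value used for the backed-up copies is an inference from the flag-bit semantics given elsewhere in the text.

```python
def handle_mapping_area_hit(mapping_block, target_lpn, h_cache, c_cache):
    """Sketch of the Mapping-area hit path: promote the target, back up its neighbours.

    `mapping_block` is the dict of mapping items of the Block in the Mapping area
    that contains the requested item. The target item is copied into the H-Cache
    and, exploiting access locality, the other items of the same Block are copied
    into the C-Cache.
    """
    FLAG_CLEAN = 1                                  # backed-up copies are not updates
    ppn = mapping_block[target_lpn]
    h_cache[target_lpn] = (ppn, FLAG_CLEAN)         # back up the requested item to H-Cache
    for lpn, other_ppn in mapping_block.items():
        if lpn != target_lpn:
            c_cache[lpn] = (other_ppn, FLAG_CLEAN)  # back up its neighbours to C-Cache
    return ppn                                      # physical address returned to the caller

h_cache, c_cache = {}, {}
block = {"00100": 20, "00101": 21, "00110": 22}      # one Block fetched from the Mapping area
print(handle_mapping_area_hit(block, "00101", h_cache, c_cache))   # -> 21
```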
A caching system for a large-scale non-volatile storage medium comprises:
a judgment module, configured to determine from a user command request whether it is a target mapping item write command request or a target mapping item read command request;
a write command module, configured to, when the judgment module determines that the command request is a target mapping item write command request, set the flag bit at the end of the target mapping item to 0 and then store the target mapping item in the H-Cache area according to its logical address prefix;
a read command module, configured to, when the judgment module determines that the command request is a target mapping item read command request, obtain the logical address in the target mapping item read command request and determine whether the H-Cache contains the target mapping item corresponding to the logical address; if it does, output the physical address corresponding to the target mapping item; if it does not, search the C-Cache for the target mapping item corresponding to the logical address;
and, if the C-Cache search succeeds, output the physical address corresponding to the target mapping item; if it fails, search the Mapping area for the target mapping item corresponding to the logical address and output the physical address corresponding to the target mapping item.
The write command module comprises a flag submodule, a judgment submodule, a processing submodule, and a write submodule.
The flag submodule is configured to set the flag bit at the end of the target mapping item to 0.
The judgment submodule is configured to determine whether the remaining space in the H-Cache area is less than a preset threshold A.
The processing submodule is configured to, when the judgment submodule's result is yes, store the target mapping item in the uncompressed prefix complete binary tree of the H-Cache area according to its logical address prefix, migrate the existing mapping items in the H-Cache area by moving those that have not been accessed for the longest time into the C-Cache area, and then determine whether the remaining space in the C-Cache area is less than a preset threshold B. If it is, sort the Blocks in the C-Cache area by access count and check whether the flag bit of each mapping item in the least-accessed Block is 0 or 1: if the flag bit is 0, write the mapping item to the Mapping area and then set its flag bit to 1; if the flag bit is 1, delete the mapping item. If the remaining space is not less than the threshold, take no action.
The write submodule is configured to, when the judgment submodule's result is no, store the target mapping item in the uncompressed prefix complete binary tree of the H-Cache area according to its logical address prefix.
The write command module is further configured to: before determining whether the H-Cache contains the target mapping item corresponding to the logical address, build the uncompressed prefix complete binary tree of the H-Cache area by the uncompressed prefix complete binary tree construction method, which specifically includes:
creating a Block as the root node and setting its prefix to 0, where the number of mapping items the Block can store is 1;
setting the first bit of the logical address of each mapping item in the H-Cache area that needs to be stored in the tree to 0, and storing the first mapping item in the Block;
when another new mapping item is stored in the Block and the Block is full, splitting the Block into two child nodes whose prefixes are set to 00 and 01 respectively, then checking, item by item, the first two bits of the logical address of each mapping item in the original Block node whose prefix was 0, transferring mapping items whose first two logical address bits are 00 into the child Block node with prefix 00 and those whose first two bits are 01 into the child Block node with prefix 01, and finally storing the newly added mapping item in the corresponding child Block node according to the first two bits of its logical address;
and, after all mapping items in the H-Cache area that need to be stored in the tree have been stored, completing the tree with virtual nodes to obtain an uncompressed prefix complete binary tree.
The read command module is specifically configured to: before determining whether the H-Cache contains the target mapping item corresponding to the logical address, build the uncompressed prefix complete binary tree of the H-Cache area by the uncompressed prefix complete binary tree construction method, which specifically includes:
creating a Block as the root node and setting its prefix to 0, where the number of mapping items the Block can store is 1;
setting the first bit of the logical address of each mapping item in the H-Cache area that needs to be stored in the tree to 0, and storing the first mapping item in the Block;
when another new mapping item is stored in the Block and the Block is full, splitting the Block into two child nodes whose prefixes are set to 00 and 01 respectively, then checking, item by item, the first two bits of the logical address of each mapping item in the original Block node whose prefix was 0, transferring mapping items whose first two logical address bits are 00 into the child Block node with prefix 00 and those whose first two bits are 01 into the child Block node with prefix 01, and finally storing the newly added mapping item in the corresponding child Block node according to the first two bits of its logical address;
after all mapping items in the H-Cache area that need to be stored in the tree have been stored, completing the tree with virtual nodes to obtain an uncompressed prefix complete binary tree;
and determining, by the uncompressed prefix complete binary tree retrieval algorithm, whether the H-Cache contains the target mapping item corresponding to the logical address, the algorithm specifically including:
S400. Receive the logical address of the target mapping item.
S401. According to the height k of the prefix complete binary tree of the H-Cache area, take the first k bits of the logical address of the target mapping item and use the binary tree Block index fast calculation algorithm to compute the number of the Block in which the target mapping item may reside.
S402. Check whether the Block stores the logical address of the target mapping item. If it does, return the physical address corresponding to the logical address of the target mapping item; if it does not, execute S403.
S403. Let k = k - 1 and repeat S401 and S402 until k = 1, then execute S404.
S404. Using the first bit of the logical address of the target mapping item and the binary tree Block index fast calculation algorithm, compute the number of the Block in which the target mapping item may reside and check whether that Block stores the logical address of the target mapping item. If it does, return the physical address corresponding to the logical address of the target mapping item; if it does not, the logical address of the target mapping item does not exist in the uncompressed prefix complete binary tree.
Before searching the C-Cache for the target mapping item corresponding to the logical address, the compressed prefix complete binary tree of the C-Cache area is built by the compressed prefix complete binary tree construction method; and before searching the Mapping area for the target mapping item corresponding to the logical address and outputting the physical address corresponding to the target mapping item, the compressed prefix complete binary tree of the Mapping area is built by the compressed prefix complete binary tree construction method, which includes:
creating a Block as the root node and setting its prefix to 0, where the number of mapping items the Block can store is M, with M > 1;
setting the first bit of the logical address of each mapping item that needs to be stored in the tree to 0, storing the first mapping item in the Block, and performing data compression;
decompressing the Block and storing new mapping items into it one by one; when the number of mapping items in the Block reaches the set threshold M, splitting the Block into two child nodes whose prefixes are set to 00 and 01 respectively, then checking, item by item, the first two bits of the logical address of each mapping item in the original Block node whose prefix was 0, transferring mapping items whose first two logical address bits are 00 into the child Block node with prefix 00 and those whose first two bits are 01 into the child Block node with prefix 01, and finally storing the newly added mapping item in the corresponding child Block node according to the first two bits of its logical address;
after all mapping items that need to be stored in the tree have been stored, completing the tree with virtual nodes to obtain a compressed prefix complete binary tree;
and searching, by the compressed prefix complete binary tree retrieval algorithm, whether the C-Cache contains the target mapping item corresponding to the logical address, and likewise whether the Mapping area contains the target mapping item corresponding to the logical address, the algorithm specifically including:
S410. Receive the logical address of the target mapping item.
S411. According to the height k of the compressed prefix complete binary tree of the C-Cache area or the Mapping area, take the first k bits of the logical address of the target mapping item and use the binary tree Block index fast calculation algorithm to compute the number of the Block in which the target mapping item may reside.
S412. Check whether the Bloom filter of the Block contains the storage mark of the target mapping item. If it does, decompress the Block, return the physical address corresponding to the logical address of the target mapping item, and then recompress the Block; if it does not, execute S413.
S413. Let k = k - 1 and repeat S411 and S412 until k = 1, then execute S414.
S414. Using the first bit of the logical address of the target mapping item and the binary tree Block index fast calculation algorithm, compute the number of the Block in which the target mapping item may reside and check whether the Bloom filter of that Block contains the storage mark of the target mapping item. If it does, return the physical address corresponding to the logical address of the target mapping item; if it does not, the logical address of the target mapping item does not exist in the compressed prefix complete binary tree.
The read command module is further configured to: when the C-Cache is searched for the target mapping item corresponding to the logical address and the search succeeds, during the process of outputting the physical address corresponding to the target mapping item, record the time of this search and, when the target mapping item corresponding to the logical address is retrieved again within a preset time, transfer the target mapping item to the H-Cache area;
and, during the process of searching the Mapping area for the target mapping item corresponding to the logical address and outputting the physical address corresponding to the target mapping item, locate the Block in the Mapping area that contains the target mapping item corresponding to the logical address, back up the target mapping item in that Block to the H-Cache area, and back up the other mapping items in that Block to the C-Cache area.
The present invention divides the cache into two parts, a C-Cache and an H-Cache. The H-Cache part ensures high-speed access performance, while the C-Cache part compresses the mapping item data so that the limited storage space can hold more mapping items, improving the mapping item hit rate and therefore the operating efficiency of the system. Space management in the C-Cache, H-Cache, and Mapping areas is implemented with three prefix complete binary trees, and the retrieval algorithm over these trees improves retrieval efficiency. The Mapping area is stored on NVM media that support in-place updates; compared with storing mapping items on flash media, this eliminates the write amplification caused by out-of-place updates of mapping items and reduces the time spent copying and relocating mapping items during such updates, so the read and write rates of a Mapping area stored on this NVM medium are significantly higher than with traditional flash devices. Because DRAM is faster to access than other storage devices, frequently accessed data operations are carried out in DRAM, which improves system efficiency. Taking the locality of data access fully into account, mapping items are fetched and backed up to the cache in units of Blocks, which avoids frequent lookups in the Mapping area and further improves efficiency. Through a partial write-back policy, when a Block in the C-Cache needs to be migrated out of the cache, only newly generated and updated mapping items are written back to the Mapping area, which reduces the amount of data written back to the NVM device while maintaining data consistency and improves write-back time efficiency.
Brief Description of the Drawings
Fig. 1 is a schematic flowchart of a caching method for a large-scale non-volatile storage medium proposed by the present invention;
Fig. 2 is a schematic diagram of the cache management execution logic;
Fig. 3 is a schematic diagram of the structure of a prefix complete binary tree;
Fig. 4 is a schematic diagram of the mapping item structure;
Fig. 5 is a schematic flowchart of the prefix complete binary tree retrieval algorithm;
Fig. 6 is a diagram of the address mapping module;
Fig. 7 is a diagram of the system's logical hierarchy;
Fig. 8 is a diagram of the system hardware architecture;
Fig. 9 is a schematic diagram of the address mapping flow;
Fig. 10 is a schematic block diagram of a caching system for a large-scale non-volatile storage medium proposed by the present invention.
Detailed Description
Referring to Fig. 1, the caching method for a large-scale non-volatile storage medium proposed by the present invention includes the following steps.
Step S1. Determine from a user command request whether it is a target mapping item write command request or a target mapping item read command request; execute S2 when the command request is determined to be a target mapping item write command request, and execute S3 when it is determined to be a target mapping item read command request.
When a user issues a command request, the system determines whether it is a write command request or a read command request and performs the corresponding operation for each type.
Step S2. Set the flag bit at the end of the target mapping item to 0, and then store the target mapping item in the H-Cache area according to its logical address prefix.
Step S2 specifically includes:
S20. Set the flag bit at the end of the target mapping item to 0.
S21. Store the target mapping item in the uncompressed prefix complete binary tree of the H-Cache area according to its logical address prefix, and determine whether the remaining space in the H-Cache area is less than a preset threshold A. If it is, migrate the existing mapping items in the H-Cache area and execute S22; if it is not, take no action.
S22. Migrate the mapping items in the H-Cache area that have not been accessed for the longest time into the compressed prefix complete binary tree of the C-Cache area, and determine whether the remaining space in the C-Cache area is less than a preset threshold B. If it is, sort the Blocks in the C-Cache area by access count and check, item by item, whether the flag bit of each mapping item in the least-accessed Block is 0 or 1: if the flag bit is 0, migrate the mapping item to the Mapping area and then set its flag bit to 1; if the flag bit is 1, delete the mapping item. If the remaining space is not less than the threshold, take no action.
Referring to Fig. 2, a newly generated or updated mapping item is first stored in the H-Cache. When a mapping item enters the H-Cache (arrow 1 in Fig. 2), the system sets the flag bit at the end of the mapping item to 0. This makes it easy to identify the item later if, after migration to the C-Cache, it must be migrated again to the Mapping area, and it reduces the amount of data written back to the Mapping area. The system then checks whether the remaining space in the H-Cache area is below the preset threshold. When it is, space is reserved for mapping items about to be written, acting as a buffer so that writes and data migration run concurrently and time efficiency improves. If more data then arrives at the H-Cache, migration of mapping items out of the H-Cache area is started while the new mapping items are being written, moving the mapping items that have not been accessed for the longest time to the appropriate positions in the C-Cache area (arrow 2 in Fig. 2). The system then checks whether the remaining space in the C-Cache area is below the preset threshold. When it is, space is likewise reserved for mapping items about to be written, again allowing writes and migration to run concurrently. If more data then arrives at the C-Cache, migration of mapping items out of the C-Cache area is started while the new mapping items are being written: the Blocks in the C-Cache area are sorted by the number of accesses over the recent period, the least-accessed Block is chosen as the victim, and the flag bits of all mapping items in that Block are examined item by item. If a flag bit is 0, the mapping item is written back to the appropriate position in the Mapping area (arrow 3 in Fig. 2) and its flag bit is then set to 1; if the flag bit is 1, the mapping item is simply discarded. This reduces the amount of mapping item data written back to the Mapping area, shortens the write-back time, and improves system efficiency.
Referring to Fig. 3, in this step, before the target mapping item is stored in the uncompressed prefix complete binary tree of the H-Cache area according to its logical address prefix, the method further includes: building the uncompressed prefix complete binary tree of the H-Cache area by the uncompressed prefix complete binary tree construction method, which specifically includes:
S200. Create a Block as the root node and set its prefix to 0, where the number of mapping items the Block can store is 1.
S201. Set the first bit of the logical address of each mapping item in the H-Cache area that needs to be stored in the tree to 0, and store the first mapping item in the Block.
S202. When another new mapping item is stored in the Block and the Block is full, split the Block into two child nodes whose prefixes are set to 00 and 01 respectively. Then check, item by item, the first two bits of the logical address of each mapping item in the original Block node whose prefix was 0: transfer mapping items whose first two logical address bits are 00 into the child Block node with prefix 00, and transfer those whose first two bits are 01 into the child Block node with prefix 01. Finally, store the newly added mapping item in the corresponding child Block node according to the first two bits of its logical address.
S203. Repeat step S202 until all mapping items in the H-Cache area that need to be stored in the tree have been stored, and then complete the tree with virtual nodes to obtain an uncompressed prefix complete binary tree.
The binary tree in the H-Cache is an uncompressed prefix complete binary tree, which improves retrieval efficiency and eliminates compression and decompression overhead; each node stores one logical-to-physical address mapping item.
Referring to Fig. 4, the flag bit in the mapping item structure has two states: 0 indicates a newly formed or updated mapping item, and 1 indicates a mapping item that has not been updated.
Step S3. Obtain the logical address in the target mapping item read command request and determine whether the H-Cache contains the target mapping item corresponding to the logical address. If it does, output the physical address corresponding to the target mapping item; if it does not, search the C-Cache for the target mapping item corresponding to the logical address.
If the C-Cache search succeeds, output the physical address corresponding to the target mapping item; if it fails, search the Mapping area for the target mapping item corresponding to the logical address and output the physical address corresponding to the target mapping item.
In this step, when the target mapping item corresponding to the logical address is found in the C-Cache, the process of outputting the physical address corresponding to the target mapping item further includes: recording the time of this search and, when the target mapping item corresponding to the logical address is retrieved again within a preset time, transferring the target mapping item to the H-Cache area. The process of searching the Mapping area for the target mapping item corresponding to the logical address and outputting the physical address corresponding to the target mapping item further includes: locating the Block in the Mapping area that contains the target mapping item corresponding to the logical address, backing up the target mapping item in that Block to the H-Cache area, and backing up the other mapping items in that Block to the C-Cache area.
When the logical address in a target mapping item read command request is obtained and its corresponding logical-to-physical mapping item is looked up, the mapping item is first searched for in the H-Cache; if it is found there, the corresponding physical address is returned directly. If it is not found, the C-Cache is searched. When the item is found in the C-Cache, the required physical address is returned to the upper-level system and the time of this C-Cache access is recorded; when the mapping item corresponding to that logical address hits in the C-Cache again some time later, the interval between the two accesses is computed, and if it is smaller than a specific threshold the mapping item is transferred to the corresponding position in the H-Cache area (arrow 4 in Fig. 2). If the item is still not found, the Mapping area is searched (arrow 5 in Fig. 2): the Block containing the mapping item is located in the Mapping area, the mapping item is backed up to the appropriate position in the H-Cache tree and its physical address is returned, and the other mapping items in that Block are backed up item by item to the appropriate positions in the C-Cache tree. According to the principle of locality, the mapping items surrounding the target are very likely to be accessed again, so they are backed up to the cache together, taking advantage of access locality (arrow 6 in Fig. 2).
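The C-Cache re-hit promotion rule (transfer a mapping item to the H-Cache when two C-Cache hits fall within a preset interval) might be sketched as follows; the timestamp bookkeeping, the one-second default interval, and the `CCacheHitTracker` name are illustrative assumptions.

```python
import time

class CCacheHitTracker:
    """Sketch of the C-Cache re-hit promotion rule described above.

    When the same logical address hits in the C-Cache twice within `interval`
    seconds, the item is promoted to the H-Cache.
    """
    def __init__(self, interval=1.0):
        self.interval = interval
        self.last_hit = {}                       # lpn -> timestamp of the last C-Cache hit

    def on_c_cache_hit(self, lpn, ppn, c_cache, h_cache, now=None):
        now = time.monotonic() if now is None else now
        previous = self.last_hit.get(lpn)
        self.last_hit[lpn] = now
        if previous is not None and now - previous < self.interval:
            h_cache[lpn] = c_cache.pop(lpn)      # promote: move the item into the H-Cache
            self.last_hit.pop(lpn, None)
        return ppn                               # the physical address is returned either way

c_cache = {"00101": 21}
h_cache = {}
tracker = CCacheHitTracker(interval=1.0)
tracker.on_c_cache_hit("00101", 21, c_cache, h_cache, now=10.0)
tracker.on_c_cache_hit("00101", 21, c_cache, h_cache, now=10.4)   # second hit within 1 s
print("00101" in h_cache, "00101" in c_cache)                      # -> True False
```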
参考图3,在本步骤中,在判断H-Cache中是否有所述逻辑地址对应的目标映射项之前,包括:通过非压缩前缀完全二叉树构建方法建立H-Cache区的非压缩前缀完全二叉树,其中,所述非压缩前缀完全二叉树构建方法,具体包括:Referring to FIG. 3, in this step, before judging whether there is a target mapping item corresponding to the logical address in the H-Cache, the method includes: establishing an uncompressed prefix complete binary tree in the H-Cache area by using an uncompressed prefix complete binary tree construction method, Wherein, the non-compressed prefix complete binary tree construction method specifically includes:
S210、建立一个Block块作为根节点,并设置Block块前缀为0;其中,所述Block块存储的映射项数量为1;S210, establish a Block block as the root node, and set the Block block prefix to 0; Wherein, the number of mapping items stored in the Block block is 1;
S211、将H-Cache区中需要存入树中的映射项的逻辑地址首位设置为0,并将第一个映射项存入所述Block块中;S211, the logical address first position of the mapping item that needs to be stored in the tree in the H-Cache area is set to 0, and the first mapping item is stored in the Block block;
S212、再存入一个新的映射项到所述Block块中,所述Block块存储的映射项数量已满,将所述Block块分裂出两个子节点,两个子Block块节点的前缀分别设置为00和01,然后将原来前缀为0的Block块节点中的映射项逐项检查其逻辑地址的前两位,将逻辑地址前两位为00的映射项转移到前缀为00的子Block块节点中,将逻辑地址前两位为01的映射项转移到前缀为01的子Block块节点中,最后将新存入的映射项根据其逻辑地址前两位存入相应的子Block块节点中;S212, store a new mapping item into the Block block, and the number of mapping items stored in the Block block is full, split the Block block into two sub-nodes, and the prefixes of the two sub-Block block nodes are respectively set as 00 and 01, and then check the first two bits of the logical address of the mapping items in the original block node with the
S213、重复步骤S212操作,直至H-Cache区中需要存入树中的映射项全部存入树中后,通过虚拟节点补全所述树,得到非压缩前缀完全二叉树;S213, repeat the operation of step S212 until all the mapping items that need to be stored in the tree in the H-Cache area are stored in the tree, and then complete the tree through the virtual node to obtain a complete binary tree with a non-compressed prefix;
H-Cache中的二叉树为非压缩的前缀完全二叉树,可提高检索效率,消除压缩解压缩开销,每个节点存储一个逻辑地址-物理地址映射项。The binary tree in H-Cache is an uncompressed prefix complete binary tree, which can improve retrieval efficiency and eliminate the overhead of compression and decompression. Each node stores a logical address-physical address mapping item.
Referring to FIG. 5, in this step, whether the H-Cache contains the target mapping item corresponding to the logical address is judged by an uncompressed-prefix complete binary tree search algorithm, which specifically includes:
S300: receiving the logical address of the target mapping item;
S301: according to the height k of the prefix complete binary tree of the H-Cache area, taking the first k bits of the logical address of the target mapping item and computing, with the fast binary-tree Block-number calculation algorithm, the number of the Block in which the target mapping item may reside;
S302: checking whether that Block stores the logical address of the target mapping item; if so, returning the physical address corresponding to the logical address of the target mapping item; if not, executing S303;
S303: setting k = k - 1 and repeating S301 and S302 until k = 1, then executing S304;
S304: computing, from the first bit of the logical address of the target mapping item and the fast binary-tree Block-number calculation algorithm, the number of the Block in which the target mapping item may reside, and checking whether that Block stores the logical address of the target mapping item; if so, returning the corresponding physical address; if not, the logical address of the target mapping item does not exist in the uncompressed-prefix complete binary tree.
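One plausible reading of the "fast Block-number calculation" is heap-style indexing of the complete binary tree, where the k-bit prefix directly yields an array index. The sketch below implements steps S300-S304 under that assumption; the indexing scheme and the dict-based Block storage are illustrative, not taken from the patent.

```python
# Sketch of the search in steps S300-S304, assuming heap-style indexing of
# the complete binary tree for the "fast Block-number calculation".
def block_number(prefix_bits: str) -> int:
    # Level k (k = len(prefix_bits)) starts at heap index 2**(k-1); the bits
    # after the leading 0 select the node within that level.
    k = len(prefix_bits)
    return (1 << (k - 1)) + (int(prefix_bits[1:], 2) if k > 1 else 0)

def search_uncompressed(blocks, height, laddr_bits):
    """blocks: heap index -> {laddr_bits: paddr}; returns paddr or None."""
    k = height
    while k >= 1:
        idx = block_number(laddr_bits[:k])      # S301/S304: first k bits
        block = blocks.get(idx)
        if block is not None and laddr_bits in block:   # S302: probe the Block
            return block[laddr_bits]
        k -= 1                                  # S303: shorten the prefix
    return None                                 # absent from the tree
```

With this indexing, moving from the k-bit prefix to the (k-1)-bit prefix simply halves the Block number, so the repeated probes of S303 need no pointer chasing.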
In this step, before searching whether the C-Cache contains the target mapping item corresponding to the logical address, the method further includes: building the compressed-prefix complete binary tree of the C-Cache area by a compressed-prefix complete binary tree construction method; and before searching the Mapping area for the target mapping item corresponding to the logical address and outputting the physical address corresponding to the target mapping item, the method further includes: building the compressed-prefix complete binary tree of the Mapping area by the same construction method, wherein the compressed-prefix complete binary tree construction method specifically includes:
S310: creating a Block as the root node and setting the Block prefix to 0, wherein the number of mapping items the Block can store is M, with M > 1;
S311: setting the first bit of the logical address of each mapping item that needs to be stored in the tree to 0, storing the first mapping item into the Block and performing data compression;
S312: decompressing the Block and storing new mapping items into it item by item; when the number of mapping items in the Block reaches the set threshold M, splitting the Block into two child nodes whose prefixes are set to 00 and 01 respectively, checking, item by item, the first two bits of the logical address of the mapping items in the original Block whose prefix was 0, transferring the mapping items whose first two bits are 00 into the child Block with prefix 00 and those whose first two bits are 01 into the child Block with prefix 01, and finally storing the newly arrived mapping item into the corresponding child Block according to the first two bits of its logical address;
S313: repeating step S312 until all mapping items that need to be stored in the tree have been stored, and then completing the tree with virtual nodes to obtain the compressed-prefix complete binary tree.
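A rough sketch of the compressed Block handled in steps S310-S312: the Block body is kept zlib-compressed and decompressed only to insert, and the caller splits it once the entry count reaches M. The 8-byte address encoding, the value of M and the method names are assumptions for illustration.

```python
import struct
import zlib

# Sketch of a compressed Block for steps S310-S312; encoding, threshold M and
# method names are illustrative assumptions.
M = 64  # split threshold: maximum mapping items per Block

class CompressedBlock:
    def __init__(self, prefix):
        self.prefix = prefix
        self.blob = zlib.compress(b"")          # compressed (laddr, paddr) pairs
        self.count = 0

    def _unpack(self):
        raw = zlib.decompress(self.blob)
        return {laddr: paddr for laddr, paddr in struct.iter_unpack("<QQ", raw)}

    def _pack(self, entries):
        raw = b"".join(struct.pack("<QQ", a, p) for a, p in entries.items())
        self.blob = zlib.compress(raw)
        self.count = len(entries)

    def insert(self, laddr, paddr):
        """Return None, or the decompressed entries if the Block must split."""
        entries = self._unpack()                # S312: decompress before inserting
        entries[laddr] = paddr
        if len(entries) >= M:                   # full -> caller splits the Block
            return entries                      # into children prefix+"0"/"1"
        self._pack(entries)                     # otherwise re-compress and keep
        return None
```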
In this step, whether the C-Cache contains the target mapping item corresponding to the logical address is searched by a compressed-prefix complete binary tree search algorithm, and whether the Mapping area contains the target mapping item corresponding to the logical address is searched by the same algorithm, which specifically includes:
S320: receiving the logical address of the target mapping item;
S321: according to the height k of the compressed-prefix complete binary tree of the C-Cache area or the Mapping area, taking the first k bits of the logical address of the target mapping item and computing, with the fast binary-tree Block-number calculation algorithm, the number of the Block in which the target mapping item may reside;
S322: checking whether the Bloom filter of that Block contains the storage mark of the target mapping item; if so, decompressing the Block, returning the physical address corresponding to the logical address of the target mapping item, and compressing the Block again; if not, executing S323;
S323: setting k = k - 1 and repeating S321 and S322 until k = 1, then executing S324;
S324: computing, from the first bit of the logical address of the target mapping item and the fast binary-tree Block-number calculation algorithm, the number of the Block in which the target mapping item may reside, and checking whether the Bloom filter of that Block contains the storage mark of the target mapping item; if so, returning the physical address corresponding to the logical address of the target mapping item; if not, the logical address of the target mapping item does not exist in the compressed-prefix complete binary tree.
The binary trees in the C-Cache and the Mapping area are compressed prefix complete binary trees; each node is a compressed Block, and each Block stores multiple mapping items whose prefixes match the Block prefix.
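The Bloom-filter-guarded search of steps S320-S324 might look like the sketch below, which consults a small Bloom filter before paying the decompression cost of a candidate Block. The filter size and hash count, the heap-style Block indexing and the assumed Block interface (.bloom, .decompress()) are illustrative choices, not specified by the patent.

```python
import hashlib

# Sketch of steps S320-S324: probe the Bloom filter of each candidate Block
# before decompressing it. All parameters and interfaces are assumptions.
class TinyBloom:
    def __init__(self, bits=1024, hashes=3):
        self.bits, self.hashes = bits, hashes
        self.array = bytearray(bits // 8)

    def _positions(self, key: bytes):
        for i in range(self.hashes):
            h = hashlib.blake2b(key, salt=bytes([i]), digest_size=8).digest()
            yield int.from_bytes(h, "little") % self.bits

    def add(self, key: bytes):
        for pos in self._positions(key):
            self.array[pos // 8] |= 1 << (pos % 8)

    def may_contain(self, key: bytes) -> bool:
        return all(self.array[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))

def _block_number(prefix_bits: str) -> int:
    # Same heap-style indexing assumption as in the earlier sketch.
    k = len(prefix_bits)
    return (1 << (k - 1)) + (int(prefix_bits[1:], 2) if k > 1 else 0)

def search_compressed(blocks, height, laddr_bits):
    """blocks: heap index -> Block exposing .bloom and .decompress() -> {laddr: paddr}."""
    key = laddr_bits.encode()
    k = height
    while k >= 1:
        blk = blocks.get(_block_number(laddr_bits[:k]))     # S321/S324
        if blk is not None and blk.bloom.may_contain(key):  # S322: Bloom check first
            entries = blk.decompress()                      # decompress only on a probable hit
            if laddr_bits in entries:
                return entries[laddr_bits]
        k -= 1                                              # S323: shorten the prefix
    return None
```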
Referring to FIG. 6, the Cache is implemented on DRAM and is divided into two parts, H-Cache and C-Cache. The H-Cache mainly stores "active" mapping items that are accessed frequently; the C-Cache mainly stores "relatively inactive" mapping items migrated out of the H-Cache, where multiple mapping items are placed into a suitable Block according to their logical address prefixes and compressed. Compression allows the limited Cache space to hold more mapping items, which raises the mapping-item hit rate and improves the operating efficiency of the system.
The Mapping area is implemented on an NVM device that supports in-place updates. Its purpose is to store the complete set of mapping items, whose data mainly comes from items migrated out of the C-Cache; multiple mapping items are placed into a suitable Block according to their logical address prefixes.
Referring to FIG. 7, the D-Cache part is implemented on DRAM and temporarily stores the data written or read by users; the NVM Data Storage part is implemented on NVM and stores only data, not address mappings or other metadata.
Referring to FIG. 8 and FIG. 9, when the system writes data to the storage system, the upper layer of the host operating system passes down the data together with its logical address, offset and size. The data is temporarily placed in the D-Cache, the controller allocates a physical space of suitable size on the data storage device according to the size of the data, and the physical address of the first address of that space is returned to form a logical address-to-physical address mapping item, which is stored in the address mapping module.
When the system reads data from the storage system, the upper layer of the host operating system passes the logical address to the address mapping module, the physical address corresponding to the logical address is obtained with the data caching and retrieval method of this scheme, and the data is then read from the storage space corresponding to that physical address and returned to the upper-level system through the D-Cache.
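The data path of FIG. 8 and FIG. 9 can be summarized by the following sketch, in which the controller buffers data in the D-Cache, allocates NVM space sized to the request, records the resulting mapping item, and serves reads through the address mapping module. The allocator, D-Cache and NVM interfaces are assumed for illustration.

```python
# Rough sketch of the write/read data path; every interface used here is an
# illustrative assumption, not an API defined by the patent.
class StorageController:
    def __init__(self, mapper, d_cache, nvm, allocator):
        self.mapper = mapper        # AddressMapper-like object (see earlier sketch)
        self.d_cache = d_cache      # DRAM buffer for user data
        self.nvm = nvm              # NVM data storage, addressed by physical address
        self.allocator = allocator  # hands out free physical extents by size

    def write(self, laddr, offset, data):
        self.d_cache.put(laddr, data)               # stage the user data in DRAM
        paddr = self.allocator.allocate(len(data))  # space sized to the request
        self.nvm.write(paddr + offset, data)
        self.mapper.store(laddr, paddr)             # new mapping item -> H-Cache

    def read(self, laddr, offset, size):
        paddr = self.mapper.lookup(laddr)           # H-Cache / C-Cache / Mapping area
        if paddr is None:
            return None
        data = self.nvm.read(paddr + offset, size)
        self.d_cache.put(laddr, data)               # return to the host via D-Cache
        return data
```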
Referring to FIG. 10, the present invention further provides a caching system for large-scale non-volatile storage media, the system comprising:
a judgment module, configured to obtain a user command request and judge whether the command request is a target-mapping-item write command request or a target-mapping-item read command request;
when a user issues a command request, it is judged whether the command request is a write command request or a read command request, and the corresponding operation is performed according to the type of request;
a write command module, configured to, when the judgment module judges that the command request is a target-mapping-item write command request, set the flag bit at the end of the target mapping item to 0 and then store the target mapping item into the H-Cache area according to its logical address prefix;
the write command module includes a flag submodule, a judgment submodule, a processing submodule and a writing submodule;
the flag submodule is configured to set the flag bit at the end of the target mapping item to 0;
the judgment submodule is configured to judge whether the remaining space in the H-Cache area is smaller than a preset threshold A;
the processing submodule is configured to, when the judgment result of the judgment submodule is yes, store the target mapping item into the uncompressed-prefix complete binary tree of the H-Cache area according to its logical address prefix, migrate the existing mapping items of the H-Cache area by moving those that have not been accessed for the longest time into the C-Cache area, and then judge whether the remaining space in the C-Cache area is smaller than a preset threshold B; when that judgment is yes, the Blocks in the C-Cache area are sorted by access count and the flag bit of each mapping item in the least-accessed Block is checked: if the flag bit is 0, the mapping item is written to the Mapping area and its flag bit is then set to 1; if the flag bit is 1, the mapping item is deleted; when the judgment is no, no operation is performed;
the writing submodule is configured to, when the judgment result of the judgment submodule is no, store the target mapping item into the uncompressed-prefix complete binary tree of the H-Cache area according to its logical address prefix.
Referring to FIG. 2, a newly generated or updated mapping item is first stored in the H-Cache. When the mapping item enters the H-Cache, as shown at 1 in FIG. 2, the system sets the flag bit at the end of the mapping item to 0, which facilitates later identification when, after the item has migrated to the C-Cache, mapping items are migrated to the Mapping area again, and reduces the amount of data rewritten to the Mapping area. A threshold is then assigned to the H-Cache: when the remaining space in the H-Cache area is smaller than this threshold, space is reserved for newly written mapping items, which acts as a buffer so that writing and data migration run concurrently and time efficiency is improved. If more data arrives at the H-Cache, migration of mapping-item data out of the H-Cache area is started while the new mapping items are being written, and the mapping items that have not been accessed for the longest time are migrated to suitable positions in the C-Cache area, as shown at 2 in FIG. 2. A threshold is likewise assigned to the C-Cache: when the remaining space in the C-Cache area is smaller than this threshold, space is reserved for newly written mapping items, again acting as a buffer so that writing and migration run concurrently. If more data arrives at the C-Cache, migration of mapping-item data out of the C-Cache area is started while the new mapping items are being written: the Blocks in the C-Cache area are sorted by their number of accesses in the recent period, the least-accessed Block is chosen as the victim, and the flag bits of all mapping items in that Block are checked item by item. If a flag bit is 0, the mapping item is written back to a suitable position in the Mapping area, as shown at 3 in FIG. 2, and its flag bit is then set to 1; if the flag bit is 1, the mapping item is discarded directly, which reduces the amount of mapping-item data written back to the Mapping area, shortens the write-back time and improves system efficiency.
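The write flow of FIG. 2, with the flag bit set to 0 on entry, threshold-driven migration from the H-Cache to the C-Cache, and partial write-back of only flag-0 items when a C-Cache Block is evicted, is sketched below. The capacities, thresholds A and B, the access-count bookkeeping and the placeholder helpers are assumptions made only for illustration.

```python
from collections import OrderedDict

# Sketch of the write flow of FIG. 2; thresholds, capacities and the
# placeholder helpers are illustrative assumptions.
THRESHOLD_A = 16   # reserved space in the H-Cache
THRESHOLD_B = 16   # reserved space in the C-Cache

class MappingCache:
    def __init__(self, h_capacity, c_blocks, mapping_area):
        self.h = OrderedDict()        # laddr -> (paddr, flag); order tracks recency
        self.h_capacity = h_capacity
        self.c_blocks = c_blocks      # list of {"entries": {laddr: (paddr, flag)}, "hits": int}
        self.mapping = mapping_area

    def store(self, laddr, paddr):
        self.h[laddr] = (paddr, 0)    # flag 0 = not yet written to the Mapping area
        self.h.move_to_end(laddr)
        if self.h_capacity - len(self.h) < THRESHOLD_A:
            self._migrate_h_to_c()

    def _migrate_h_to_c(self):
        laddr, (paddr, flag) = self.h.popitem(last=False)   # longest-unaccessed item
        self._block_for(laddr)["entries"][laddr] = (paddr, flag)
        if self._c_free_space() < THRESHOLD_B:
            self._evict_c_block()

    def _evict_c_block(self):
        victim = min(self.c_blocks, key=lambda b: b["hits"])  # least-accessed Block
        for laddr in list(victim["entries"]):
            paddr, flag = victim["entries"][laddr]
            if flag == 0:                                # new or updated -> persist first
                self.mapping.write_back(laddr, paddr)
                victim["entries"][laddr] = (paddr, 1)    # mark as already persisted
            else:                                        # flag 1 -> already in the Mapping area
                del victim["entries"][laddr]             # drop without writing back

    def _block_for(self, laddr):
        return self.c_blocks[0]       # placeholder: pick the Block whose prefix matches laddr

    def _c_free_space(self):
        return sum(1 for b in self.c_blocks if not b["entries"])  # placeholder accounting
```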
Referring to FIG. 3, the write command module is further configured to: before storing the target mapping item into the uncompressed-prefix complete binary tree of the H-Cache area according to its logical address prefix, build the uncompressed-prefix complete binary tree of the H-Cache area by the uncompressed-prefix complete binary tree construction method, wherein the construction method specifically includes:
creating a Block as the root node and setting the Block prefix to 0, wherein the number of mapping items stored by the Block is 1;
setting the first bit of the logical address of each mapping item in the H-Cache area that needs to be stored in the tree to 0, and storing the first mapping item into the Block;
when another new mapping item is stored into the Block and the Block is already full, splitting the Block into two child nodes whose prefixes are set to 00 and 01 respectively, checking, item by item, the first two bits of the logical address of the mapping items in the original Block whose prefix was 0, transferring the mapping items whose first two bits are 00 into the child Block with prefix 00 and those whose first two bits are 01 into the child Block with prefix 01, and finally storing the newly arrived mapping item into the corresponding child Block according to the first two bits of its logical address;
after all mapping items of the H-Cache area that need to be stored in the tree have been stored, completing the tree with virtual nodes to obtain the uncompressed-prefix complete binary tree.
The binary tree in the H-Cache is an uncompressed prefix complete binary tree, which improves retrieval efficiency and eliminates compression and decompression overhead; each node stores one logical address-to-physical address mapping item.
a read command module, configured to, when the judgment module judges that the command request is a target-mapping-item read command request, obtain the logical address in the read command request and judge whether the H-Cache contains the target mapping item corresponding to the logical address; if so, output the physical address corresponding to the target mapping item; if not, search whether the C-Cache contains the target mapping item corresponding to the logical address; if so, output the physical address corresponding to the target mapping item; if not, search the Mapping area for the target mapping item corresponding to the logical address and output the physical address corresponding to the target mapping item;
the read command module is further configured to: when the target mapping item corresponding to the logical address is found in the C-Cache, record the time of this search while outputting the physical address corresponding to the target mapping item, and transfer the target mapping item to the H-Cache area when the target mapping item corresponding to the logical address is retrieved again within a preset time;
and, in the process of searching the Mapping area for the target mapping item corresponding to the logical address and outputting the corresponding physical address, locate the Block in the Mapping area where the target mapping item resides, back up the target mapping item of that Block to the H-Cache area, and back up the other mapping items of that Block to the C-Cache area.
When the logical address in a target-mapping-item read command request is obtained and the corresponding logical address-to-physical address mapping item is looked up, the mapping item is first searched for in the H-Cache; if it is found there, the corresponding physical address is returned directly. If it is not found, the C-Cache is searched; when the mapping item is found in the C-Cache, the required physical address is returned to the upper-level system and the time of this C-Cache access is recorded. When the mapping item corresponding to that logical address hits in the C-Cache again after some time, the interval between the two accesses is calculated, and if the interval is smaller than a specific threshold the mapping item is transferred to the corresponding position in the H-Cache area, as shown at 4 in FIG. 2. If the mapping item is still not found, the Mapping area is searched, as shown at 5 in FIG. 2: the Block in the Mapping area that contains the mapping item is located, the mapping item is backed up to a suitable position in the H-Cache tree and its physical address is returned, and the other mapping items in that Block are backed up item by item to suitable positions in the C-Cache tree. According to the principle of locality, the mapping items around the target mapping item are very likely to be accessed again, so they are backed up to the Cache together in order to exploit access locality, as shown at 6 in FIG. 2.
The read command module is specifically configured to: before judging whether the H-Cache contains the target mapping item corresponding to the logical address, build the uncompressed-prefix complete binary tree of the H-Cache area by the uncompressed-prefix complete binary tree construction method, wherein the construction method specifically includes:
creating a Block as the root node and setting the Block prefix to 0, wherein the number of mapping items stored by the Block is 1;
setting the first bit of the logical address of each mapping item in the H-Cache area that needs to be stored in the tree to 0, and storing the first mapping item into the Block;
when another new mapping item is stored into the Block and the Block is already full, splitting the Block into two child nodes whose prefixes are set to 00 and 01 respectively, checking, item by item, the first two bits of the logical address of the mapping items in the original Block whose prefix was 0, transferring the mapping items whose first two bits are 00 into the child Block with prefix 00 and those whose first two bits are 01 into the child Block with prefix 01, and finally storing the newly arrived mapping item into the corresponding child Block according to the first two bits of its logical address;
after all mapping items of the H-Cache area that need to be stored in the tree have been stored, completing the tree with virtual nodes to obtain the uncompressed-prefix complete binary tree.
The binary tree in the H-Cache is an uncompressed prefix complete binary tree, which improves retrieval efficiency and eliminates compression and decompression overhead; each node stores one logical address-to-physical address mapping item.
Referring to FIG. 5, the read command module is further configured to judge whether the H-Cache contains the target mapping item corresponding to the logical address by the uncompressed-prefix complete binary tree search algorithm, which specifically includes:
S400: receiving the logical address of the target mapping item;
S401: according to the height k of the prefix complete binary tree of the H-Cache area, taking the first k bits of the logical address of the target mapping item and computing, with the fast binary-tree Block-number calculation algorithm, the number of the Block in which the target mapping item may reside;
S402: checking whether that Block stores the logical address of the target mapping item; if so, returning the physical address corresponding to the logical address of the target mapping item; if not, executing S403;
S403: setting k = k - 1 and repeating S401 and S402 until k = 1, then executing S404;
S404: computing, from the first bit of the logical address of the target mapping item and the fast binary-tree Block-number calculation algorithm, the number of the Block in which the target mapping item may reside, and checking whether that Block stores the logical address of the target mapping item; if so, returning the corresponding physical address; if not, the logical address of the target mapping item does not exist in the uncompressed-prefix complete binary tree.
The read command module is further configured to: before searching whether the C-Cache contains the target mapping item corresponding to the logical address, build the compressed-prefix complete binary tree of the C-Cache area by the compressed-prefix complete binary tree construction method; and before searching the Mapping area for the target mapping item corresponding to the logical address and outputting the corresponding physical address, build the compressed-prefix complete binary tree of the Mapping area by the same construction method, wherein the compressed-prefix complete binary tree construction method includes:
creating a Block as the root node and setting the Block prefix to 0, wherein the number of mapping items the Block can store is M, with M > 1;
setting the first bit of the logical address of each mapping item that needs to be stored in the tree to 0, storing the first mapping item into the Block and performing data compression;
decompressing the Block and storing new mapping items into it item by item; when the number of mapping items in the Block reaches the set threshold M, splitting the Block into two child nodes whose prefixes are set to 00 and 01 respectively, checking, item by item, the first two bits of the logical address of the mapping items in the original Block whose prefix was 0, transferring the mapping items whose first two bits are 00 into the child Block with prefix 00 and those whose first two bits are 01 into the child Block with prefix 01, and finally storing the newly arrived mapping item into the corresponding child Block according to the first two bits of its logical address;
after all mapping items that need to be stored in the tree have been stored, completing the tree with virtual nodes to obtain the compressed-prefix complete binary tree.
The read command module is further configured to search whether the C-Cache contains the target mapping item corresponding to the logical address by the compressed-prefix complete binary tree search algorithm, and to search whether the Mapping area contains the target mapping item corresponding to the logical address by the same algorithm, which specifically includes:
S410: receiving the logical address of the target mapping item;
S411: according to the height k of the compressed-prefix complete binary tree of the C-Cache area or the Mapping area, taking the first k bits of the logical address of the target mapping item and computing, with the fast binary-tree Block-number calculation algorithm, the number of the Block in which the target mapping item may reside;
S412: checking whether the Bloom filter of that Block contains the storage mark of the target mapping item; if so, decompressing the Block, returning the physical address corresponding to the logical address of the target mapping item, and compressing the Block again; if not, executing S413;
S413: setting k = k - 1 and repeating S411 and S412 until k = 1, then executing S414;
S414: computing, from the first bit of the logical address of the target mapping item and the fast binary-tree Block-number calculation algorithm, the number of the Block in which the target mapping item may reside, and checking whether the Bloom filter of that Block contains the storage mark of the target mapping item; if so, returning the physical address corresponding to the logical address of the target mapping item; if not, the logical address of the target mapping item does not exist in the compressed-prefix complete binary tree.
The binary trees in the C-Cache and the Mapping area are compressed prefix complete binary trees; each node is a compressed Block, and each Block stores multiple mapping items whose prefixes match the Block prefix.
In this embodiment, the Cache is divided into two parts, C-Cache and H-Cache. The H-Cache part guarantees high-speed access performance, while the C-Cache part compresses mapping-item data so that the limited storage space can hold more mapping items, which raises the mapping-item hit rate and therefore the operating efficiency of the system. Space management of the C-Cache, the H-Cache and the Mapping area is implemented with three prefix complete binary trees, and the prefix-complete-binary-tree search algorithm improves retrieval efficiency. The Mapping area is stored on NVM media with in-place update capability: compared with storing mapping items on flash media, this eliminates the write amplification caused by out-of-place updates of mapping items and reduces the time overhead of copying and relocating mapping items during such updates, and compared with traditional flash devices the read/write speed of a Mapping area stored on such NVM media is significantly higher. Since DRAM devices are faster than other storage devices, operations that access data frequently are performed in DRAM, which improves system efficiency. Taking full account of the principle of data-access locality, mapping items are fetched and backed up to the cache in units of Blocks, which avoids frequent searches of the Mapping area and further improves system efficiency. Through the partial write-back policy, when a Block in the C-Cache needs to be migrated out of the Cache, only newly generated and updated mapping items are written back to the Mapping area, which reduces the amount of data written back to the NVM device while ensuring data consistency and improves write-back time efficiency.
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any equivalent substitution or modification of the technical solution and inventive concept of the present invention made by those skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.
Claims (8)