
CN105975402B - Eviction-data-aware caching method and system in a hybrid memory environment - Google Patents

Eviction-data-aware caching method and system in a hybrid memory environment

Info

Publication number
CN105975402B
CN105975402B CN201610278653.5A CN201610278653A CN105975402B CN 105975402 B CN105975402 B CN 105975402B CN 201610278653 A CN201610278653 A CN 201610278653A CN 105975402 B CN105975402 B CN 105975402B
Authority
CN
China
Prior art keywords
page
eviction
caching
evicted
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610278653.5A
Other languages
Chinese (zh)
Other versions
CN105975402A (en)
Inventor
吴松
贾佑闯
金海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201610278653.5A priority Critical patent/CN105975402B/en
Publication of CN105975402A publication Critical patent/CN105975402A/en
Application granted granted Critical
Publication of CN105975402B publication Critical patent/CN105975402B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 - Addressing or allocation; Relocation
    • G06F12/0223 - User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023 - Free address space management
    • G06F12/0238 - Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 - Addressing or allocation; Relocation
    • G06F12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12 - Replacement control
    • G06F12/121 - Replacement control using replacement algorithms
    • G06F12/123 - Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an eviction-data-aware cache system for a hybrid memory environment, whose goal is to minimize the number of write-backs to non-volatile main memory, and thereby extend its lifetime, while preserving the performance of the cache system itself. The system mainly comprises a metadata monitoring module, a page replacement module, and an adaptive space partitioning module. The monitoring module collects the metadata of data evicted from the cache and, based on this metadata, derives an eviction weight for each page in the cache; the page replacement module selects eviction victims based on both cache performance and the lifetime of non-volatile main memory; the adaptive space partitioning module combines the characteristics of the requests with whether a page has been evicted before to decide which type of page should be chosen as the eviction victim in the current phase. On the premise of preserving cache performance, the invention effectively reduces the number of write-backs to non-volatile main memory and thus extends its lifetime.

Description

Eviction-data-aware caching method and system in a hybrid memory environment
Technical field
The invention belongs to the field of in-memory computing, and more particularly relates to an eviction-data-aware caching method and caching system for a hybrid memory environment, whose main goal is to extend the lifetime of non-volatile main memory while preserving cache performance.
Background art
With the rapid development of in-memory computing, existing DRAM-based main memory systems have become difficult to scale further in terms of capacity and energy consumption. The emergence of novel non-volatile main memory (NVM) offers a new opportunity to optimize existing memory systems, because NVM used as memory is non-volatile, consumes little static energy, and scales better. However, to completely replace DRAM, NVM still lags far behind in write speed, dynamic energy consumption, and lifetime. A relatively common memory structure is therefore a hybrid main memory of DRAM and NVM, in which DRAM serves as an upper-level cache for the non-volatile main memory, so that the strengths and weaknesses of the two complement each other and overall performance and reliability are improved.
When non-volatile memory is used as main memory, one important problem is its limited lifetime. For example, in phase-change memory (PCM) a write operation switches the resistive material between a low-resistance crystalline state (logic 1) and a high-resistance amorphous state (logic 0), which makes PCM ill suited to sustaining large numbers of writes: its write endurance is only about 10^8 to 10^12 operations, several orders of magnitude worse than DRAM. If writes are concentrated, certain PCM cells can be worn out in a very short time (on the order of 100 days for typical SPEC CPU applications). Therefore, to fully exploit the performance of non-volatile main memory, its lifetime problem must be addressed.
Existing research largely falls into three classes: (1) reducing the number of writes to non-volatile main memory, for example writing only after a read-and-compare, flip writes, coset-mapping writes, and traditional cache-hierarchy techniques that raise the hit rate and thereby reduce the number of data write-backs; (2) wear leveling, which swaps and rotates data at different granularities such as segments, pages, and memory lines; (3) error correction and reuse of bad blocks, which at the hardware level detects erroneous bits after cells are worn out, performs grouped error recovery mainly for partially worn pages, and uses the remaining healthy bits in bad blocks to assist error correction for other blocks.
The above schemes mainly optimize the handling of write requests issued by the cache; the actual number of write-back requests is not reduced, and they introduce extra overheads for recording and migration. It is therefore worth optimizing, at the cache level, the requests that are written to non-volatile main memory. Existing work at this level mostly exploits the fact that, when cached data are evicted, dirty data must be written back to non-volatile main memory while clean data need not be: when the cache must evict data, it deliberately prefers to evict clean data, reducing the fraction of dirty data among the evicted data and thus the number of dirty write-backs from the cache to non-volatile main memory, or it analyzes the access hit-rate distribution of specific program phases of a concrete application and deliberately evicts clean data with low hit rates. Although such work reduces the overall number of write-backs to non-volatile main memory, a strategy that deliberately suppresses dirty write-backs brings the following problems: (1) deliberately evicting clean data and retaining dirty data distorts the data distribution in the cache, which hurts the hit rate of upper-level accesses to the cache and degrades the overall performance of the system; (2) work that does consider cache performance must first analyze the run-time characteristics of a concrete application over different time periods and then select a specific policy to execute, so it does not adapt well to the cache environments of different systems.
Summary of the invention
In view of the above shortcomings of the prior art, the present invention provides an eviction-data-aware cache system. Its purpose is to preserve the cache hit rate, so that cache performance is unaffected, while effectively reducing the number of write-backs to non-volatile main memory and thereby extending its lifetime. This solves the technical problem that the write endurance of non-volatile main memory and cache performance could not previously be improved at the same time.
To achieve the above object, one aspect of the invention provides an eviction-data-aware caching method comprising the following steps:
(1) After a victim page is determined, the eviction-data monitoring method of the cache collects the metadata of that page, including its page address, eviction time, and eviction count, and initializes or updates the corresponding eviction record in the cache's eviction record set; when a miss occurs, the monitoring method analyzes the locality characteristics (temporal locality and spatial locality) of each eviction record from the collected metadata and computes an eviction weight for the corresponding cache page based on these characteristics;
(2) The page replacement policy chooses different replacement methods according to the type of the eviction candidate. If the candidate has not been evicted recently, a clean page that has never been evicted is preferentially selected as the victim. If the candidate has been evicted recently, a corresponding eviction threshold is computed from the eviction weights obtained by the monitor and, combined with the access characteristics of the page itself, a clean page within the eviction threshold is preferentially selected as the victim in order to reduce extra write-backs to non-volatile main memory; if no such clean page exists, the dirty page with the smallest weight is selected as the victim, thereby achieving the goal of preserving cache performance while extending the lifetime of non-volatile main memory;
(3) The adaptive space partitioning mechanism acts mainly when a request misses. If the requested page has a corresponding record in the cache's eviction record set, that record was evicted recently, so a cache page that has not been evicted recently is selected as the victim; if the requested page has no record in the eviction record set, the record has not been accessed recently and has not been evicted, so a page that was evicted recently is selected as the victim.
Compared with the prior art, the above technical scheme of the present invention has the following advantages and technical effects:
1. Owing to step (1), the eviction characteristics of a page can be obtained and analyzed immediately when an eviction is executed, providing a theoretical basis for deciding whether the page may be evicted again after it is next stored in the cache;
2. Owing to step (2), the negative impact of the evicted page on cache performance in the next phase is minimized and unnecessary write-backs are avoided, which improves the write endurance of non-volatile main memory;
3. Owing to step (3), the data distribution in the cache better matches the characteristics of access requests in the current phase, so the access hit rate also improves, further guaranteeing the performance of the cache system.
According to another aspect of the invention, an eviction-data-aware cache system for a hybrid memory environment is also provided, comprising an eviction-data monitoring module, a page replacement module, and an adaptive space partitioning module. The monitoring module collects metadata and performs eviction-weight analysis on the records of data evicted from the cache, in support of subsequent page replacement and space partitioning; the page replacement module, when a request to the cache misses, selects for eviction a page whose loss affects cache performance as little as possible and that, as far as possible, does not need to be written back to non-volatile main memory; the adaptive space partitioning module mainly accounts for the influence on cache performance of pages that have never been evicted, and adaptively decides the type of page to evict based on the request and the data of the monitoring module, thereby achieving space partitioning.
The eviction-data monitoring module runs when the cache selects a specific page for eviction. It is mainly used to monitor the records of pages evicted from the cache in the recent past: from the address it computes the record's position in the eviction record set to determine whether a corresponding eviction record already exists; if not, it initializes the metadata record of the page, and if so, it updates the relevant metadata of the record (eviction time, eviction count, and so on). When a page must be replaced, it analyzes the locality weights of all eviction records of previously evicted pages, mainly the eviction-time and eviction-frequency weights, and from these weights derives the eviction weight of the corresponding cache page.
The page replacement module runs whenever a cache request misses and is used to select the final victim. It first determines whether the default candidate victim page belongs to the pages that have been evicted before, and applies different replacement policies to pages that have never been evicted and to pages that were evicted recently: pages that have never been evicted are preferentially evicted outright, while for pages that were evicted recently the selection is based on their eviction weights, with clean pages of smaller weight preferred as victims, so as to preserve the cache hit rate while reducing write-backs.
The adaptive space partitioning module, when a request misses, first decides the type of data to evict. Specifically, it judges from the request state and the records of the monitoring module which type of page is more likely to be accessed in the near future, and adaptively selects a page of the other type as the victim, adapting to changes in upper-level access requests through the partitioning of the space between the two types of data.
Compared with the prior art, the above technical scheme of the present invention has the following advantages and technical effects:
1. Owing to the eviction-data monitoring module, the eviction weight of each page in a cache group can be accurately detected and analyzed and stored with low space overhead, yielding the likelihood that an evicted page will be accessed again in the next phase and providing effective support for subsequent page replacement and space partitioning;
2. Owing to the page replacement module, which uses the monitored eviction weight of a page together with the page's own access characteristics, the performance of the cache itself and the write endurance of non-volatile main memory are considered jointly, so cache performance does not suffer a large penalty merely in pursuit of fewer write-backs;
3. Owing to the adaptive space partitioning module, pages that have never been evicted are also taken into account, so that the distribution of the different kinds of data in the cache matches the access tendency of the current phase and cache performance is further improved.
In general, compared with the prior art, because the above technical scheme chooses eviction victims by combining the eviction weight of a page with the characteristics of the access requests, it effectively extends the lifetime of non-volatile main memory while cache performance remains essentially unaffected.
Brief description of the drawings
Fig. 1 is a flow chart of the eviction-data monitoring method of an embodiment of the present invention;
Fig. 2 is a flow chart of the page replacement policy of an embodiment of the present invention;
Fig. 3 is a flow chart of the adaptive space partitioning mechanism of an embodiment of the present invention;
Fig. 4 is a block diagram of the system modules of an embodiment of the present invention.
Specific embodiment
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the present invention and do not limit it. In addition, the technical features involved in the various embodiments of the present invention described below may be combined with each other as long as they do not conflict.
The present invention provides an eviction-data-aware caching method comprising the following steps:
(1) As shown in Fig. 1, the eviction-data monitoring method runs throughout system operation and specifically comprises the following sub-steps:
(1.1) Data collection: when the cache selects a specific page for eviction, the location of its record is computed from its address and it is checked whether a corresponding eviction record already exists. If not, a record is initialized with the page address, an eviction count of 1, and the current time as the eviction time; if it exists, the record is fetched, its eviction time is updated to the current time, and its eviction count is incremented by one.
(1.2) Data analysis: when a page must be replaced, all eviction records of previously evicted pages are analyzed. First, the eviction-time weight: all records are sorted by eviction time from smallest to largest and assigned weights 1 to n in that order. Second, the eviction-frequency weight: all records are sorted by eviction count from smallest to largest and assigned weights 1 to n in that order. Finally, the time weight and frequency weight of each page are summed to give its eviction weight, and the previously evicted pages in the cache are ordered by eviction weight from smallest to largest.
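To make sub-steps (1.1)-(1.2) concrete, the following is a minimal sketch, in Python, of the eviction-record bookkeeping and the rank-based weight computation described above. The class and method names (EvictionRecord, EvictionMonitor, on_evict, eviction_weights) are illustrative assumptions, not taken from the patent; this is a sketch of the described scheme, not the patented implementation.

```python
# Minimal sketch of sub-steps (1.1)-(1.2); all names are illustrative assumptions.
import time
from dataclasses import dataclass, field

@dataclass
class EvictionRecord:
    page_addr: int
    evict_count: int = 1                                  # set to 1 on first eviction
    last_evict_time: float = field(default_factory=time.time)

class EvictionMonitor:
    def __init__(self):
        self.records = {}                                 # page_addr -> EvictionRecord

    def on_evict(self, page_addr):
        """(1.1) Initialize or update the eviction record of an evicted page."""
        rec = self.records.get(page_addr)
        if rec is None:
            self.records[page_addr] = EvictionRecord(page_addr)
        else:
            rec.evict_count += 1
            rec.last_evict_time = time.time()

    def eviction_weights(self):
        """(1.2) Rank records by eviction time and by eviction count (weights 1..n
        for each ordering) and sum the two ranks to get each page's eviction weight."""
        recs = list(self.records.values())
        weights = {r.page_addr: 0 for r in recs}
        for rank, r in enumerate(sorted(recs, key=lambda x: x.last_evict_time), start=1):
            weights[r.page_addr] += rank                  # temporal-locality rank
        for rank, r in enumerate(sorted(recs, key=lambda x: x.evict_count), start=1):
            weights[r.page_addr] += rank                  # eviction-frequency rank
        return weights
```

Under this ranking, smaller weights mark pages that were evicted longer ago and less often; the replacement policy below treats these as the preferred victims.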
(2) As shown in Fig. 2, the page replacement policy runs each time a cache request misses and specifically comprises the following sub-steps:
(2.1) For the default candidate victim page, check whether the page has a record in the data of the monitoring module.
(2.2) If a record exists, the page has been evicted before, and the decision is refined using the weights produced by the monitoring module; if no record exists, go to (2.4).
(2.3) Compute the current threshold from the weights of all previously evicted pages and select the victim: if clean pages exist among the pages whose weight is below the threshold, preferentially evict such a page; if no clean page exists, evict the dirty page with the smallest weight.
(2.4) If the page has never been evicted, it belongs to the never-evicted pages. Select the victim according to the default LRU rule, giving priority to clean pages.
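A corresponding sketch of the victim selection in sub-steps (2.1)-(2.4) follows, reusing the EvictionMonitor defined above. The CachePage type and the exact way the mean and median are combined into a threshold are assumptions made only for illustration (claim 2 states the threshold is derived from both values but does not give a formula).

```python
# Illustrative sketch of sub-steps (2.1)-(2.4); CachePage and the threshold
# formula are assumptions, not taken from the patent text.
from dataclasses import dataclass
from statistics import mean, median

@dataclass
class CachePage:
    addr: int
    dirty: bool

def choose_victim(pages_lru, monitor):
    """pages_lru: cache pages of the group, least recently used first."""
    weights = monitor.eviction_weights()
    candidate = pages_lru[0]                              # (2.1) default LRU candidate
    if candidate.addr not in weights:
        # (2.4) never-evicted candidate: default LRU rule, clean pages first
        never_evicted_clean = [p for p in pages_lru
                               if p.addr not in weights and not p.dirty]
        return never_evicted_clean[0] if never_evicted_clean else candidate
    # (2.2)-(2.3) previously evicted: threshold from mean and median of all weights
    vals = list(weights.values())
    threshold = (mean(vals) + median(vals)) / 2           # assumed combination rule
    clean_below = [p for p in pages_lru
                   if p.addr in weights and weights[p.addr] <= threshold and not p.dirty]
    if clean_below:                                       # prefer a clean page below threshold
        return min(clean_below, key=lambda p: weights[p.addr])
    dirty_evicted = [p for p in pages_lru if p.addr in weights and p.dirty]
    if dirty_evicted:                                     # else smallest-weight dirty page
        return min(dirty_evicted, key=lambda p: weights[p.addr])
    return candidate
```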
(3) As shown in Fig. 3, the adaptive space partitioning mechanism specifically comprises the following sub-steps:
(3.1) For each cache access request sent by the upper layer, judge whether it hits.
(3.2) If it hits, continue the access; if it misses, query the monitoring module for a corresponding eviction record. If one exists, go to (3.3); otherwise go to (3.4).
(3.3) The page was evicted recently, so its eviction caused this miss. The space belonging to previously evicted pages should therefore be enlarged, i.e., a suitable victim is chosen from among the pages that have never been evicted.
(3.4) The page has never been evicted, which means it has neither been accessed nor been evicted recently. The space belonging to never-evicted pages should therefore be enlarged, i.e., a suitable victim is chosen from among the pages that have been evicted before.
As shown in Fig. 4, the present invention provides an eviction-data-aware cache system for a hybrid memory environment, comprising an eviction-data monitoring module, a page replacement module, and an adaptive space partitioning module. The monitoring module collects metadata and performs eviction-weight analysis on the records of data evicted from the cache, in support of subsequent page replacement and space partitioning; the page replacement module, when a request to the cache misses, selects for eviction a page whose loss affects cache performance as little as possible and that does not need to be written back to non-volatile main memory; the adaptive space partitioning module mainly accounts for the influence on cache performance of pages that have never been evicted, and adaptively decides the type of page to evict based on the request and the data of the monitoring module, thereby achieving space partitioning.
The eviction-data monitoring module runs when the cache selects a specific page for eviction. It is mainly used to monitor the records of pages evicted from the cache in the recent past: from the address it computes the record's position in the eviction record set to determine whether a corresponding eviction record already exists; if not, it initializes the metadata record of the page, and if so, it updates the relevant metadata of the record (eviction time, eviction count, and so on). When a page must be replaced, it analyzes the locality weights of all eviction records of previously evicted pages, mainly the eviction-time and eviction-frequency weights, and from these weights derives the eviction weight of the corresponding cache page.
The page replacement module runs whenever a cache request misses and is used to select the final victim. It first determines whether the default candidate victim page belongs to the pages that have been evicted before, and applies different replacement policies to pages that have never been evicted and to pages that were evicted recently: pages that have never been evicted are preferentially evicted outright, while for pages that were evicted recently the selection is based on their eviction weights, with clean pages of smaller weight preferred as victims, so as to preserve the cache hit rate while reducing write-backs.
The adaptive space partitioning module, when a request misses, first decides the type of data to evict. Specifically, it judges from the request state and the records of the monitoring module which type of page is more likely to be accessed in the near future, and adaptively selects a page of the other type as the victim, adapting to changes in upper-level access requests through the partitioning of the space between the two types of data.
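One possible way the three modules interact on a miss is sketched below by chaining the earlier sketches; the Cache interface used here (pages_lru, write_back, replace) is a hypothetical wrapper introduced only to show the flow, not an API described in the patent.

```python
# Hypothetical miss-handling path combining the three module sketches above.
def on_miss(missed_addr, cache, monitor):
    pages_lru = cache.pages_lru()                         # least recently used first
    pool = select_victim_pool(missed_addr, pages_lru, monitor)   # adaptive partitioning
    victim = choose_victim(pool, monitor)                 # weight/threshold-based selection
    monitor.on_evict(victim.addr)                         # record the eviction
    if victim.dirty:
        cache.write_back(victim)                          # only dirty pages reach NVM
    cache.replace(victim, missed_addr)                    # install the missed page
```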
The present invention provides an eviction-data-aware caching method for a hybrid memory environment. Owing to step (1), the eviction characteristics of a page can be obtained and analyzed immediately when an eviction is executed, providing a theoretical basis for deciding whether the page may be evicted again after it is next stored in the cache; owing to step (2), the negative impact of the evicted page on cache performance in the next phase is minimized and unnecessary write-backs are avoided, improving the write endurance of non-volatile main memory; owing to step (3), the data distribution in the cache better matches the characteristics of access requests in the current phase, so the access hit rate also improves, further guaranteeing the performance of the cache system.
For the eviction-data-aware cache system for a hybrid memory environment provided by the invention: owing to the eviction-data monitoring module, the eviction weight of each page in a cache group can be accurately detected and analyzed and stored with low space overhead, yielding the likelihood that an evicted page will be accessed again in the next phase and providing effective support for subsequent page replacement and space partitioning; owing to the page replacement module, which uses the monitored eviction weight of a page together with the page's own access characteristics, the performance of the cache itself and the write endurance of non-volatile main memory are considered jointly, so cache performance does not suffer a large penalty merely in pursuit of fewer write-backs; owing to the adaptive space partitioning module, pages that have never been evicted are also taken into account, so that the distribution of the different kinds of data in the cache matches the access tendency of the current phase and cache performance is further improved.
Those skilled in the art will readily appreciate that the foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit it; any modifications, equivalent substitutions, and improvements made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (2)

1. An eviction-data-aware caching method for a hybrid memory environment, characterized by comprising the following steps:
(1) analyzing the eviction weights of the cache pages from the eviction data collected by the cache system;
(2) based on the eviction weights obtained by the monitor, and in combination with the access characteristics of the pages themselves, evicting the page that best serves both cache performance and the lifetime of the non-volatile main memory;
(3) adaptively adjusting, based on request characteristics and the eviction-data records, the type of page chosen as the eviction victim during page replacement;
said step (1) specifically comprises the following sub-steps:
(1.1) after the victim page is determined, the monitoring method obtains the metadata of that page, including the page address, the eviction time, and the eviction count;
(1.2) when a miss occurs, the locality characteristics of the eviction records are analyzed from the collected metadata and the eviction weight of the corresponding cache page is computed;
said step (2) specifically comprises the following sub-steps:
(2.1) if the eviction candidate is a page that was evicted recently, the corresponding eviction threshold is computed from the eviction weights of all recently evicted pages in the cache group;
(2.2) a clean page within the eviction threshold is preferentially selected as the victim in order to reduce extra write-backs to the non-volatile main memory; if no such clean page exists, the dirty page with the smallest weight is selected as the victim;
said step (3) specifically comprises the following sub-steps:
(3.1) when a request misses, if the requested page has a corresponding record in the cache's eviction record set, the record was evicted recently, and a cache page that has not been evicted recently is selected as the victim;
(3.2) if the requested page has no record in the cache's eviction record set, the record has neither been accessed nor been evicted recently, and a page that was evicted recently is selected as the victim.
2. An eviction-data-aware cache system for a hybrid memory environment, characterized by comprising a monitoring module, a page replacement module, and an adaptive space partitioning module, wherein:
the monitoring module is used to collect metadata and perform eviction-weight analysis on the records of data evicted from the cache, in support of subsequent page replacement and space partitioning;
the page replacement module is used, when a request to the cache system misses, to select for eviction a page whose loss affects cache performance as little as possible and that, as far as possible, does not need to be written back to the non-volatile main memory;
the adaptive space partitioning module is used when a request misses: if the requested page has a corresponding record in the cache's eviction record set, the record was evicted recently, and a cache page that has not been evicted recently is selected as the victim; if the requested page has no record in the cache's eviction record set, the record has not been accessed recently and has not been evicted, and a page that was evicted recently is selected as the victim;
the monitoring module comprises an evicted-page metadata collection sub-module and a page eviction-weight analysis sub-module, wherein the evicted-page metadata collection sub-module is used to collect the metadata of the evicted page after an eviction occurs, the metadata including the page address, the page eviction time, and the page eviction count, and the page eviction-weight analysis sub-module is used, when a page must be evicted, to analyze the locality weight of each record in the eviction record set and then compute the eviction weight of the corresponding cache page;
the page replacement module comprises an eviction-threshold evaluation sub-module and a victim-page selection sub-module, wherein the eviction-threshold evaluation sub-module is used to compute the mean and the median of the eviction weights of the pages in the current cache group and to derive the eviction threshold from these two values, and the victim-page selection sub-module is used to determine different replacement methods according to the type of the eviction candidate: if the candidate has not been evicted recently, a clean page among the pages that have never been evicted is preferentially selected as the victim; if the candidate was evicted recently, the corresponding eviction threshold is computed from the eviction weights obtained by the monitor and, combined with the access characteristics of the page itself, a clean page within the eviction threshold is preferentially selected as the victim in order to reduce extra write-backs to the non-volatile main memory; if no such clean page exists, the dirty page with the smallest weight is selected as the victim, thereby achieving the goal of preserving cache performance and extending the lifetime of the non-volatile main memory.
CN201610278653.5A 2016-04-28 2016-04-28 Eviction-data-aware caching method and system in a hybrid memory environment Active CN105975402B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610278653.5A CN105975402B (en) 2016-04-28 2016-04-28 Eviction-data-aware caching method and system in a hybrid memory environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610278653.5A CN105975402B (en) 2016-04-28 2016-04-28 Eviction-data-aware caching method and system in a hybrid memory environment

Publications (2)

Publication Number Publication Date
CN105975402A CN105975402A (en) 2016-09-28
CN105975402B true CN105975402B (en) 2019-01-18

Family

ID=56994878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610278653.5A Active CN105975402B (en) Eviction-data-aware caching method and system in a hybrid memory environment

Country Status (1)

Country Link
CN (1) CN105975402B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11481143B2 (en) 2020-11-10 2022-10-25 Red Hat, Inc. Metadata management for extent-based storage system

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106528454B (en) * 2016-11-04 2019-03-29 中国人民解放军国防科学技术大学 A kind of memory system caching method based on flash memory
CN107844511B (en) * 2017-06-16 2021-08-17 珠海金山网络游戏科技有限公司 Game resource caching method and system based on cycle cost
CN109086462A (en) * 2018-09-21 2018-12-25 郑州云海信息技术有限公司 The management method of metadata in a kind of distributed file system
CN111177024B (en) * 2019-12-30 2022-09-06 青岛海尔科技有限公司 A kind of memory optimization processing method and device
WO2022021178A1 (en) * 2020-07-30 2022-02-03 华为技术有限公司 Cache method, system, and chip
CN112764681B (en) * 2021-01-21 2024-02-13 上海七牛信息技术有限公司 Cache elimination method and device with weight judgment and computer equipment
CN112926206B (en) * 2021-02-25 2024-04-26 北京工业大学 Workflow engine cache elimination method based on industrial process background

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1585347A (en) * 2004-05-21 2005-02-23 中国科学院计算技术研究所 Network agent buffer substitution by using access characteristics of network users
US20100153646A1 (en) * 2008-12-11 2010-06-17 Seagate Technology Llc Memory hierarchy with non-volatile filter and victim caches
US20110145506A1 (en) * 2009-12-16 2011-06-16 Naveen Cherukuri Replacing Cache Lines In A Cache Memory
CN104090852A (en) * 2014-07-03 2014-10-08 华为技术有限公司 Method and equipment for managing hybrid cache

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1585347A (en) * 2004-05-21 2005-02-23 中国科学院计算技术研究所 Network agent buffer substitution by using access characteristics of network users
US20100153646A1 (en) * 2008-12-11 2010-06-17 Seagate Technology Llc Memory hierarchy with non-volatile filter and victim caches
US20110145506A1 (en) * 2009-12-16 2011-06-16 Naveen Cherukuri Replacing Cache Lines In A Cache Memory
CN104090852A (en) * 2014-07-03 2014-10-08 华为技术有限公司 Method and equipment for managing hybrid cache

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
H-ARC: A non-volatile memory based cache policy for Solid State Drives; Ziqi Fan et al.; 2014 30th Symposium on Mass Storage Systems and Technologies (MSST); 2014-12-31; pp. 1-11 *
VAIL: A Victim-Aware Cache Policy for Improving Lifetime of Hybrid Memory; Youchuang Jia et al.; PMAM'18: Proceedings of the 9th International Workshop on Programming Models and Applications for Multicores and Manycores; 2018-02-28; pp. 1-6 *
An efficient cache system based on hybrid memory (基于混合内存的高效缓存系统); 贾佑闯; China Master's Theses Full-text Database; 2017-11-15 (No. 11); I137-19 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11481143B2 (en) 2020-11-10 2022-10-25 Red Hat, Inc. Metadata management for extent-based storage system

Also Published As

Publication number Publication date
CN105975402A (en) 2016-09-28

Similar Documents

Publication Publication Date Title
CN105975402B (en) Eviction-data-aware caching method and system in a hybrid memory environment
US9430376B2 (en) Priority-based garbage collection for data storage systems
US9846641B2 (en) Variability aware wear leveling
US10430084B2 (en) Multi-tiered memory with different metadata levels
CN102103547B (en) Replace the cache line in cache memory
CN111143243B (en) A cache prefetching method and system based on NVM hybrid memory
CN103608782B (en) Selective data storage in LSB page face and the MSB page
CN105094686B (en) Data cache method, caching and computer system
CN106528454B (en) A kind of memory system caching method based on flash memory
CN104081364B (en) Collaborative caching
CN105095116A (en) Cache replacing method, cache controller and processor
CN106233265A (en) Access frequency hierarchical structure is used for evicting from the selection of target
CN106569960B (en) A kind of last level cache management method mixing main memory
CN108762671A (en) Hybrid memory system based on PCM and DRAM and management method thereof
CN105930282A (en) Data cache method used in NAND FLASH
JP2014517394A (en) Large RAM cache
CN107391035A (en) It is a kind of that the method for reducing solid-state mill damage is perceived by misprogrammed
Wu et al. APP-LRU: A new page replacement method for PCM/DRAM-based hybrid memory systems
CN107590084A (en) A kind of page level buffering area improved method based on classification policy
CN109542803A (en) A kind of mixing multi-mode dsc data cache policy based on deep learning
Han et al. Enhanced wear-rate leveling for PRAM lifetime improvement considering process variation
CN112395221B (en) A cache replacement method and device based on energy consumption characteristics of MLC STT-RAM
Lin et al. Greedy page replacement algorithm for flash-aware swap system
US9760488B2 (en) Cache controlling method for memory system and cache system thereof
Park et al. Filtering dirty data in dram to reduce pram writes

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant