
CN103729309B - A directory Cache coherence method - Google Patents

A directory Cache coherence method

Info

Publication number
CN103729309B
CN103729309B
Authority
CN
China
Prior art keywords
cache
directory
memory
catalogue
shared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410017448.4A
Other languages
Chinese (zh)
Other versions
CN103729309A (en)
Inventor
韩东涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
IEIT Systems Co Ltd
Original Assignee
Inspur Electronic Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Electronic Information Industry Co Ltd filed Critical Inspur Electronic Information Industry Co Ltd
Priority to CN201410017448.4A priority Critical patent/CN103729309B/en
Publication of CN103729309A publication Critical patent/CN103729309A/en
Application granted granted Critical
Publication of CN103729309B publication Critical patent/CN103729309B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present invention provides a directory Cache coherence method. Its implementation combines a limited directory with a full-map directory, setting up a two-level, dual-protocol directory storage structure on top of an existing memory Cache, and uses a share-count-weighted pseudo-LRU replacement algorithm between the memory layer and the memory Cache layer. Compared with the prior art, the method avoids the excessive storage overhead of a full-map directory, the directory-entry overflow that constrains a limited directory, and the poor time efficiency of a chained directory; it is practical and easy to adopt.

Description

A Directory Cache Coherence Method

Technical Field

The present invention relates to the field of computer technology, and more specifically to a directory Cache coherence method.

Background Art

In a multistage network, a cache directory holds information about where copies of cache lines reside, in order to support cache coherence. The main differences among directory schemes are how the directory maintains its information and what information it stores. The first directory scheme used a single central directory holding copies of all cache directories; the central directory can supply all the information needed to maintain coherence. It is therefore very large and must be searched associatively, much like the directory of an individual cache. In a large multiprocessor system, a central directory suffers from two drawbacks: contention and long search times. The distributed directory scheme was proposed by Censier and Feautrier. In a distributed directory, each memory module maintains its own directory, which records the state and presence information of every memory block; the state information is local, while the presence information indicates which caches hold a copy of the block. Directory schemes fall into three categories: full-map directories, limited directories, and chained directories. A full-map directory incurs excessive storage overhead, a limited directory is constrained by directory-entry overflow, and a chained directory has poor time efficiency. On this basis, the present invention provides an improved directory Cache coherence method that combines a limited directory with a full-map directory to solve these problems.

Summary of the Invention

The technical task of the present invention is to overcome the shortcomings of the prior art and to provide an improved directory Cache coherence method that is simple to operate and easy to implement.

The technical solution of the present invention is realized as follows. The specific implementation of the directory Cache coherence method is:

1. Set up a two-level directory storage structure consisting of a full-map directory and a limited directory. The full-map directory stores data about every block in global memory, so that every cache in the system can hold a copy of any data block at the same time; each of its directory entries contains N pointers, where N is the number of processors in the system. The limited directory differs from the full-map directory in that each of its directory entries contains only a fixed number of pointers.

2. Use a share-count-weighted pseudo-least-recently-used (pseudo-LRU) algorithm between the memory layer and the memory Cache layer. Assuming each directory entry of the memory layer uses Q pointers, only cache lines whose share count is less than Q are replaced with this algorithm; when the share counts of all cache lines in the memory Cache are greater than Q, the cache line with the smallest share count is evicted from the memory Cache and the corresponding invalidation processing is carried out.

In the two-level directory storage structure, a directory entry implemented with the full-map method contains a processor bit for each processor and a dirty bit: a processor bit indicates whether the corresponding processor's cache block is present or absent; if the dirty bit is "1" and exactly one processor bit is "1", that processor may write the block. Each cache block has two state bits: one indicates whether the block is valid, and the other indicates whether a valid block may be written.
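As a concrete illustration of the two entry formats and of the write-permission rule described above, here is a minimal C sketch; the structure layouts, field widths, and the example values of N and Q are assumptions made for illustration, not requirements of the invention.

    #include <stdbool.h>
    #include <stdint.h>

    #define N_PROCS 32   /* N: number of processors in the system (assumed) */
    #define Q        4   /* pointers per limited-directory entry (assumed)  */

    /* Full-map entry (memory Cache layer): one processor bit per processor
     * plus a dirty bit, so any block may be cached by every processor.     */
    typedef struct {
        uint32_t processor_bits;  /* bit i set => processor i holds a copy  */
        bool     dirty;           /* set when a single sharer may write     */
    } full_map_entry_t;

    /* Limited entry (memory layer): a fixed number Q of sharer pointers,
     * independent of the number of processors in the system.               */
    typedef struct {
        uint8_t sharer[Q];        /* processor IDs of up to Q sharers       */
        uint8_t count;            /* sharer slots currently in use          */
        bool    dirty;
    } limited_entry_t;

    /* Write-permission rule from the description: the dirty bit must be set,
     * exactly one processor bit may be "1", and that bit must belong to the
     * requesting processor.                                                 */
    static bool may_write(const full_map_entry_t *e, unsigned proc)
    {
        bool exactly_one = e->processor_bits != 0 &&
                           (e->processor_bits & (e->processor_bits - 1)) == 0;
        return e->dirty && exactly_one && ((e->processor_bits >> proc) & 1u);
    }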

The details of the share-count-weighted pseudo-LRU algorithm are as follows (a C sketch of the victim selection in steps 11)-13) is given after the list):

1) If the cache line is in the memory Cache, go to step 2; otherwise go to step 5;

2) Read the data from the memory Cache;

3) If the requester is a new sharing node, go to step 4; otherwise go to step 14;

4) Modify the Cache directory entry; go to step 14;

5) Read the data from memory;

6) If the requester is a new sharing node, go to step 7; otherwise go to step 9;

7) If the memory directory entry overflows, go to step 8; otherwise go to step 9;

8) Record the overflow entry;

9) If the Cache has a free directory entry, go to step 10; otherwise go to step 11;

10) Add the data to the Cache and modify the Cache directory entry according to the memory directory entry; go to step 14;

11) If the Cache contains a directory entry whose share count is less than Q, go to step 12; otherwise go to step 13;

12) Among the cache lines whose share count is less than Q, use the LRU algorithm to select one to evict from the Cache; go to step 10;

13) Select the cache line with the smallest share count and perform the corresponding shared-invalidation processing; go to step 10;

14) Done.
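The victim selection in steps 11)-13) above can be made concrete with the following minimal C sketch. It assumes each memory-Cache line carries a share count (taken from its full-map entry) and a last-used timestamp as the pseudo-LRU approximation; the type and function names and the timestamp scheme are illustrative assumptions, not part of the claimed method.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        unsigned share_count;  /* sharers recorded in the line's full-map entry */
        uint64_t last_used;    /* pseudo-LRU age stamp (assumed bookkeeping)    */
    } line_meta_t;

    /* Pick the line to evict from the memory Cache (n > 0 lines, threshold q).
     * Lines shared by fewer than q processors compete by LRU (step 12); only
     * if every line has share_count >= q is the line with the smallest share
     * count chosen, to be invalidated by the caller (step 13).                */
    static size_t pick_victim(const line_meta_t *line, size_t n, unsigned q)
    {
        size_t lru = n;        /* n means "no candidate below the threshold"   */
        size_t min_share = 0;

        for (size_t i = 0; i < n; i++) {
            if (line[i].share_count < q &&
                (lru == n || line[i].last_used < line[lru].last_used))
                lru = i;
            if (line[i].share_count < line[min_share].share_count)
                min_share = i;
        }
        return (lru != n) ? lru : min_share;
    }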

Compared with the prior art, the beneficial effects produced by the present invention are:

The directory Cache coherence method of the present invention combines a limited directory with a full-map directory into a two-level directory Cache coherence scheme, which solves the excessive storage overhead of the full-map directory, the directory-entry overflow that constrains the limited directory, and the poor time efficiency of the chained directory. A limited directory is used at the memory layer; because a single limited-directory entry is small, the memory layer, which needs a large number of directory entries, saves considerable storage space. At the same time, a full-map directory is used at the memory Cache layer; since the Cache capacity is limited, the total space remains modest even though a single entry is large. This not only solves the directory-entry overflow problem of the limited directory, but also keeps the most frequently used data and their directory entries in the memory Cache layer, which uses the full-map directory, so the access speed of this two-level storage structure is comparable to that of its first level, the memory Cache. The method is practical and easy to adopt.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the two-level directory storage structure.

Figure 2 is a flowchart of the share-count-weighted pseudo-LRU algorithm of the present invention.

Detailed Description

The directory Cache coherence method of the present invention is described in detail below with reference to the accompanying drawings.

As shown in Figure 1, the present invention proposes a directory Cache coherence method, whose specific implementation is as follows.

A two-level directory storage structure is set up that combines a limited directory with a full-map directory, and a two-level directory Cache coherence method is then applied, which avoids the excessive storage overhead of a full-map directory. The full-map directory stores data about every block in global memory, so that every cache in the system can hold a copy of any data block at the same time; each directory entry contains N pointers, where N is the number of processors in the system. The limited directory differs from the full-map directory in that, regardless of system size, each of its directory entries contains a fixed number of pointers.

A replacement algorithm, namely the share-count-weighted pseudo-LRU algorithm, is used between the memory layer and the memory Cache layer, so that cache lines whose share count exceeds the number of limited-directory pointers at the memory layer are guaranteed to stay in the memory Cache; cache data coherence is thus maintained with relatively little storage space.

In the above two-level directory storage structure, a directory entry implemented with the full-map method contains a processor bit for each processor and a dirty bit: a processor bit indicates whether the corresponding processor's cache block is present or absent; if the dirty bit is "1" and exactly one processor bit is "1", that processor may write the block. Each cache block has two state bits: one indicates whether the block is valid, and the other indicates whether a valid block may be written. The cache coherence method must keep the state bits of the memory directory consistent with the state bits of the caches. The limited-directory method mitigates the problem of an oversized directory: if the number of simultaneous cached copies of any data block is bounded, the directory size does not exceed a constant.
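To make the storage argument concrete, the short worked example below compares the per-entry overhead of the two formats for assumed parameters (64 processors, 4 limited-directory pointers); the figures are illustrative, not taken from the patent.

    #include <stdio.h>

    int main(void)
    {
        /* Assumed example parameters: 64 processors, 4 limited-directory
         * pointers, each pointer wide enough to name any processor.      */
        const int n_procs = 64;
        const int q_ptrs  = 4;
        int ptr_bits = 0;
        for (int v = n_procs - 1; v > 0; v >>= 1)   /* ceil(log2(n_procs)) */
            ptr_bits++;

        int full_map_bits = n_procs + 1;            /* N processor bits + dirty */
        int limited_bits  = q_ptrs * ptr_bits + 1;  /* Q pointers + dirty       */

        printf("full-map entry: %d bits per block\n", full_map_bits);  /* 65 */
        printf("limited entry : %d bits per block\n", limited_bits);   /* 25 */
        return 0;
    }

The gap grows with the processor count N, which is why the limited directory is placed at the memory layer, where many entries are needed, while the full map is confined to the much smaller memory Cache.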

With the share-count-weighted pseudo-LRU algorithm, cache lines whose share count exceeds the number of limited-directory pointers at the memory layer are guaranteed to remain in the memory Cache. Assuming each directory entry at the memory layer uses Q pointers, only cache lines whose share count is less than Q are candidates for LRU replacement. Only when the share counts of all cache lines in the memory Cache are greater than Q is the cache line with the smallest share count evicted from the memory Cache and invalidated accordingly. Such directory-entry overflow can be avoided by sizing the memory Cache appropriately.

As shown in Figure 2, the details of the share-count-weighted pseudo-LRU algorithm are as follows (a self-contained C sketch of this flow follows the list):

1) If the cache line is in the memory Cache, go to step 2; otherwise go to step 5.

2) Read the data from the memory Cache.

3) If the requester is a new sharing node, go to step 4; otherwise go to step 14.

4) Modify the Cache directory entry; go to step 14.

5) Read the data from memory.

6) If the requester is a new sharing node, go to step 7; otherwise go to step 9.

7) If the memory directory entry overflows, go to step 8; otherwise go to step 9.

8) Record the overflow entry.

9) If the Cache has a free directory entry, go to step 10; otherwise go to step 11.

10) Add the data to the Cache and modify the Cache directory entry according to the memory directory entry; go to step 14.

11) If the Cache contains a directory entry whose share count is less than Q, go to step 12; otherwise go to step 13.

12) Among the cache lines whose share count is less than Q, use the LRU algorithm to select one to evict from the Cache; go to step 10.

13) Select the cache line with the smallest share count and perform the corresponding shared-invalidation processing; go to step 10.

14) Done.
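The flow of Figure 2 can be exercised end to end with the self-contained toy model below. It folds the victim choice sketched earlier into pick_slot() and relies on assumed, illustrative structures (a tiny fully searched memory Cache, one limited directory entry supplied by the caller, timestamps for pseudo-LRU); invalidation traffic to remote caches is deliberately omitted. The step numbers in the comments refer to the list above.

    #include <stdbool.h>
    #include <stdint.h>

    #define Q           2      /* limited-directory pointers per block (assumed) */
    #define CACHE_LINES 4      /* toy memory-Cache capacity (assumed)            */

    typedef struct {           /* memory layer: limited directory entry          */
        uint8_t sharer[Q];
        uint8_t count;
        bool    overflow;      /* step 8: sharers beyond Q are only recorded     */
    } mem_entry_t;

    typedef struct {           /* memory-Cache layer: full-map entry             */
        bool     valid;
        uint64_t block;
        uint32_t presence;     /* one presence bit per processor (up to 32 here) */
        unsigned share_count;
        uint64_t last_used;    /* pseudo-LRU timestamp                           */
    } cache_entry_t;

    static cache_entry_t cache[CACHE_LINES];
    static uint64_t      now;

    static int lookup(uint64_t block)                        /* step 1 */
    {
        for (int i = 0; i < CACHE_LINES; i++)
            if (cache[i].valid && cache[i].block == block)
                return i;
        return -1;
    }

    static int pick_slot(void)                               /* steps 9, 11-13 */
    {
        int lru = -1, min_share = 0;
        for (int i = 0; i < CACHE_LINES; i++) {
            if (!cache[i].valid)
                return i;                                    /* step 9: free entry        */
            if (cache[i].share_count < Q &&
                (lru < 0 || cache[i].last_used < cache[lru].last_used))
                lru = i;                                     /* step 12: LRU, sharers < Q */
            if (cache[i].share_count < cache[min_share].share_count)
                min_share = i;                               /* step 13: smallest share   */
        }
        return lru >= 0 ? lru : min_share;   /* invalidating evicted sharers is omitted   */
    }

    /* One read of `block` by processor `proc`; `dir` is the block's
     * memory-layer directory entry.                                     */
    void handle_read(mem_entry_t *dir, uint64_t block, unsigned proc)
    {
        int i = lookup(block);
        if (i >= 0) {                                        /* step 2: hit in memory Cache */
            if (!((cache[i].presence >> proc) & 1u)) {       /* step 3: new sharing node    */
                cache[i].presence |= 1u << proc;             /* step 4: update Cache entry  */
                cache[i].share_count++;
            }
            cache[i].last_used = ++now;
            return;                                          /* step 14 */
        }

        /* Steps 5-8: read from memory and update the limited directory. */
        bool known = false;
        for (int s = 0; s < dir->count; s++)
            known = known || (dir->sharer[s] == proc);
        if (!known) {                                        /* step 6 */
            if (dir->count < Q)
                dir->sharer[dir->count++] = (uint8_t)proc;
            else
                dir->overflow = true;                        /* steps 7-8 */
        }

        /* Step 10: install the block, copying the memory directory entry
         * into the full-map Cache entry and adding the requester.        */
        i = pick_slot();
        cache_entry_t e = { .valid = true, .block = block,
                            .presence = 1u << proc, .share_count = 1,
                            .last_used = ++now };
        for (int s = 0; s < dir->count; s++)
            if (!((e.presence >> dir->sharer[s]) & 1u)) {
                e.presence |= 1u << dir->sharer[s];
                e.share_count++;
            }
        cache[i] = e;
    }

In a simulation, handle_read() would be called once per load with the directory entry of the addressed block; write handling and invalidation messages would be layered on top of this skeleton.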

An improved directory Cache coherence method has thus been fully realized. For system performance, the sizes of the memory Cache and of the cache lines should be set according to the task granularity, the temporal locality of the application, and the proportion of read and write operations, and a suitable replacement algorithm should be used, so that the memory Cache achieves a high hit rate. When a directory entry at the memory layer overflows, its directory information is copied into the memory Cache layer. Under the share-count-weighted replacement algorithm, this directory information stays in the memory Cache until its number of sharing nodes falls below Q, the number of limited-directory pointers at the memory layer, and only then can it be evicted from the memory Cache. Therefore, the method not only solves the directory-entry overflow problem of the limited directory, but also keeps the most frequently used data and their directory entries in the memory Cache layer, which uses the full-map directory, so the access speed of this two-level storage structure is comparable to that of its first level, the memory Cache.

Of course, the present invention may have various other embodiments. Without departing from the spirit and essence of the present invention, those skilled in the art can make various corresponding changes and modifications according to the present invention, but all such changes and modifications shall fall within the scope of protection of the claims of the present invention.

Claims (3)

1. A directory Cache coherence method, characterized in that its implementation process is:
First, set up a two-level directory storage structure, namely a full-map directory and a limited directory, wherein the full-map directory stores data about every block in global memory so that every cache in the system can hold a copy of any data block at the same time; each directory entry of the full-map directory contains N pointers, where N is the number of processors in the system; the limited directory differs from the full-map directory in that each of its directory entries contains a fixed number of pointers;
Second, use a share-count-weighted pseudo-LRU algorithm between the memory layer and the memory Cache layer: assuming each directory entry of the memory layer uses Q pointers, only cache lines whose share count is less than Q are replaced with this algorithm; when the share counts of all cache lines in the memory Cache are greater than Q, the cache line with the smallest share count is evicted from the memory Cache and the corresponding invalidation processing is carried out.
2. The directory Cache coherence method according to claim 1, characterized in that: in the two-level directory storage structure, a directory entry implemented with the full-map method contains a processor bit for each processor and a dirty bit: the former indicates whether the corresponding processor's cache block is present or absent; if the latter is "1" and exactly one processor bit is "1", that processor may write the block; each cache block has two state bits: one indicates whether the block is valid, and the other indicates whether a valid block may be written.
3. The directory Cache coherence method according to claim 2, characterized in that the details of the share-count-weighted pseudo-LRU algorithm are:
1) If the cache line is in the memory Cache, go to step 2; otherwise go to step 5;
2) Read the data from the memory Cache;
3) If the requester is a new sharing node, go to step 4; otherwise go to step 14;
4) Modify the Cache directory entry; go to step 14;
5) Read the data from memory;
6) If the requester is a new sharing node, go to step 7; otherwise go to step 9;
7) If the memory directory entry overflows, go to step 8; otherwise go to step 9;
8) Record the overflow entry;
9) If the Cache has a free directory entry, go to step 10; otherwise go to step 11;
10) Add the data to the Cache and modify the Cache directory entry according to the memory directory entry; go to step 14;
11) If the Cache contains a directory entry whose share count is less than Q, go to step 12; otherwise go to step 13;
12) Among the cache lines whose share count is less than Q, use the LRU algorithm to select one to evict from the Cache; go to step 10;
13) Select the cache line with the smallest share count and perform the corresponding shared-invalidation processing; go to step 10;
14) Done.
CN201410017448.4A 2014-01-15 2014-01-15 A directory Cache coherence method Active CN103729309B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410017448.4A CN103729309B (en) 2014-01-15 2014-01-15 A directory Cache coherence method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410017448.4A CN103729309B (en) 2014-01-15 2014-01-15 A directory Cache coherence method

Publications (2)

Publication Number Publication Date
CN103729309A CN103729309A (en) 2014-04-16
CN103729309B true CN103729309B (en) 2017-06-30

Family

ID=50453390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410017448.4A Active CN103729309B (en) 2014-01-15 2014-01-15 A directory Cache coherence method

Country Status (1)

Country Link
CN (1) CN103729309B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104133785B (en) * 2014-07-30 2017-03-08 浪潮集团有限公司 Buffer consistency implementation method using the dual control storage server of mixing catalogue
CN107003932B (en) 2014-09-29 2020-01-10 华为技术有限公司 Cache directory processing method and directory controller of multi-core processor system
CN104360982B (en) 2014-11-21 2017-11-10 浪潮(北京)电子信息产业有限公司 A kind of host computer system bibliographic structure method and system based on restructural chip technology
CN106095725A (en) * 2016-05-31 2016-11-09 浪潮(北京)电子信息产业有限公司 A kind of concordance catalogue construction method, system and multiprocessor computer system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6016529A (en) * 1997-11-26 2000-01-18 Digital Equipment Corporation Memory allocation technique for maintaining an even distribution of cache page addresses within a data structure
CN102063407A (en) * 2010-12-24 2011-05-18 清华大学 Network sacrifice Cache for multi-core processor and data request method based on Cache
CN102708190A (en) * 2012-05-15 2012-10-03 浪潮电子信息产业股份有限公司 Directory cache method for node control chip in cache coherent non-uniform memory access (CC-NUMA) system
CN103049422A (en) * 2012-12-17 2013-04-17 浪潮电子信息产业股份有限公司 A method for constructing a multi-processor node system with multiple cache coherency domains

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8812786B2 (en) * 2011-10-18 2014-08-19 Advanced Micro Devices, Inc. Dual-granularity state tracking for directory-based cache coherence

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6016529A (en) * 1997-11-26 2000-01-18 Digital Equipment Corporation Memory allocation technique for maintaining an even distribution of cache page addresses within a data structure
CN102063407A (en) * 2010-12-24 2011-05-18 清华大学 Network sacrifice Cache for multi-core processor and data request method based on Cache
CN102708190A (en) * 2012-05-15 2012-10-03 浪潮电子信息产业股份有限公司 Directory cache method for node control chip in cache coherent non-uniform memory access (CC-NUMA) system
CN103049422A (en) * 2012-12-17 2013-04-17 浪潮电子信息产业股份有限公司 A method for constructing a multi-processor node system with multiple cache coherency domains

Also Published As

Publication number Publication date
CN103729309A (en) 2014-04-16

Similar Documents

Publication Publication Date Title
CN105550155B (en) Snoop filter for multiprocessor system and related snoop filtering method
US7552288B2 (en) Selectively inclusive cache architecture
US8694737B2 (en) Persistent memory for processor main memory
US11544093B2 (en) Virtual machine replication and migration
US8055851B2 (en) Line swapping scheme to reduce back invalidations in a snoop filter
US20120102273A1 (en) Memory agent to access memory blade as part of the cache coherency domain
TW201107974A (en) Cache coherent support for flash in a memory hierarchy
KR102453192B1 (en) Cache entry replacement based on availability of entries in other caches
CN109815165A (en) System and method for storing and processing efficient compressed cache lines
US20110320720A1 (en) Cache Line Replacement In A Symmetric Multiprocessing Computer
US20170177482A1 (en) Computing system having multi-level system memory capable of operating in a single level system memory mode
US20180113815A1 (en) Cache entry replacement based on penalty of memory access
CN104166631B (en) Replacement method of Cache line in LLC
US20180095884A1 (en) Mass storage cache in non volatile level of multi-level system memory
US10705977B2 (en) Method of dirty cache line eviction
TW201807586A (en) Memory system and processor system
US20070233966A1 (en) Partial way hint line replacement algorithm for a snoop filter
JP6027562B2 (en) Cache memory system and processor system
CN103729309B (en) A directory Cache coherence method
CN104461932A (en) Directory cache management method for big data application
KR20180122969A (en) A multi processor system and a method for managing data of processor included in the system
KR102754785B1 (en) Rinsing of cache lines from common memory pages to memory
US11526449B2 (en) Limited propagation of unnecessary memory updates
CN106339330B (en) Method and system for cache refreshing
CN111488293B (en) Access method and equipment for data visitor directory in multi-core system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant