
CN1896972A - Method and device for converting virtual address, reading and writing high-speed buffer memory - Google Patents

Method and device for converting virtual address, reading and writing high-speed buffer memory Download PDF

Info

Publication number
CN1896972A
CN1896972A CNA2005100838630A CN200510083863A
Authority
CN
China
Prior art keywords
memory
address
data
random access
virtual address
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2005100838630A
Other languages
Chinese (zh)
Other versions
CN100377117C (en)
Inventor
黄海林
唐志敏
范东睿
许彤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Loongson Technology Corp Ltd
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CNB2005100838630A priority Critical patent/CN100377117C/en
Publication of CN1896972A publication Critical patent/CN1896972A/en
Application granted granted Critical
Publication of CN100377117C publication Critical patent/CN100377117C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a method and a device for translating virtual addresses into physical addresses and for reading and writing a cache memory in a processor. The invention exploits the principle of locality. On the one hand, the virtual address to be translated into a physical address is compared with a virtual address history record; if both belong to the same virtual page, the random access memory part of the translation lookaside buffer is not accessed, reducing the number of accesses to that RAM. On the other hand, if the virtual address further lies in the same cache line as the history record, the random access memory part of the cache is not accessed; the cache line buffer is read or written directly instead. This significantly reduces the number of accesses to the RAMs in both the translation lookaside buffer and the cache, lowering the power consumption of both without degrading processor performance.

Figure 200510083863

Description

Method and device for virtual address translation and cache memory read/write access
Technical field
The present invention relates to a method and a device for translating virtual addresses and for reading and writing a cache memory in a processor.
Background technology
In a virtual storage system, when the processor issues an access request, the virtual address is translated into a physical address by a translation lookaside buffer (TLB), while the on-chip cache memory is accessed at the same time. The physical address produced by the TLB is compared with the tag (TAG) physical address read from the cache; if the two match, the cache hits and the access result is returned. If no entry for the virtual address is found in the TLB, an exception is raised and the processor enters the exception handler. If an entry is found in the TLB but the access misses in the cache, the corresponding physical block must be read from main memory and a cache line replaced.
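As a minimal illustration of this conventional flow (not the patent's method), the sketch below translates through a TLB and checks the cache tag on every request. The page size, line size, and the contents of `tlb` and `cached_lines` are made-up assumptions:

```python
PAGE_BITS = 12                            # assume 4 KB pages
LINE_BITS = 5                             # assume 32-byte cache lines

tlb = {0x00400: 0x12300}                  # virtual page -> physical page (assumed)
cached_lines = {0x12300000 >> LINE_BITS}  # physical line addresses currently cached

def access(vaddr):
    vpage = vaddr >> PAGE_BITS
    if vpage not in tlb:                  # no TLB entry: the processor would trap
        return "tlb-miss"
    paddr = (tlb[vpage] << PAGE_BITS) | (vaddr & ((1 << PAGE_BITS) - 1))
    if paddr >> LINE_BITS in cached_lines:  # tag comparison against the cache
        return ("hit", paddr)             # cache hit: return the access result
    return ("miss", paddr)                # cache miss: fetch the block from main memory
```

Every call performs both a TLB lookup and a tag comparison; the power cost of doing this on every single access is exactly what the invention sets out to avoid.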
To speed up virtual-to-physical address translation, a TLB is placed in the memory management unit of the processor. The TLB stores an entry for each logical page address and physical page address and establishes the mapping between them, so that the mapping from virtual address to physical address can be completed inside the processor, accelerating the translation.
Because the speed gap between the processor and memory keeps growing, memory access speed has become the bottleneck that limits further improvement of processor performance. To bridge the huge gap between the processor and main memory, a cache memory is introduced between them; the cache holds the instructions or data the processor accesses most often, thereby accelerating the processor's memory accesses.
A TLB usually consists of two parts. One part is a content-addressable memory (CAM) that stores the virtual page addresses and compares them with the virtual address of the access; the other part is a random access memory (RAM) that stores the physical address page table entries and is searched by index. When a virtual address accesses the TLB, the CAM is first searched in parallel for a virtual page entry matching the current virtual address; once found, the RAM is accessed with the index of the matching position to obtain the physical page table entry corresponding to the virtual address.
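The two-part structure described above can be sketched as follows; the entries in `cam` and `ram` are illustrative assumptions, and a sequential loop stands in for the hardware's parallel comparison:

```python
cam = [0x00400, 0x00401, 0x7FFF0]   # virtual page numbers (CAM part)
ram = [0x12300, 0x12301, 0x00AB0]   # physical page entries (RAM part, same index)

def tlb_lookup(vpage):
    # hardware compares all CAM entries in parallel; this loop is a stand-in
    for index, entry in enumerate(cam):
        if entry == vpage:
            return ram[index]       # the RAM is accessed with the matching index
    return None                     # TLB miss: the processor takes an exception
```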
A cache memory usually consists of two RAM parts working concurrently: one holds the recently and frequently accessed physical blocks of instructions or data and supplies them to the processor; the other holds the physical addresses of the corresponding physical blocks and is used to decide whether the processor's access request hits in the cache.
Thus, when the processor issues a memory access request, the TLB produces the physical address while the cache reads out the tag physical address and the data; on a hit the access result can be returned. Owing to locality, the cache achieves a very high hit rate and can therefore greatly improve processor performance. See reference 1: "CACHE Structure and Design", Qi Jiayue, Microcomputer and Applications, No. 4, 1995; and reference 2: "CACHE Technology and Its Implementation", Wang Shikuan, Journal of Guilin Institute of Electronic Technology, Vol. 15, Nos. 1-2, June 1995.
However, in prior-art processors every memory access request must access the RAMs in both the TLB and the cache, which consumes considerable power. The present invention exploits the principle of locality to significantly reduce the number of accesses to the RAMs in the TLB and the cache, effectively lowering the power consumption of the memory access components and of the whole processor.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art by providing a method and a device for virtual address translation and cache read/write access that reduce the power consumption of the TLB and cache circuits without affecting processor performance.
To achieve the above object, the present invention adopts the following technical scheme:
The method for virtual address translation and cache read/write access comprises the following steps:
a) The instruction/data TLB compares the virtual address of this instruction fetch/data access with the instruction-fetch/data virtual address history record and judges whether they belong to the same page table or to the directly mapped space. If yes, the RAM of the instruction/data TLB is not accessed, and the next step b) is performed; if not, step d) is performed.
b) It is further judged whether the virtual address of the instruction fetch/data access and the instruction-fetch/data virtual address history record lie in the same cache line. If yes, the RAM of the cache is not accessed; the cache line buffer is read or written directly, and step e) is performed. If not, the next step is performed.
c) It is further judged whether the instruction-fetch/data-access virtual address lies in the address space that goes through the cache. If yes, the cache is read, the cache line buffer is updated with the value read out, and step e) is performed; if not, main memory is accessed directly, the cache line buffer is not updated, and step e) is then performed.
d) Virtual-to-physical address translation and cache access are performed in the normal way, and the instruction/data read from the cache is at the same time written into the instruction/data cache line buffer; then the next step e) is performed.
e) The instruction fetch result/data access result is returned.
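The steps above can be sketched in software as follows. This is an illustrative model, not the patent's circuit: the virtual address of the last successful access plays the history register, the last cache line read plays the line buffer, and RAM "reads" are only counted when they cannot be skipped. The page size (4 KB), line size (32 B), the `uncached` flag, and the treatment of uncached accesses (history left untouched) are assumptions for the sketch:

```python
PAGE_BITS, LINE_BITS = 12, 5

class HistoryAccess:
    def __init__(self):
        self.last_vaddr = None      # virtual address history register
        self.line_buffer = None     # copy of the last cache line read
        self.tlb_ram_reads = 0      # accesses to the TLB RAM
        self.cache_ram_reads = 0    # accesses to the cache RAMs

    def read_cache_line(self, vaddr):
        self.cache_ram_reads += 1
        return vaddr >> LINE_BITS   # stand-in for the real line contents

    def access(self, vaddr, uncached=False):
        same_page = (self.last_vaddr is not None and
                     vaddr >> PAGE_BITS == self.last_vaddr >> PAGE_BITS)
        if same_page and vaddr >> LINE_BITS == self.last_vaddr >> LINE_BITS:
            result = self.line_buffer             # steps a)+b): both RAMs skipped
        elif same_page and not uncached:
            result = self.read_cache_line(vaddr)  # step c): only the cache RAM read
            self.line_buffer = result
        elif same_page:                           # step c), uncached space:
            return vaddr >> LINE_BITS             # main memory; buffer untouched
        else:                                     # step d): normal translation
            self.tlb_ram_reads += 1
            result = self.read_cache_line(vaddr)
            self.line_buffer = result
        self.last_vaddr = vaddr
        return result                             # step e): return the result
```

Consecutive accesses inside one cache line touch neither RAM; accesses inside one page but a new line touch only the cache RAM.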
In the above scheme, in step d) the normal way of address translation and cache access mainly comprises the following operations: the instruction/data TLB queries its RAM with the instruction-fetch/data-access virtual address and reads out the corresponding physical address; at the same time the instruction/data cache accesses its RAMs with the low-order bits of the virtual address and reads out the tag physical address and the physical block data at that position.
The device for virtual address translation and cache read/write access, shown in the dashed box of Fig. 2, comprises:
a content-addressable memory 21 for storing virtual addresses and producing an index by hashing the virtual address;
a first random access memory 22, the index produced by the content-addressable memory 21 serving as the address of the first RAM 22 to read out the page table entry corresponding to the virtual address;
a second random access memory 27 for storing the tag physical addresses;
a third random access memory 28 for storing the physical block contents;
characterized in that the device further comprises:
a third decision circuit 23 connected to the first RAM 22;
a virtual address history register 24 connected in sequence with a first decision circuit 25 and the first RAM 22; the address history in the virtual address history register 24 is compared with the virtual address in the first decision circuit 25, and the resulting signal serves as the enable signal of the first RAM 22;
a second decision circuit 26 connected respectively to the virtual address history register 24, the second RAM 27, the third RAM 28 and a three-to-one selection circuit 210; the address history in the virtual address history register 24 is compared with the virtual address in the second decision circuit 26, and the resulting signal serves as the enable signal of the second RAM 27 and the third RAM 28; the low-order bits of the virtual address serve as the address of the second RAM 27 and the third RAM 28 to read out the tag physical address and the value of the physical block at that address;
the second RAM 27 and the first RAM 22 are both connected to the third decision circuit 23, which produces the signal indicating whether the instruction fetch operation or data access operation hits;
the third RAM 28 is connected respectively to a cache line buffer 29 and the three-to-one selection circuit 210;
the second decision circuit 26, the third decision circuit 23, the cache line buffer 29 and the main memory space 211 are each connected to the three-to-one selection circuit 210, which selects the value read from the third RAM 28, the value of the cache line buffer 29, or the result of directly accessing the processor main memory 211 as the final result.
With the above technical scheme, the device can perform instruction fetch operations or data access operations in combination with the aforesaid method of virtual address translation and cache read/write access.
Compared with the prior art, the beneficial effects of the present invention are:
The present invention exploits the principle of locality. On the one hand, the virtual address to be translated into a physical address is compared with the virtual address history record; if both belong to the same virtual page, the RAM part of the TLB is not accessed, reducing the number of accesses to the TLB's RAM. On the other hand, if the virtual address further lies in the same cache line as the history record, the RAM part of the cache is not accessed; the cache line buffer is read or written directly instead. By comparing the instruction-fetch/data virtual address with the corresponding history record in this way, the number of accesses to the RAMs in both the TLB and the cache can be significantly reduced, lowering the power consumption of both without degrading processor performance.
Description of drawings
Fig. 1 is a flow chart of the method of the present invention for virtual address translation and cache read/write access;
Fig. 2 is a circuit block diagram of the device of the present invention for virtual address translation and cache read/write access;
Fig. 3 is a block diagram of the device of the present invention for the instruction fetch operation;
Fig. 4 is a block diagram of the device of the present invention for the data access operation.
Embodiment
The present invention is described in further detail below with reference to the drawings and specific embodiments.
The object of the present invention is to improve the process of translating virtual addresses into physical addresses and of reading and writing the cache memory, so that the power consumption of the TLB circuit and of the cache circuit can be reduced simultaneously without affecting processor performance.
As shown in Fig. 1, the method for virtual-to-physical address translation and cache read/write access comprises the following steps:
Step 1: the instruction/data TLB compares the virtual address of this instruction fetch/data access with the instruction-fetch/data-access virtual address history record and judges whether they belong to the same page table or to the directly mapped space. If yes, step 2 is performed; if not, step 8.
Step 2: the access enable signal is controlled so that the RAM of the instruction/data TLB is no longer accessed, and the next step 3 is performed.
Step 3: it is further judged whether the virtual address of the instruction fetch/data access and the instruction-fetch/data virtual address history record lie in the same cache line. If yes, step 4 is performed; if not, step 5.
Step 4: the access enable signal is controlled so that the RAM of the instruction/data cache is no longer accessed; the cache line buffer is read or written directly, and step 9 is performed.
Step 5: it is further judged whether the instruction-fetch/data-access virtual address lies in the address space that goes through the cache. If yes, step 6 is performed; if not, step 7.
Step 6: the cache is read, the cache line buffer is updated with the value read out, and step 9 is performed.
Step 7: main memory is accessed directly, the cache line buffer is not updated, and step 9 is performed.
Step 8: the instruction/data TLB accesses its RAM with the virtual address and reads out the corresponding physical address; at the same time the instruction/data cache accesses its RAMs with the low-order bits of the virtual address and reads out the tag physical address and the physical block content. The virtual address of this instruction fetch/data access is saved into the instruction-fetch/data virtual address history register, the content of the cache line read out is saved into the instruction/data cache line buffer, and step 9 is performed.
Step 9: the instruction fetch result/data access result is returned.
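A back-of-the-envelope illustration of the saving steps 2 and 4 aim at (assumed sizes, not measured data): for a purely sequential instruction stream, the TLB RAM ends up read once per page crossed and the cache RAMs once per line crossed, instead of once per fetch.

```python
PAGE_SIZE, LINE_SIZE, INSN_SIZE = 4096, 32, 4   # assumed: 4 KB pages, 32 B lines
fetches = 1024                                  # 4 KB of straight-line code

conventional_reads = fetches                    # both RAMs read on every fetch
bytes_fetched = fetches * INSN_SIZE
tlb_ram_reads = bytes_fetched // PAGE_SIZE      # step 2 skips the rest
cache_ram_reads = bytes_fetched // LINE_SIZE    # step 4 skips the rest

print(conventional_reads, tlb_ram_reads, cache_ram_reads)  # 1024 1 128
```

Under these assumptions the TLB RAM is read once instead of 1024 times, and the cache RAMs 128 times instead of 1024; real workloads will fall short of this ideal whenever locality breaks.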
The device of the embodiment of the invention corresponding to the method of Fig. 1 is described in detail below with reference to Figs. 3 and 4.
As shown in the dashed box of Fig. 3, for the instruction fetch operation the device for virtual address translation and cache read/write access comprises: an instruction-fetch content-addressable memory 31, an instruction-fetch first RAM 32, an instruction-fetch third decision circuit 33, an instruction-fetch virtual address history register 34, an instruction-fetch first decision circuit 35, an instruction-fetch second decision circuit 36, an instruction-fetch second RAM 37, an instruction-fetch third RAM 38, an instruction-fetch cache line buffer 39 and an instruction-fetch three-to-one selection circuit 310.
The instruction-fetch content-addressable memory 31 stores the page table entry virtual addresses, compares them with the current instruction-fetch virtual address and provides the index. The instruction-fetch first RAM 32 stores the page table entries and outputs the physical address. The instruction-fetch third decision circuit 33 compares the page table physical address with the tag physical address and produces the hit/miss signal. The instruction-fetch virtual address history register 34 holds the instruction-fetch virtual address of the last successful access. The instruction-fetch first decision circuit 35 judges whether this instruction-fetch virtual address and the history record lie in the same page, or whether the address is in the directly mapped space. The instruction-fetch second decision circuit 36 judges whether this instruction-fetch virtual address and the history record lie in the same cache line, or whether the address is in the space that bypasses the cache. The instruction-fetch second RAM 37 stores the tag physical addresses of the instruction physical blocks, and the instruction-fetch third RAM 38 stores the instructions of the instruction physical blocks. The instruction-fetch cache line buffer 39 holds the instruction content of the instruction cache line corresponding to the history record. The instruction-fetch three-to-one selection circuit 310 selects the instruction read from the instruction cache, the instruction read from the cache line buffer, or the result of directly accessing main memory as the final instruction fetch output. The processor main memory 211 stores programs and data and can be accessed in direct access mode.
As shown in the dashed box of Fig. 4, for the data access operation the device for virtual address translation and cache read/write access comprises: a data content-addressable memory 41, a data first RAM 42, a data third decision circuit 43, a data virtual address history register 44, a data first decision circuit 45, a data second decision circuit 46, a data second RAM 47, a data third RAM 48, a data cache line buffer 49 and a data three-to-one selection circuit 410.
The data content-addressable memory 41 stores the page table entry virtual addresses, compares them with the current data access virtual address and provides the index. The data first RAM 42 stores the page table entries and outputs the physical address. The data third decision circuit 43 compares the page table physical address with the tag physical address and produces the hit/miss signal. The data virtual address history register 44 holds the data access virtual address of the last successful access. The data first decision circuit 45 judges whether this data access virtual address and the history record lie in the same page, or whether the address is in the directly mapped space. The data second decision circuit 46 judges whether this data access virtual address and the history record lie in the same cache line, or whether the address is in the space that bypasses the cache. The data second RAM 47 stores the tag physical addresses of the data physical blocks, and the data third RAM 48 stores the data of the data physical blocks. The data cache line buffer 49 holds the data content of the data cache line corresponding to the history record. The data three-to-one selection circuit 410 selects the data read from the data cache, the data read from the data cache line buffer, or the result of directly accessing main memory as the final data access result. The processor main memory 411 stores programs and data and can be accessed in direct access mode.
The instruction-fetch third decision circuit 33 is the instruction cache hit comparator: it compares the page table physical address produced by the instruction TLB with the tag physical address produced by the instruction cache and, from the comparison, produces the instruction cache hit/miss signal. The data third decision circuit 43 is the data cache hit comparator: it compares the page table physical address produced by the data TLB with the tag physical address produced by the data cache and, from the comparison, produces the data cache hit/miss signal.
The instruction-fetch cache line buffer 39 holds the instruction values of the instruction physical block corresponding to the instruction virtual address history record; the data cache line buffer 49 holds the data values of the data physical block corresponding to the data virtual address history record.
The instruction-fetch three-to-one selection circuit 310 selects the instruction read from the instruction-fetch third RAM 38, the instruction read from the instruction cache line buffer 39, or the result of directly accessing main memory 211 as the final instruction fetch result; the data three-to-one selection circuit 410 selects the data read from the data third RAM 48, the data read from the data cache line buffer 49, or the result of directly accessing main memory 411 as the final data access result.
The instruction-fetch first RAM 32 stores the page table entries for instruction-fetch address translation, the instruction-fetch second RAM 37 stores the tag physical addresses of the instruction physical blocks, and the instruction-fetch third RAM 38 stores the instruction values of the instruction physical blocks; the data first RAM 42 stores the page table entries for data access address translation, the data second RAM 47 stores the tag physical addresses of the data physical blocks, and the data third RAM 48 stores the data values of the data physical blocks.
In the present embodiment, the instruction-fetch content-addressable memory 31 and the data access content-addressable memory 41 may be separate or shared; likewise the instruction-fetch first RAM 32 and the data first RAM 42 may be separate or shared. Whether they are separate or shared does not affect the operation or implementation of the device.
The circuits used in this device can be obtained from the standard cell libraries openly provided by chip foundries (such as SMIC or TSMC).
As can be seen from the above, the advantage of the present invention is that, by comparing the instruction-fetch/data-access virtual address with the corresponding virtual address history record, the number of accesses to the RAMs in the instruction/data TLB and the instruction/data cache can be markedly reduced at the same time, effectively lowering the power consumption of both the TLB and the cache without adversely affecting processor performance.
Finally, it should be noted that the above embodiments merely illustrate, and do not restrict, the technical scheme of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that modifications or equivalent substitutions may still be made to the present invention, and any modification or partial substitution that does not depart from the spirit and scope of the present invention shall be covered by the scope of the claims of the present invention.

Claims (4)

1、用于虚实地址变换及读写高速缓冲存储器的方法,包括以下步骤:1. A method for converting virtual and real addresses and reading and writing a cache memory, comprising the following steps: a)将指令/数据翻译后援缓冲器将此次取指/数据访问的虚拟地址与取指/数据虚拟地址历史记录相比较,判断是否属相同页表或可直接映射空间?如果是,则不再访问指令/数据翻译后援缓冲器的随机存储器,并执行下一步骤b);如果否,执行步骤d);a) Compare the instruction/data translation back-up buffer with the virtual address of the fetch/data access and the history of the fetch/data virtual address to determine whether they belong to the same page table or directly mappable space? If yes, then no longer access the RAM of the instruction/data translation lookaside buffer, and perform the next step b); if not, perform step d); b)进一步判断取指/数据访问的虚拟地址与取指/数据虚拟地址历史记录是否在同一个高速缓冲存储器行中?如果是,则不再访问高速缓冲存储器的随机存储器,直接对高速缓冲存储器行缓冲区进行读写操作,并执行步骤e);如果否,执行下一步;b) Further judge whether the virtual address of the instruction fetch/data access and the history record of the instruction fetch/data virtual address are in the same cache memory line? If yes, then no longer access the random access memory of the cache memory, directly read and write operations to the line buffer of the cache memory, and perform step e); if no, perform the next step; c)进一步判断取指/数据访问虚拟地址是否是经过高速缓冲存储器的地址空间?如果是,则读取高速缓冲存储器并用读出值更新高速缓冲存储器行缓冲区,并执行步骤e);如果否,则直接访问主存,不更新高速缓冲存储器行缓冲区,然后执行步骤e);c) Further judge whether the instruction fetch/data access virtual address passes through the address space of the cache memory? 
If yes, read the cache memory and update the cache memory line buffer with the read value, and perform step e); if no, directly access the main memory, do not update the cache memory line buffer, and then perform step e) ; d)以普通方式进行虚实地址转换并读写高速缓冲存储器,同时将从高速缓冲存储器中读出的指令/数据更新到指令/数据高速缓冲存储器行缓冲区中;并执行下一步骤e);d) perform virtual-real address translation and read and write the cache memory in a normal manner, and simultaneously update the instruction/data read from the cache memory into the instruction/data cache memory line buffer; and perform the next step e); e)返回取指结果/数据访问结果。e) Return instruction fetch result/data access result. 2、根据权利要求1所述的用于虚实地址变换及读写高速缓冲存储器的方法,其特征在于,在所述步骤d)中,所述虚实地址转换以及读写高速缓冲存储器的普通方式主要包括如下操作:指令/数据翻译后援缓冲器根据取指/数据访问虚拟地址查询随机存储器,读出相应的物理地址;同时指令/数据高速缓冲存储器根据取指/数据访问虚拟地址的低位地址访问相应的随机存储器,读出相应位置的标签物理地址和物理块数据值。2. The method for converting virtual and real addresses and reading and writing cache memory according to claim 1, characterized in that, in said step d), the normal mode of said virtual and real address conversion and reading and writing cache memory is mainly Including the following operations: the instruction/data translation backup buffer queries the RAM according to the instruction fetch/data access virtual address, and reads out the corresponding physical address; at the same time, the instruction/data cache memory accesses the corresponding Random access memory, read out the tag physical address and physical block data value at the corresponding location. 3、用于虚实地址变换与读写高速缓冲存储器的装置,包括:3. 
3. A device for virtual-to-physical address translation and cache memory read/write, comprising: an associative memory (21) for storing virtual addresses and generating an index by applying a hash transformation to the virtual address; a first random access memory (22), wherein the index generated by the associative memory (21) serves as the address of the first random access memory (22) for reading the page table entry corresponding to the virtual address; a second random access memory (27) for storing tag physical addresses; and a third random access memory (28) for storing physical block contents; characterized in that the device further comprises: a third judging circuit (23) connected to the first random access memory (22); a virtual address history register (24) connected in sequence with a first judging circuit (25) and the first random access memory (22), wherein the address history in the virtual address history register (24) is compared with the virtual address in the first judging circuit (25), and the resulting signal serves as the enable signal of the first random access memory (22); and a second judging circuit (26) connected respectively to the virtual address history register (24), the second random access memory (27), the third random access memory (28), and a three-to-one selection circuit (210), wherein the address history in the virtual address history register (24) is compared with the virtual address in the second judging circuit (26), the resulting signal serves as the enable signal of the second random access memory (27) and the third random access memory (28), and the low-order bits of the virtual address serve as the address of the second random access memory (27) and the third random access memory (28) for reading the tag physical address and the data of the corresponding physical block; wherein both the second random access memory (27) and the first random access memory (22) are connected to the third judging circuit (23); the third random access memory (28) is connected respectively to a cache line buffer (29) and the three-to-one selection circuit (210); and the second judging circuit (26), the third judging circuit (23), the cache line buffer (29), and the main memory (211) are each connected to the three-to-one selection circuit (210), which selects the value read from the third random access memory (28), the value of the cache line buffer (29), or the result of directly accessing the processor main memory (211) as the final result.

4. The device for virtual-to-physical address translation and cache memory read/write according to claim 3, characterized in that, in combination with the method for virtual-to-physical address translation and cache memory read/write according to claim 1, the device can perform instruction fetch operations or data access operations.
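The claimed structure exploits locality: if the incoming virtual address falls in the same virtual page as the recorded history, the TLB RAM (the first random access memory) is not enabled; if it further falls in the same cache line, the cache data RAM (the third random access memory) is also left idle and the cache line buffer is read instead. The following is a minimal behavioral sketch of that filtering idea, not the patented circuit; all names (`PAGE_BITS`, `LINE_BITS`, `tlb_ram_reads`, `HistoryFilteredLookup`) are illustrative assumptions, and the counters stand in for the RAM enable signals whose suppression saves power.

```python
PAGE_BITS = 12   # assume 4 KB pages
LINE_BITS = 5    # assume 32-byte cache lines

class HistoryFilteredLookup:
    """Behavioral model: skip TLB/cache RAM reads on history-register hits."""

    def __init__(self, page_table):
        self.page_table = page_table   # VPN -> PFN, stands in for the TLB RAM (22)
        self.last_vpn = None           # virtual address history register (24)
        self.last_pfn = None
        self.last_line = None          # which cache line the line buffer holds
        self.line_buffer = {}          # cache line buffer (29): offset -> data
        self.tlb_ram_reads = 0         # enables of the first RAM (22)
        self.cache_ram_reads = 0       # enables of the third RAM (28)

    def load(self, va, memory):
        vpn, line = va >> PAGE_BITS, va >> LINE_BITS
        if vpn != self.last_vpn:       # first judging circuit (25): page changed
            self.tlb_ram_reads += 1    # enable and read the TLB RAM
            self.last_vpn, self.last_pfn = vpn, self.page_table[vpn]
        pa = (self.last_pfn << PAGE_BITS) | (va & ((1 << PAGE_BITS) - 1))
        if line != self.last_line:     # second judging circuit (26): line changed
            self.cache_ram_reads += 1  # enable and read the cache data RAM
            base = pa & ~((1 << LINE_BITS) - 1)
            self.line_buffer = {off: memory.get(base + off, 0)
                                for off in range(1 << LINE_BITS)}
            self.last_line = line
        # on a line hit, the result comes from the line buffer alone
        return self.line_buffer[pa & ((1 << LINE_BITS) - 1)]
```

Under this sketch, a run of sequential accesses inside one cache line enables each RAM only once, which mirrors the power-saving argument of the abstract: the history comparison replaces most RAM activations without changing the returned data.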
CNB2005100838630A 2005-07-14 2005-07-14 Method and device for converting virtual and real addresses and reading and writing cache memory Active CN100377117C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2005100838630A CN100377117C (en) 2005-07-14 2005-07-14 Method and device for converting virtual and real addresses and reading and writing cache memory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2005100838630A CN100377117C (en) 2005-07-14 2005-07-14 Method and device for converting virtual and real addresses and reading and writing cache memory

Publications (2)

Publication Number Publication Date
CN1896972A true CN1896972A (en) 2007-01-17
CN100377117C CN100377117C (en) 2008-03-26

Family

ID=37609501

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005100838630A Active CN100377117C (en) 2005-07-14 2005-07-14 Method and device for converting virtual and real addresses and reading and writing cache memory

Country Status (1)

Country Link
CN (1) CN100377117C (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101246452B (en) * 2007-02-12 2010-12-15 国际商业机器公司 Method and apparatus for fast performing MMU analog, and total system simulator
CN102054192A (en) * 2009-10-27 2011-05-11 中兴通讯股份有限公司 Information storage method and device of electronic tag
CN101911025B (en) * 2008-01-11 2012-11-07 国际商业机器公司 Dynamic address translation with fetch protection
CN102884506A (en) * 2010-05-11 2013-01-16 高通股份有限公司 Configuring surrogate memory accessing agents using instructions for translating and storing data values
CN105302744A (en) * 2014-06-26 2016-02-03 Hgst荷兰公司 Invalidation data area for cache
CN107195159A (en) * 2017-07-13 2017-09-22 蚌埠依爱消防电子有限责任公司 A kind of method for inspecting of fire protection alarm system and fire protection alarm system
CN109032963A (en) * 2017-06-12 2018-12-18 Arm有限公司 Access control
CN112416436A (en) * 2020-12-02 2021-02-26 海光信息技术股份有限公司 Information processing method, information processing apparatus, and electronic device
CN117331854A (en) * 2023-10-11 2024-01-02 上海合芯数字科技有限公司 Cache processing method, device, electronic equipment and medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GR20130100707A (en) 2013-12-23 2015-07-31 Arm Limited, Address translation in a data processing apparatus

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002312237A (en) * 2001-04-11 2002-10-25 Toshiba Corp Processor
JP4085328B2 (en) * 2003-04-11 2008-05-14 ソニー株式会社 Information processing apparatus and method, recording medium, program, and imaging apparatus
CN1280735C (en) * 2003-12-04 2006-10-18 中国科学院计算技术研究所 Initiator triggered remote memory access virtual-physical address conversion method

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101246452B (en) * 2007-02-12 2010-12-15 国际商业机器公司 Method and apparatus for fast performing MMU analog, and total system simulator
US8301864B2 (en) 2007-02-12 2012-10-30 International Business Machines Corporation Apparatus and method for executing rapid memory management unit emulation and full-system simulator
CN101911025B (en) * 2008-01-11 2012-11-07 国际商业机器公司 Dynamic address translation with fetch protection
CN102054192A (en) * 2009-10-27 2011-05-11 中兴通讯股份有限公司 Information storage method and device of electronic tag
CN102054192B (en) * 2009-10-27 2016-01-20 中兴通讯股份有限公司 A kind of information storage means of electronic tag and device
CN102884506A (en) * 2010-05-11 2013-01-16 高通股份有限公司 Configuring surrogate memory accessing agents using instructions for translating and storing data values
US8924685B2 (en) 2010-05-11 2014-12-30 Qualcomm Incorporated Configuring surrogate memory accessing agents using non-priviledged processes
CN102884506B (en) * 2010-05-11 2015-04-15 高通股份有限公司 Configuring surrogate memory accessing agents using instructions for translating and storing data values
US11372771B2 (en) 2014-06-26 2022-06-28 Western Digital Technologies, Inc. Invalidation data area for cache
CN105302744B (en) * 2014-06-26 2019-01-01 Hgst荷兰公司 The invalid data area of Cache
US10445242B2 (en) 2014-06-26 2019-10-15 Western Digital Technologies, Inc. Invalidation data area for cache
US10810128B2 (en) 2014-06-26 2020-10-20 Western Digital Technologies, Inc. Invalidation data area for cache
CN105302744A (en) * 2014-06-26 2016-02-03 Hgst荷兰公司 Invalidation data area for cache
CN109032963A (en) * 2017-06-12 2018-12-18 Arm有限公司 Access control
CN109032963B (en) * 2017-06-12 2023-09-05 Arm有限公司 Access control
CN107195159A (en) * 2017-07-13 2017-09-22 蚌埠依爱消防电子有限责任公司 A kind of method for inspecting of fire protection alarm system and fire protection alarm system
CN112416436A (en) * 2020-12-02 2021-02-26 海光信息技术股份有限公司 Information processing method, information processing apparatus, and electronic device
CN112416436B (en) * 2020-12-02 2023-05-09 海光信息技术股份有限公司 Information processing method, information processing device and electronic equipment
CN117331854A (en) * 2023-10-11 2024-01-02 上海合芯数字科技有限公司 Cache processing method, device, electronic equipment and medium
CN117331854B (en) * 2023-10-11 2024-04-30 上海合芯数字科技有限公司 Cache processing method, device, electronic equipment and medium

Also Published As

Publication number Publication date
CN100377117C (en) 2008-03-26

Similar Documents

Publication Publication Date Title
US20210406170A1 (en) Flash-Based Coprocessor
CN1158607C (en) Techniques for improving memory access in virtual memory system
US8984254B2 (en) Techniques for utilizing translation lookaside buffer entry numbers to improve processor performance
US20170235681A1 (en) Memory system and control method of the same
US6965970B2 (en) List based method and apparatus for selective and rapid cache flushes
US8185692B2 (en) Unified cache structure that facilitates accessing translation table entries
CN104166634A (en) Management method of mapping table caches in solid-state disk system
US8583874B2 (en) Method and apparatus for caching prefetched data
CN1955948A (en) Digital data processing device and method for managing cache data
CN1369808A (en) Tranfer translation sideviewing buffer for storing memory type data
CN1509436A (en) Method and system for speculatively invalidating a cache line in a cache
CN101510176B (en) Control method of general-purpose operating system for accessing CPU two stage caching
CN1093961C (en) Enhanced memory performace of processor by elimination of outdated lines in second-level cathe
CN1831824A (en) Cache database data organization method
CN107589908A (en) The merging method that non-alignment updates the data in a kind of caching system based on solid-state disk
CN1896972A (en) Method and device for converting virtual address, reading and writing high-speed buffer memory
CN104504076A (en) Method for implementing distributed caching with high concurrency and high space utilization rate
CN1278241C (en) Memory page management device and method for tracking memory access
Carniel et al. A generic and efficient framework for flash-aware spatial indexing
US6990551B2 (en) System and method for employing a process identifier to minimize aliasing in a linear-addressed cache
CN118535089B (en) A hybrid storage read cache design method based on elastic memory
JP2008512758A (en) Virtual address cache and method for sharing data stored in virtual address cache
Tan et al. APMigration: Improving performance of hybrid memory performance via an adaptive page migration method
CN1269043C (en) Remapping method of memory address
KR101102260B1 (en) Method for sharing data using virtual address cache and unique task identifier

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Assignee: Beijing Loongson Zhongke Technology Service Center Co., Ltd.

Assignor: Institute of Computing Technology, Chinese Academy of Sciences

Contract fulfillment period: 2009.12.16 to 2028.12.31

Contract record no.: 2010990000062

Denomination of invention: Method and device for converting virtual address, reading and writing high-speed buffer memory

Granted publication date: 20080326

License type: exclusive license

Record date: 20100128

LIC Patent licence contract for exploitation submitted for record

Free format text: EXCLUSIVE LICENSE; TIME LIMIT OF IMPLEMENTING CONTACT: 2009.12.16 TO 2028.12.31; CHANGE OF CONTRACT

Name of requester: BEIJING LOONGSON TECHNOLOGY SERVICE CENTER CO., LT

Effective date: 20100128

EC01 Cancellation of recordation of patent licensing contract

Assignee: Longxin Zhongke Technology Co., Ltd.

Assignor: Institute of Computing Technology, Chinese Academy of Sciences

Contract record no.: 2010990000062

Date of cancellation: 20141231

EM01 Change of recordation of patent licensing contract

Change date: 20141231

Contract record no.: 2010990000062

Assignee after: Longxin Zhongke Technology Co., Ltd.

Assignee before: Beijing Loongson Zhongke Technology Service Center Co., Ltd.

LICC Enforcement, change and cancellation of record of contracts on the licence for exploitation of a patent or utility model
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20070117

Assignee: Longxin Zhongke Technology Co., Ltd.

Assignor: Institute of Computing Technology, Chinese Academy of Sciences

Contract record no.: 2015990000066

Denomination of invention: Method and device for converting virtual address, reading and writing high-speed buffer memory

Granted publication date: 20080326

License type: Common License

Record date: 20150211

TR01 Transfer of patent right

Effective date of registration: 20200820

Address after: 100095, Beijing, Zhongguancun Haidian District environmental science and technology demonstration park, Liuzhou Industrial Park, No. 2 building

Patentee after: LOONGSON TECHNOLOGY Corp.,Ltd.

Address before: 100080 Haidian District, Zhongguancun Academy of Sciences, South Road, No. 6, No.

Patentee before: Institute of Computing Technology, Chinese Academy of Sciences

TR01 Transfer of patent right
EC01 Cancellation of recordation of patent licensing contract

Assignee: LOONGSON TECHNOLOGY Corp.,Ltd.

Assignor: Institute of Computing Technology, Chinese Academy of Sciences

Contract record no.: 2015990000066

Date of cancellation: 20200928

EC01 Cancellation of recordation of patent licensing contract
CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 100095 Building 2, Longxin Industrial Park, Zhongguancun environmental protection technology demonstration park, Haidian District, Beijing

Patentee after: Loongson Zhongke Technology Co.,Ltd.

Address before: 100095 Building 2, Longxin Industrial Park, Zhongguancun environmental protection technology demonstration park, Haidian District, Beijing

Patentee before: LOONGSON TECHNOLOGY Corp.,Ltd.