
CN101558391A - Configurable cache for a microprocessor - Google Patents


Info

Publication number
CN101558391A
Authority
CN
China
Prior art keywords
cache
instruction
cache line
address
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2007800461129A
Other languages
Chinese (zh)
Other versions
CN101558391B (en)
Inventor
Rodney J. Pesavento
Gregg D. Lahti
Joseph W. Triece
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microchip Technology Inc
Original Assignee
Microchip Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 11/928,242 (published as US9208095B2)
Application filed by Microchip Technology Inc
Publication of CN101558391A
Application granted
Publication of CN101558391B
Legal status: Active

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A cache module for a central processing unit has a cache control unit with an interface to a memory, and a cache memory coupled with the control unit. The cache memory has a plurality of cache lines; at least one of the cache lines has an address tag bit field and an associated storage area for storing instructions or data. The address tag bit field is readable and writable. Upon detecting that an address has been written to the address tag bit field, the cache control unit initiates a preload function in which instructions or data from the memory are loaded from that address into the at least one cache line.

Description

Configurable cache for a microprocessor
Cross-reference to related applications
This application claims priority to U.S. Provisional Application No. 60/870,188, filed December 15, 2006, entitled "CONFIGURABLE PICOCACHE WITH PREFETCH AND LINKED BRANCH TRAIL BUFFERS; AND FLASH PREFETCH BUFFER", and U.S. Provisional Application No. 60/870,622, filed December 19, 2006, entitled "LINKED BRANCH HISTORY BUFFER"; both provisional applications are incorporated herein in their entirety.
Technical field
The present invention relates to a configurable cache for a microprocessor or microcontroller.
Background
A bottleneck of pipelined microprocessor architectures is the high access time of the memory system. A typical approach to this problem uses a large cache and transfers multiple data words per clock after the initial long memory access. Small microcontroller designs are limited in the amount of cache memory that can fit on the chip and cannot support a large, high-latency but high-throughput wide memory. Hence, there is a need for a configurable cache for a microcontroller or microprocessor.
Summary of the invention
According to one embodiment, a cache module for a central processing unit may comprise a cache control unit comprising an interface for a memory, and a cache memory coupled with the control unit, wherein the cache memory comprises a plurality of cache lines, at least one of which comprises an address tag bit field and an associated storage area for storing instructions or data, wherein the address tag bit field is readable and writable, and wherein the cache control unit is operable, upon detecting that an address has been written to the address tag bit field, to initiate a preload function in which instructions or data from the memory are loaded from that address into the at least one cache line.
According to a further embodiment, the cache module may also comprise an index register for accessing a cache line through at least one associated register. According to a further embodiment, the cache module may also comprise a register that maps the address tag field for read and write access. According to a further embodiment, the at least one cache line may further comprise a lock bit for locking the cache line against being overwritten. According to a further embodiment, the at least one cache line may further comprise at least one control bit field, wherein the control bit field is coupled with the address tag bit field to mask a predefined number of bits of the address tag bit field. According to a further embodiment, at least one other cache line may comprise at least one branch trail bit for automatically locking that cache line, wherein, with the branch trail bit set, the lock bit is set automatically when a predefined instruction in the associated storage area has been issued. According to a further embodiment, each cache line may further comprise a validity control bit indicating the validity of the associated cache line. According to a further embodiment, each cache line may further comprise a type control bit indicating whether the cache line serves as an instruction cache line or as a data cache line. According to a further embodiment, the cache module may further comprise a prefetch unit coupled with the memory and the cache memory, wherein the prefetch unit is designed to automatically load instructions from the memory into another cache line when an instruction is issued from a cache line previously loaded with instructions. According to a further embodiment, the prefetch unit can be controlled to be enabled or disabled. According to a further embodiment, a least-recently-used algorithm may be used to determine which cache line is to be overwritten.
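The tag-write preload behavior described above can be sketched in C. This is a minimal software model, not the hardware itself: the struct layout, the `write_tag_and_preload` name, and the word-addressed memory array are assumptions for illustration.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of one cache line: a readable/writable address tag
 * plus an associated storage area of four 32-bit words, with validity
 * and lock bits as described. */
#define WORDS_PER_LINE 4u

typedef struct {
    uint32_t tag;                  /* address tag bit field */
    uint32_t data[WORDS_PER_LINE]; /* associated storage area */
    uint8_t  valid;                /* validity bit V */
    uint8_t  locked;               /* lock bit L: line may not be overwritten */
} cache_line_t;

/* Writing the tag triggers the preload function: the control unit fetches
 * the line-sized block starting at that address from (simulated) memory. */
static void write_tag_and_preload(cache_line_t *line, uint32_t byte_addr,
                                  const uint32_t *memory /* word-addressed */)
{
    line->tag = byte_addr;
    for (uint32_t i = 0; i < WORDS_PER_LINE; i++)
        line->data[i] = memory[byte_addr / 4u + i];
    line->valid = 1;
}
```

In the hardware described, the same effect is obtained by a CPU store to the mapped tag register; the model only mirrors the observable result.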
According to another embodiment, a method of operating a cache memory having a plurality of cache lines for storing instructions or data, each cache line having an address tag bit field, may comprise the steps of: providing an address of an instruction sequence stored in a memory; and writing the address into the address tag bit field of a cache line, whereupon an access of the memory is performed at that address and the instructions or data stored in the memory at that address are loaded into the cache line.
According to a further embodiment, the method may further comprise the step of selecting the cache line before performing the writing step. According to a further embodiment, the selecting step may be performed by loading an index for the cache line into an index register. According to a further embodiment, the step of writing the address may be performed by writing the address into a register that maps to the cache line. According to a further embodiment, the method may further comprise the step of automatically loading instructions from the memory into another cache line when an instruction is issued from a cache line previously loaded with instructions.
According to another embodiment, a method of operating a system with a central processing unit (CPU) coupled with a cache memory having a plurality of cache lines for storing instructions or data, each cache line having an address tag bit field, may comprise the steps of: executing an instruction on the CPU which writes an address into the address tag bit field of a cache line; detecting that the address tag bit field has been overwritten; and thereupon accessing the memory at that address and loading the instructions or data stored in the memory at that address into the cache line.
According to a further embodiment, the method may further comprise the step of selecting the cache line before performing the writing step. According to a further embodiment, the selecting step may be performed by loading an index for the cache line into an index register. According to a further embodiment, the step of writing the address may be performed by writing the address into a register that maps to the cache line. According to a further embodiment, the method may further comprise the step of automatically loading instructions from the memory into another cache line when an instruction is issued from a cache line previously loaded with instructions.
According to another embodiment, a cache module for a central processing unit may comprise a cache control unit comprising an interface for a memory, and a cache memory coupled with the control unit, wherein the cache memory comprises a plurality of cache lines, wherein the cache memory has a programmably assignable first group of cache lines for caching instructions and a second group of cache lines for caching data, and wherein the cache control unit comprises a programmable function which forces caching of data into the second group of cache lines while instructions are executed from the first group of cache lines.
According to another embodiment, a cache module for a central processing unit may comprise a cache control unit comprising an interface for a memory and a programmable control register, and a cache memory coupled with the control unit, wherein the cache memory comprises a plurality of cache lines, wherein the cache memory comprises a first group of cache lines for caching instructions and a second group of cache lines for caching data, and wherein the cache control unit is operable to force caching of data into the second group of cache lines when at least one bit in the control register is set.
Brief description of the drawings
A more complete understanding of the present invention may be obtained by reference to the following description taken in conjunction with the accompanying drawings, in which:
Fig. 1 illustrates a first embodiment of a configurable cache.
Fig. 2 illustrates details of a cache memory section according to the embodiment of Fig. 1.
Fig. 3 illustrates a second embodiment of a configurable cache.
Fig. 4 illustrates details of the cache lines of the cache memory according to the embodiment of Fig. 3.
Fig. 5 illustrates exemplary registers used to control functions of an embodiment of the cache memory.
Fig. 6 illustrates further registers that map the content of a cache line according to one of the embodiments.
Fig. 7 illustrates logic circuitry used to generate specific signals.
Fig. 8 illustrates a flow chart showing a simplified cache access process.
While the present invention is susceptible to various modifications and alternative forms, specific example embodiments thereof are shown in the drawings and described herein in detail. It should be understood, however, that the description of specific example embodiments is not intended to limit the invention to the particular forms disclosed herein, but on the contrary, the invention is to cover all modifications and equivalents as defined by the appended claims.
Detailed description
A standard microcontroller unit (MCU) usually comprises a microprocessor with an 8-bit or 16-bit core; 32-bit cores have only recently entered the MCU arena. None of these cores typically has a cache memory; only complex high-end 32-bit microcontrollers may have one, because for an MCU a cache is large and expensive. The disclosed embodiments provide a middle ground: a small configurable cache that can be configured on the fly, can serve as a prefetch and branch trail buffer, and at the same time provides an optimal cache depth for MCU applications.
According to one embodiment, the cache can be designed to be very flexibly configurable in operation. For instance, it can be programmed to operate strictly as a cache, which is useful for small loop optimization; to this end, the cache lines containing the loop can be manually locked. It can also dedicate a given number of cache lines (for example, up to half of the lines) to linked branch history storage, which can speed up function calls and returns. Finally, it can be configured to prefetch sequential program information into the least-recently-used cache line upon issue of the first instruction from a cache line. By prefetching program instructions at twice the rate at which the microprocessor consumes them, the memory system provides usable bandwidth to fetch program data without stalling the program instruction stream; in effect, not all program data fetches are transparent. The cache design approach according to the various embodiments provides a mechanism to improve performance by balancing the features of a low-latency cache against a high-latency but high-throughput wide memory.
According to one embodiment, the cache memory can be designed as a fully associative cache that is configurable at run time and on the fly. Fig. 1 shows a block diagram of an embodiment of such a configurable cache 100. Coupling buses 110a and 110b couple the cache memory to the central processing unit (CPU) of a microcontroller or microprocessor. The cache memory 100 comprises a cache controller 120 coupled to an instruction cache section 130 and a data cache section 140. Each instruction cache section comprises control bits and tags specific to, and associated with, the instruction memory (for example, per line), wherein a line may comprise a storage area for storing a plurality of words. For instance, a word may be 16 bits long, and a line in the instruction cache 130 may hold four double words, yielding 4x32 bits. According to one embodiment, a small instruction cache 130 may comprise four such lines. According to other embodiments, other configurations may be more advantageous depending on the design of the respective processor. According to one embodiment, the data cache section 140 may be designed similarly to the instruction cache 130. Depending on the design model, separate data and instruction cache sections 130 and 140 may be desirable, for example in a processor having a Harvard architecture. In a conventional von Neumann type microprocessor, however, a combined cache memory that caches instructions and data from the same memory may be used. Fig. 1 shows only a program flash memory 160 (PFM) connected to the instruction and data caches 130, 140, in accordance with a processor having a Harvard architecture. In a Harvard architecture, a data memory may be coupled separately, or the memory 160 may serve as a unified instruction/data memory as used in a von Neumann architecture. A multiplexer 150, controlled for example by the cache controller 120, provides the data/instructions stored in the caches 130, 140 to the CPU via bus 110b.
Fig. 2 shows the structure of the instruction cache 130 and the data cache according to an embodiment in more detail. The arrangement again shows separate caches for instructions and data. Each line of the cache comprises a data/instruction storage area and a plurality of associated control and tag bits (for example IFM, TAG, and BT). IFM denotes a mask that can be used, for example, to mask certain bits of the address tag field TAG, which contains the start address of the cached data/instructions, as explained in more detail below. Each line may, for example, comprise 4x32 bits of instruction/data cache, as shown in Fig. 2. The tag field may comprise the actual address plus additional bits indicating the validity, locking, type, etc. of the respective cache line. In addition, as shown in Fig. 2, a branch trail bit BT is provided for each cache line. When this bit is set and a subroutine call instruction executed in the respective cache line is not the last instruction in that line, the CPU can automatically lock the associated cache line. In that case, the respective cache line is locked automatically, and when the program returns from the respective subroutine, the instructions following the respective call instruction will still be present in the cache, as explained in more detail below.
Fig. 3 shows another embodiment of a configurable cache. The cache controller 120 provides the control signals and information for all functions of the cache. For instance, the cache controller 120 controls TAG logic 310, which is coupled with hit logic 320; the hit logic 320 also processes data from the cache controller 120 and from the prefetch tag 330 provided by the cache controller. The hit logic generates signals controlling a cache line address encoder 340, which addresses a cache memory 350 comprising, in this embodiment, a data/instruction memory of, for example, 16 lines, each line comprising, for example, 4x32-bit double words for instruction/data storage. The program flash memory 160 is coupled with the cache controller 120 and, via a prefetch unit 360, with the cache memory; the prefetch unit 360 is also connected to the cache line address encoder 340. The prefetch unit 360 transfers instructions, directly or into the respective cache line of the cache memory 350 addressed by the cache line address encoder 340. To this end, the prefetch unit 360 may comprise one or more buffers that can store instructions to be transferred into the storage area of a respective cache line. The multiplexer 150 is controlled to select a respective byte/word/double word from the cache memory 350 or from the prefetch buffer of unit 360 and provide it to the CPU bus 110b.
Fig. 4 shows the cache memory 350 in more detail. In this embodiment, 16 cache lines are provided. Each line comprises a plurality of control bits and a 4x32-bit instruction/data storage area (Word0 to Word3). The control bits comprise a mask MASK, an address tag TAG, a validity bit V, a lock bit L, a type bit T, and a branch trail bit BT. The mask MASK allows selected bits of the address tag TAG to be masked during the comparison performed by the hit logic 320, as explained in more detail below. The address tag TAG points to the beginning of the cache line in the instruction memory 160. As will be explained in more detail below, the address tag TAG is readable and writable, and a write by the user forces a prefetch function. The validity bit V indicates that the entry in the associated cache line is valid; this bit cannot be changed by the user and is set or reset automatically. The lock bit L indicates whether the cache line is locked and therefore cannot be overwritten; this bit may be changed by the user or may be set automatically in connection with the branch trail function, as explained below. Bit T indicates the type of the cache line, i.e., whether the cache line serves as an instruction cache line or as a data cache line. This bit can be designed to be changeable by the user, which allows very flexible assignment and configuration of the cache. Instead of using a single assignable bit T to designate certain cache lines as data cache lines, a general configuration register may be used to define a given number of individual lines to be used for caching data, with the remaining cache lines used for instruction caching. In such an embodiment, a bit T may still be provided to indicate which cache lines have been designated as data cache lines, in which case the bit T cannot be modified. As will be explained later, a cache according to one embodiment may, for example, be configured with zero, one, two, or four cache lines used for data caching purposes. This assignment thus splits the cache into two parts; for example, depending on the number of lines assigned, data cache lines may be assigned upward from the bottom of the cache. Other configurations with more data cache lines are of course possible, depending on the respective cache design. Thus, when bit T is set, it indicates that the line is used for data caching.
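The masked tag comparison performed by the hit logic can be sketched as follows. This is an illustrative model under stated assumptions: a set MASK bit means "ignore this address bit", the line size is 16 bytes, and the function name `tag_hit` is invented for the sketch.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical hit check: the per-line MASK field excludes selected tag
 * bits from the comparison, and the low 4 bits (the byte offset within
 * a 16-byte line) are always ignored. */
static int tag_hit(uint32_t line_tag, uint32_t line_mask, uint32_t fetch_addr)
{
    uint32_t care = ~(line_mask | 0xFu); /* bits that must match */
    return (line_tag & care) == (fetch_addr & care);
}
```

Masking tag bits this way lets one line answer for a whole aliased address range, which is one use of the IFM/MASK field described above.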
Fig. 7 shows an embodiment of logic circuitry that can be used to implement the branch trail function. As explained above, the branch trail bit 750 is used to automatically lock the associated cache line when a branch-to-subroutine, return, call, trap, interrupt, or other such instruction executed in the cache line is not the last instruction in that line. When this bit is set and a subroutine-type call instruction has been executed so that the program branches away from its linear execution sequence, the CPU can automatically lock the associated line by setting bit 740 via logic gate 760. Execution of such a subroutine-type instruction can be detected in the execution unit and signaled to logic gate 760 by signal 770. This functionality is enabled when at least one not-yet-executed instruction, which the program will execute on return from the respective subroutine, remains in the cache line. If such an instruction occupies the last storage location of a cache line, there is no need to automatically lock the cache line, because the subsequent instructions will be in a different cache line or may not even be in the cache at all. When bit 750 is set, the CPU automatically sets and resets lock bit 740 in accordance with the execution of a respective subroutine or interrupt call, which is signaled to logic gate 760 by detection signal 770.
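The auto-lock condition above reduces to a small predicate: lock only when BT is set and the call is not the last word of its line. The sketch below assumes 16-byte lines of four 32-bit instructions; the name `should_autolock` is invented for illustration.

```c
#include <assert.h>
#include <stdint.h>

#define LINE_BYTES 16u /* four 32-bit instruction words per line */

/* Hypothetical branch-trail decision: with BT set, a call/trap/interrupt
 * taken from a line is only worth locking when at least one instruction
 * after the call remains in that line for the return path. */
static int should_autolock(int bt_set, uint32_t call_addr)
{
    uint32_t offset = call_addr % LINE_BYTES; /* byte offset within the line */
    int not_last = offset < LINE_BYTES - 4u;  /* call is not the last word */
    return bt_set && not_last;
}
```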
Fig. 5 and Fig. 6 show examples of a general cache control register 510 and further control registers 610 to 660 that may be implemented in a microprocessor or microcontroller to control the behavior and functionality of the configurable cache. All registers can be designed as 32-bit registers for use in a 32-bit environment; however, they can easily be adapted to 16-bit or 8-bit environments. For instance, register CHECON comprises bit 31 for enabling or disabling the entire cache, and bit 16 CHECOH can be set to enforce cache coherency on a PFM program cycle. For instance, when set, CHECOH may invalidate all data and instruction lines, or may invalidate all data lines and only the unlocked instruction lines. Bit 24 can be used to enable a forced data-cache function, as explained in more detail below; when set, this function forces data caching whenever the cache bandwidth is not being used for fetching instructions. Bits 11-12 BTSZ can be used to enable or disable the branch trail tagging. For instance, in one embodiment, when enabled, the branch trail tagging can be set to a size of 1, 2, or 4 lines, so that 1, 2, or 4 cache lines have this functionality. According to other embodiments, all cache lines may be enabled for this functionality. Bits 8-9 DCSZ define the number of data cache lines, as explained above. In one embodiment, this number can be set to enable 0, 1, 2, or 4 data cache lines.
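The CHECON layout described can be sketched as C macros. The field positions (bit 31 enable, bit 24 forced data cache, bit 16 CHECOH, bits 11-12 BTSZ, bits 8-9 DCSZ, bits 4-5 PREFEN, bits 0-2 flash wait states) follow the text; the exact value encodings, such as mapping DCSZ field values onto 0/1/2/4 lines, are assumptions for illustration.

```c
#include <assert.h>
#include <stdint.h>

#define CHECON_ON        (1u << 31)                   /* bit 31: cache enable */
#define CHECON_FORCEDC   (1u << 24)                   /* bit 24: forced data cache */
#define CHECON_CHECOH    (1u << 16)                   /* bit 16: coherency on PFM program */
#define CHECON_BTSZ(n)   (((uint32_t)(n) & 3u) << 11) /* bits 11-12: branch trail size */
#define CHECON_DCSZ(n)   (((uint32_t)(n) & 3u) << 8)  /* bits 8-9: data cache lines */
#define CHECON_PREFEN(n) (((uint32_t)(n) & 3u) << 4)  /* bits 4-5: predictive prefetch */
#define CHECON_PFMWS(n)  ((uint32_t)(n) & 7u)         /* bits 0-2: flash wait states */

/* Assumed decoding of the DCSZ field into a number of data cache lines. */
static uint32_t checon_dcsz_lines(uint32_t reg)
{
    static const uint32_t lines[4] = { 0u, 1u, 2u, 4u };
    return lines[(reg >> 8) & 3u];
}
```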
Bits 4-5 PREFEN can be used to selectively enable predictive prefetch for cacheable and non-cacheable regions of memory. A cacheable region of memory may, for example, be a memory or program region that can actually be cached, meaning a memory region actually coupled with the cache. A non-cacheable region generally refers, for example, to memory-mapped peripheral space, which usually cannot be cached. The criterion distinguishing cacheable from non-cacheable regions depends on the system design. Some embodiments may require this distinction, with the respective microprocessor/microcontroller supporting cacheable/non-cacheable methods, while other processor embodiments may cache any type of memory, whether it is an actual memory region or a memory-mapped region.
If enabled, the prefetch unit will always fetch the instructions following the cache line from which instructions are currently being issued. Using two bits allows, for example, four different settings: enable predictive prefetch for both cacheable and non-cacheable regions, enable it only for non-cacheable regions, enable it only for cacheable regions, and disable predictive prefetch. According to one embodiment, assume a cache line comprises 16 bytes, or four double words. For instance, if the CPU requests instruction x1 from address 0x001000, the cache control logic compares all address tags with 0x00100X, where the bits X are ignored. If the controller generates a hit, the corresponding line is selected. The selected line contains all instructions starting at address 0x001000. Thus, with each instruction being 32 bits long, the first instruction is issued to the CPU and the prefetch unit is triggered to prefetch the next line. To this end, the prefetch unit computes 0x001010 as the subsequent address tag and begins loading the corresponding instructions into the next available cache line. While the CPU continues executing the instructions from addresses 0x001004, 0x001008, and 0x00100C, the prefetch unit fills the next available cache line with the instructions from addresses 0x001010, 0x001014, 0x001018, and 0x00101C. The prefetch unit will have finished loading the subsequent instructions before the CPU finishes executing the instructions of the currently selected cache line. The CPU is therefore not stalled.
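The address arithmetic in this walkthrough can be captured in two helpers, assuming the 16-byte line size from the text; the function names are invented for the sketch.

```c
#include <assert.h>
#include <stdint.h>

#define LINE_BYTES 16u /* 16 bytes = four 32-bit instruction words */

/* Base address of the 128-bit-aligned line containing addr
 * (the 0x00100X comparison with the low bits X ignored). */
static uint32_t line_base(uint32_t addr) { return addr & ~(LINE_BYTES - 1u); }

/* Predicted prefetch target: the next sequential 128-bit-aligned line. */
static uint32_t next_line(uint32_t addr) { return line_base(addr) + LINE_BYTES; }
```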
Referring back to Fig. 5, bits 0-2 define the number of wait states of the program flash memory. Thus, a variety of flash memories can be used with the microcontroller.
Each line in the cache memory as shown in Fig. 4 can be mapped, under control, to registers as shown in Fig. 6. Thus, a cache line can be designed to be fully accessible by read and write operations and fully alterable by the user. However, as indicated above, some bits of a cache line may, by design, not be alterable by the user, or the corresponding line may need to be unlocked before the user can alter it. To this end, an index register 600 may be provided for selecting one of the 16 cache lines. Once a cache line has been selected by index register 600, it can be accessed through registers 610-660. A mask register may contain the mask MASK of the selected cache line, for example in bits 5-15. A second register for the tag may carry the address tag in bits 4-23 and may also comprise the bits V, L, T, and BT indicating the validity, lock status, type, and branch trail function of the selected line. Finally, four 32-bit registers Word0, Word1, Word2, and Word3 may be provided for the cached data or instructions of the selected line. Other control registers may be implemented to control general functions of the cache. Thus, each cache line can be accessed and manipulated by a user or by software, as explained in more detail below.
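The index-register access model of Fig. 6 can be sketched as a small software view: write the index register to select a line, then read that line's words through the mapped Word registers. The struct and function names are assumptions; real hardware exposes these as memory-mapped special-function registers.

```c
#include <assert.h>
#include <stdint.h>

#define NUM_LINES      16u
#define WORDS_PER_LINE 4u

typedef struct {
    uint32_t word[WORDS_PER_LINE]; /* Word0..Word3 of one cache line */
} line_t;

typedef struct {
    line_t   line[NUM_LINES];
    uint32_t index; /* models index register 600 */
} cache_regs_t;

/* Selecting a line makes its contents visible through the Word registers. */
static void select_line(cache_regs_t *c, uint32_t idx) { c->index = idx % NUM_LINES; }

static uint32_t read_word(const cache_regs_t *c, unsigned n)
{
    return c->line[c->index].word[n % WORDS_PER_LINE];
}
```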
According to the disclosed embodiments, the cache 100, 300 is designed to respond to an initial CPU instruction fetch by fetching, for example, a 128-bit-aligned set of instruction words (called a line) from the PFM 160. The actual requested instruction can reside anywhere within that line. The line is stored into the cache 130, 350 (a fill), and the instruction is returned to the CPU. This access can take multiple clock cycles and stall the CPU. For instance, for a 40 ns access flash at 80 MHz, the access can cause 3 wait states. However, once a line is cached, subsequent accesses to instruction addresses present in that line occur with zero wait states.
If the cache is enabled, this process continues for each instruction address that misses the cache. In this way, if a small loop is 128-bit aligned and is the same size as, or smaller than, the cache 130, 350 in bytes, the loop can execute from the cache with zero wait states. For a fully filled loop, the 4-line cache 130 with 32-bit instructions shown in Fig. 1 executes one instruction per clock; in other words, the CPU executes all instructions stored in the cache 130 in 16 clocks. If only 128-bit-wide fetches were supported, the same loop would take a given number of wait states per line for the fetch (for example, 3 wait states) plus a given number of clocks for execution (for example, 4 clocks), which would result, for example, in 7 clocks per 4 instructions. This example yields a total loop time of 28 clocks.
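The 16-versus-28-clock comparison above is simple arithmetic and can be written out explicitly; the function names are invented, and the model assumes the figures used in the text (one instruction per clock when cached, per-line wait states otherwise).

```c
#include <assert.h>
#include <stdint.h>

/* Clocks for a loop executing entirely from the cache:
 * one instruction per clock once the lines are resident. */
static uint32_t clocks_cached(uint32_t n_instr)
{
    return n_instr;
}

/* Clocks without caching: each line pays the flash wait states
 * for the fetch plus one clock per instruction executed. */
static uint32_t clocks_uncached(uint32_t n_lines, uint32_t wait_states,
                                uint32_t instr_per_line)
{
    return n_lines * (wait_states + instr_per_line);
}
```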
The embodiment in Fig. 1 comprises a two-line data cache to exploit the spatial locality of constants and table data that can be stored in the PFM 160. In other embodiments, however, this cache can be larger and connected to a data memory.
Furthermore, as explained above, the caches shown in Fig. 1 and Fig. 3 can also implement prefetching, which allows the given number of wait states required for fetching the 128-bit-wide instruction stream to be avoided. If prefetch is enabled, the cache 100, 300 uses the least-recently-used line to perform the fill at the predicted address. The predicted address is simply the next sequential 128-bit-aligned address, as explained in detail above in the example using actual addresses. Thus, while instructions execute from a cache line, if the predicted address is not yet in the cache, the cache generates a flash memory access. With the CPU running at a frequency that requires, for example, 3 wait-state accesses to the flash memory, the fetch of the predicted address completes within the cycle in which the CPU needs the predicted instruction. In this way, for linear code, CPU instruction fetches can run with zero wait states.
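Choosing the least-recently-used line for the predicted-address fill can be sketched as below. Age counters stand in for the real replacement state, and locked lines (manually locked or branch-trail locked) are never evicted; the function name and the -1 "all locked" convention are assumptions of the sketch.

```c
#include <assert.h>
#include <stdint.h>

#define NUM_LINES 16u

/* Hypothetical LRU victim choice: pick the unlocked line with the
 * largest age counter. Returns -1 if every line is locked. */
static int lru_victim(const uint32_t age[NUM_LINES],
                      const uint8_t locked[NUM_LINES])
{
    int victim = -1;
    uint32_t oldest = 0;
    for (unsigned i = 0; i < NUM_LINES; i++) {
        if (!locked[i] && (victim < 0 || age[i] > oldest)) {
            victim = (int)i;
            oldest = age[i];
        }
    }
    return victim;
}
```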
The branch trail feature checks instructions as branch-with-link and jump-with-link instructions execute in the CPU, preserving the cache line for future use. This feature enhances the performance of function call returns by preserving any instructions in the line following the branch or jump instruction.
The program flash memory cache 160 and prefetch module 120, 360 provide enhanced performance for applications executing out of the cacheable program flash memory region. The performance enhancement is realized in three different ways.
The first way is the module's cache capability. A 4- or 16-line instruction cache 130, 350, as shown in Fig. 1 and Fig. 3, has the ability to supply one instruction per clock to a loop of up to 16/64 instructions for 32-bit opcodes and up to 32/128 instructions for 16-bit opcodes. Other cache sizes and organizations are applicable. The embodiment shown in Fig. 1 also provides the ability to cache two lines of data, thereby providing improved access to the data items in those lines. The embodiment shown in Fig. 3 provides a more flexible assignment of data cache line sizes by setting a split point or by individually assigning a cache memory type to each line (as explained above).
Second, when prefetching is enabled, the module provides one instruction per clock for linear code, thereby hiding the flash access time. Third, the module can allocate one or two instruction cache lines to linked-branch history instructions. When a jump-with-link or branch-with-link instruction occurs in the CPU, the most recent line is marked as a branch history line and preserved for the return from the call.
Module enable
According to one embodiment, the module can be enabled after reset by setting a bit (for example, bit 31 ON/OFF in the CHECON register; see Fig. 5). Clearing this bit will do the following:
Disable all caching, prefetching, and branch history functionality, and reset the state of the cache.
Set the module to bypass mode.
Allow special function register (SFR) reads and writes.
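As a behavioral sketch, the enable/disable sequence might look as follows. The bit position (31, ON/OFF in CHECON) is from the text; the register model and helper name are hypothetical.

```python
CHECON_ON = 1 << 31   # bit 31 ON/OFF in the CHECON register (see Fig. 5)

checon = 0            # register state after reset: module disabled

def set_module_enable(enable):
    """Set or clear the ON/OFF bit; clearing also implies bypass mode."""
    global checon
    if enable:
        checon |= CHECON_ON
    else:
        checon &= ~CHECON_ON
    return bool(checon & CHECON_ON)
```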
Operation in power-saving modes
Sleep mode
According to one embodiment, when the device enters sleep mode, the clock control block stops the clock to the cache module 100, 300.
Idle mode
According to one embodiment, when the device enters idle mode, the clock source for the cache and prefetch continues to run, but the CPU stops executing code. Any outstanding prefetch is completed before module 100, 300 stops its clock via automatic clock gating.
Bypass behavior
According to one embodiment, the default mode of operation is bypass. In bypass mode, the module accesses the PFM for every instruction, incurring the flash access time defined by the PFMWS bits in register CHECON (see Fig. 5).
Cache behavior
According to Fig. 1, the cache and prefetch module can implement a fully associative 4-line instruction cache. Depending on the design, more or fewer cache lines can be provided. The instruction/data storage area in a cache line, together with its associated control bits, can be designed to be cleared during a flash programming sequence or when the corresponding bit in the general control register CHECON is set to logic zero. Each line uses a register or bit field containing the flash address tag. Each line can consist of 128 bits (16 bytes) of instructions, regardless of instruction size. To simplify access, the cache and prefetch module according to Fig. 1 and Fig. 3 may request only 16-byte-aligned instruction data from flash 160. According to one embodiment, if the address requested by the CPU is not aligned on a 16-byte boundary, the module aligns the address by discarding address bits [3:0].
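Discarding address bits [3:0] to align a request on a 16-byte line boundary can be sketched as:

```python
LINE_SIZE = 16  # bytes per cache line (128 bits)

def align_to_line(addr):
    """Drop address bits [3:0] so only 16-byte-aligned fetches go to flash."""
    return addr & ~(LINE_SIZE - 1)
```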
When configured as a cache only, the module works like any cache by loading multiple instructions into a line on a miss. According to one embodiment, the module can use a simple least recently used (LRU) algorithm to select which line receives the new instructions. When the cache controller detects a miss, it uses the wait-state value in register CHECON to determine how long it must wait for the flash access. On a hit, the cache returns data with zero wait states.
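A minimal model of the simple LRU fill described above. The 4-line fully associative organization and the hit/miss behavior follow the text; the data structure and class name are ours.

```python
from collections import OrderedDict

class LruCache:
    """4-line fully associative instruction cache with simple LRU fill."""
    def __init__(self, lines=4):
        self.lines = lines
        self.tags = OrderedDict()     # tag -> line data; order = LRU order

    def access(self, tag):
        """Return True on hit (zero wait states), False on miss (fill)."""
        if tag in self.tags:          # hit: mark the line most recently used
            self.tags.move_to_end(tag)
            return True
        if len(self.tags) >= self.lines:
            self.tags.popitem(last=False)   # evict the least recently used line
        self.tags[tag] = "line data"        # miss: fill after the wait states
        return False
```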
The instruction cache works differently depending on the prefetch and branch trail selections. If the code is 100% linear, cache-only mode will return instructions to the CPU with a timing of the corresponding PFMWS cycles, where PFMWS is the number of wait states.
Masking
Further flexible use of the cache can be achieved with a mask bit field. Fig. 7 shows a possible logic circuit for implementing the masking function. A bit field 710 of a cache line contains, for example, 11 bits, which can be used to mask certain bits of the address tag 720. The 11 bits of the mask bit field 710 mask the lower bits 0-10 of the address tag 720. When comparator 780 compares the address tag 720 with the requested address 790, any bit set to "1" in the mask bit field 710 causes the corresponding bit in the address tag to be ignored. If the instruction/data storage area comprises 16 bytes, the address tag does not include the lower 4 bits of the actual address; in a system using 24 address bits, the comparator thus compares bits 0-19 of the address tag with bits 4-23 of the actual address. Through mask 730, however, comparator 780 can be forced to compare only a fraction of the address tag 720 with the corresponding fraction of the actual address 790. Thus, multiple addresses can produce a hit. This functionality can be used particularly advantageously with interrupts or trap instructions that cause branches to predefined addresses in the instruction memory. For example, an interrupt can cause a branch to the memory address containing the interrupt service routine, the address being defined by an interrupt base address plus an offset defined by the interrupt priority. For instance, a priority 0 interrupt branches to address 0x000100, a priority 1 interrupt branches to address 0x000110, a priority 2 interrupt branches to address 0x000120, and so on. Trap instructions can be organized similarly and can produce a similar branching pattern. Assuming the interrupt service routines of a given number of priorities are identical for at least a predefined number of instructions, these addresses can, using the masking function, all branch into the same cache line containing the start of the service routine. For example, if the first four 32-bit instructions of the interrupt service routines for priority levels 0-3 are identical, the mask bit field of the cache line containing the instructions starting at address 0x000100 can be set so that the tag bits distinguishing those entry points are ignored, which will cause all addresses from 0x000100 through 0x000130 to hit. Thus, not only an interrupt with priority 0 will produce a hit, but interrupts with priorities 1, 2, and 3 will also produce hits, and all will jump into the same instruction sequence already loaded in the cache. No penalty for accessing the flash memory will therefore occur.
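The masked comparison of Fig. 7 can be modeled as follows. The mask semantics (a "1" makes the corresponding tag bit ignored) are from the text; the two-bit mask value used below is our reading of the 0x000100-0x000130 interrupt example.

```python
TAG_SHIFT = 4          # tags exclude the low 4 bits of a 16-byte line

def masked_hit(line_tag, mask, request_addr):
    """Compare the tag with the request address, ignoring masked tag bits."""
    request_tag = request_addr >> TAG_SHIFT
    return (line_tag & ~mask) == (request_tag & ~mask)

# Line holding the shared interrupt service routine start at 0x000100.
tag = 0x000100 >> TAG_SHIFT
mask = 0b11            # ignore tag bits 0-1, i.e. address bits 4-5
```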
Prefetch behavior
The bit field PREFEN of control register CHECON, or a corresponding single bit (see Fig. 5), can be used to enable the prefetch function. When configured for prefetching, module 100, 300 predicts the next line address and returns it into the LRU line of cache 130, 350. The prefetch function begins predicting based on the first CPU instruction fetch. Once the first line has been placed in cache 130, 350, the module simply increments the address to the next 16-byte-aligned address and starts a flash access. The flash memory 160 can return the next set of instructions before or as all current instructions have executed.
If, at any time during the predicted flash access, a new CPU address does not match the predicted address, the flash access is changed to the correct address. This behavior does not make the CPU access take longer than it would without prediction.
If the predicted flash access completes, the instructions are placed in the LRU line together with their address tag. The LRU indication is not updated until a CPU address hits the line. If the hit is on the line just prefetched, that line is marked most recently used and the other lines are updated accordingly. If the hit is on another line in the cache, the algorithm adjusts accordingly, but the line just prefetched remains the LRU line. If the access misses cache 130, 350, the access goes to flash and the returned instructions are placed in the LRU line (which is the most recently filled, but as yet unused, prefetched line).
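The next-line prediction is just the next sequential 16-byte-aligned address; a sketch with the mid-access redirect check (function names are ours):

```python
LINE = 16  # bytes per 128-bit cache line

def predict_next(fetch_addr):
    """Predict the next sequential 16-byte-aligned fetch address."""
    return (fetch_addr & ~(LINE - 1)) + LINE

def prefetch_target(predicted, new_cpu_addr=None):
    """If a new CPU address disagrees mid-access, redirect the flash access."""
    if new_cpu_addr is not None and (new_cpu_addr & ~(LINE - 1)) != predicted:
        return new_cpu_addr & ~(LINE - 1)
    return predicted
```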
According to one embodiment, as indicated above, data prefetching can be selectively turned on or off. According to another embodiment, if a dedicated bit in a control register (for example, CHECON) is set to logic 1, a data access arriving midway through an instruction prefetch can cause the instruction prefetch to abort. If this bit is set to logic 0, the data access completes after the instruction prefetch finishes.
Branch trail behavior
The cache can be partitioned, for example through the bit field BTSZ in control register CHECON (see Fig. 5), to use one or more lines of the instruction cache for branch trail instructions. When the CPU requests a new address, such as one computed from a branch-with-link or jump-with-link instruction, the branch trail line is the most recently used cache line. According to one embodiment, when module 100, 300 marks the MRU cache line as a branch trail line, it can also deallocate the LRU branch trail line, returning it for use as a general-purpose cache line.
As explained above, if the last access was from the last instruction in the MRU line (the highest location), the line is not marked as a branch trail line, and the module does not deallocate any of the existing lines from the branch trail section of the cache.
Preload behavior
Application code can direct module 100, 300 to preload a cache line with instructions from flash memory 160 and to lock that line. The preload function uses the LRU line from among the lines marked as cache type (that is, not branch trail).
According to one embodiment, the address tag bit field in a cache line can be directly accessed, and a user can write any value into this bit field. Such a write causes a forced preload of the cache line with the contents of the corresponding line of the addressed flash memory. Thus, the preload works by writing an address into the address tag bit field of a cache line, whereupon the corresponding line is preloaded from memory. According to one embodiment, this action invalidates the line before the flash is accessed to retrieve the instructions. After the preload, the line can be accessed by the central processing unit to execute the corresponding instructions.
According to one embodiment, this functionality can be used to implement very flexible debug functionality without changing the code in the program memory. Once the line containing the instruction that needs a breakpoint during a debug sequence has been identified, that line can be tagged for preload with the particular address. The contents of the cache line can then be modified to include debug instructions. For example, system software can automatically replace an instruction in the cache line to generate a breakpoint or to execute a subroutine of any other type. Once the respective code has executed, the instruction can be replaced with the original instruction, and the stack can be altered to return to the same address from which the debug routine was executed. The preload functionality thus allows code within the system to be changed very flexibly.
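A behavioral sketch of the preload-driven breakpoint flow described above. The flash model, class, and method names are hypothetical; only the mechanism (tag write invalidates, then preloads; a preloaded instruction is patched and later restorable) follows the text.

```python
class PreloadLine:
    """One cache line whose tag write triggers a preload from 'flash'."""
    def __init__(self, flash):
        self.flash = flash           # dict: aligned address -> instruction list
        self.tag = None
        self.valid = False
        self.data = []

    def write_tag(self, addr):
        """Writing the tag invalidates the line, then preloads it from flash."""
        self.valid = False
        self.tag = addr & ~0xF       # lines are 16-byte aligned
        self.data = list(self.flash[self.tag])
        self.valid = True

    def patch(self, index, debug_instr):
        """Replace one preloaded instruction, e.g. with a breakpoint."""
        original = self.data[index]
        self.data[index] = debug_instr
        return original              # kept so it can be restored later

flash = {0x100: ["i0", "i1", "i2", "i3"]}
line = PreloadLine(flash)
line.write_tag(0x100)
saved = line.patch(2, "BREAK")
```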
According to another embodiment, a cache line can be locked by a lock bit, or potentially by the branch trail bit, in which case write accesses to that cache line can be inhibited. Thus, only unlocked cache lines are writable. If this functionality is implemented, the user must first unlock the cache line before a new address tag can be written into it to force the cache controller to load the corresponding instructions or data from memory. The same applies to write accesses to the instruction/data storage area.
The feature of actively loading the cache with specified instructions can be particularly useful in combination with the masking function explained above. For example, if many interrupt service routines start with the same instruction sequence, this instruction sequence can be forced into the cache by writing the respective service routine address into the address tag, causing the respective cache line to be preloaded with the interrupt service routine instructions. By setting the corresponding mask and locking the respective cache line as explained above, the cache can be preconfigured so that the program reacts to certain interrupts without any flash access penalty. Certain routines can thus always be accessed through the cache.
Reset and initialization
After reset, all cache lines are immediately marked invalid and the cache features are disabled. For example, through register CHECON, the wait states are reset to their maximum wait-state value (allowing bypass accesses after reset).
When any flash programming begins, module 100, 300 forces the cache to its reset values. Any accesses performed by the CPU are stalled until the programming cycle finishes. Once the programming cycle completes, the pending CPU access proceeds by being switched through to the flash. The returning instruction completes according to the value defined in the configuration register.
Flash prefetch buffer (FPB)
According to one embodiment, the flash prefetch buffer (see Fig. 3) can be designed as a simple buffer, for example a latch or register 365. In one embodiment, it can be designed to prefetch up to a total of 8 CPU instructions when the core operates in 16-bit instruction mode, or 4 instructions when operating in 32-bit instruction mode, with 4 panels to allow use of the x32-bit flash memory. The FPB, implemented in cache controller 120, prefetches in a linear fashion to ensure that instructions can be fed into the core without stalling it. According to one embodiment, the FPB can contain 2 buffers of 16 bytes each. Each buffer tracks instruction address fetches. If a branch goes outside the current buffered instruction boundary, the alternate buffer is used (causing an initial stall, but linear code fetches are then cached). Each instruction fetch forces the FPB to grab up to 16 subsequent linear bytes to fill the buffer.
According to another embodiment, an optional, programmable forced data cache operation can be implemented through the prefetch buffer. Once the cache is filled with one or more instruction lines, the instructions can be executed sequentially without fetching further instruction lines for a certain period of time. This is especially true because the execution time of the instructions in a single cache line can be twice as long as, or even longer than, the time needed to load the cache line into the cache. Moreover, if one or more cache lines contain a loop being executed, no other instructions may need to be cached for a relatively long time during the loop's lifetime. According to one embodiment, this time can be used to cache data, for example a relatively large amount of data in a table to be used, etc. The cache can be programmed through a register (for example, bit 23 DATAPREFEN in register CHECON; see Fig. 5) to perform an additional data caching function when the cache bandwidth is not being used for fetching instructions. This can be useful in applications that use data tables which need to be loaded into the cache. The data fetch can take place after the initial first fill while still allowing the core to continue using the prefetched instructions from the cache line. According to one embodiment, when the function bit DATAPREFEN is set, a data line can be fetched automatically after each instruction fetch. Alternatively, according to another embodiment, forced data caching can be in effect whenever the corresponding DATAPREFEN bit is set. Thus, for example, forced data caching can be started and stopped by setting the corresponding bit. In another embodiment, forced data caching can be performed automatically whenever the cache pauses instruction loading for a period of time. If multiple control bits are provided, programmable combinations of different data cache modes can be implemented.
Fig. 8 shows a simplified flash memory request using the cache and prefetch functions according to one embodiment. The flash memory request begins at step 800. First, step 805 determines whether the request is cacheable. If the request is cacheable, step 810 determines whether the supplied address produces a cache hit. If so, according to one embodiment, the process can branch into two parallel processes; other embodiments may execute these processes sequentially. The first branch begins with step 812, which determines whether a subroutine call has been requested. If not, the first parallel process ends. If so, step 815 determines whether the branch trail bit has been set in the respective cache line. If so, step 820 determines whether the call is the last instruction in the cache line. If so, the first parallel process ends. If not, the respective cache line is locked in step 830. The second parallel process begins in step 835, in which the instruction is returned from the cache and the least recently used algorithm is executed to update the state of the cache lines. If no cache hit is produced in step 810, or if the request is not cacheable, step 840 determines whether the prefetch buffer produces a hit. If the prefetch buffer contains the requested instruction, the requested instruction is returned in step 845. Otherwise, a flash access is performed in step 850, which stalls the CPU. In step 855, following step 850, the flash request can fill a cache line if a cache line is available for performing the cache function. The routine ends with step 860.
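The decision flow of Fig. 8 can be summarized in code; the step numbers from the figure appear as comments, and the predicate and helper names are ours.

```python
def flash_request(cacheable, cache_hit, prefetch_hit,
                  is_call=False, branch_trail_set=False, call_is_last=False):
    """Return (source, line_locked) for one simplified Fig. 8 request."""
    locked = False
    if cacheable and cache_hit:                      # steps 805, 810
        if is_call and branch_trail_set and not call_is_last:
            locked = True                            # steps 812-830: lock line
        return ("cache", locked)                     # step 835: return + LRU update
    if prefetch_hit:                                 # step 840
        return ("prefetch_buffer", locked)           # step 845
    return ("flash", locked)                         # steps 850-855: CPU stalls
```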
Although embodiments of the invention have been depicted, described, and defined with reference to exemplary embodiments thereof, such references do not imply a limitation on the invention, and no such limitation is to be inferred. The subject matter disclosed is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those skilled in the pertinent art having the benefit of this disclosure. The depicted and described embodiments of the invention are examples only and are not exhaustive of the scope of the invention.

Claims (23)

1. A cache module for a central processing unit, comprising:
a cache control unit comprising an interface for a memory; and
a cache memory coupled with the control unit, wherein the cache memory comprises a plurality of cache lines, at least one cache line of the plurality of cache lines comprising an address tag bit field and an associated storage area for storing instructions or data, wherein the address tag bit field is readable and writeable, and wherein the cache control unit is operable, upon detecting that an address has been written to the address tag bit field, to initiate a preload function in which instructions or data from the memory are loaded from the address into the at least one cache line.
2. The cache module according to claim 1, further comprising an index register for accessing the at least one cache line through at least one associated register.
3. The cache module according to claim 1, further comprising a register mapping the address tag field for read and write access.
4. The cache module according to claim 1, wherein the at least one cache line further comprises a lock bit for locking the at least one cache line against being overwritten.
5. The cache module according to claim 1, wherein the at least one cache line further comprises at least one control bit field, wherein the control bit field is coupled with the address tag bit field to mask a predefined number of bits in the address tag bit field.
6. The cache module according to claim 1, wherein at least one other cache line comprises at least one branch trail bit for automatically locking the at least one other cache line, wherein, in the event that the branch trail bit is set, a lock bit is set automatically when a predefined instruction stored in the associated storage area has been issued.
7. The cache module according to claim 1, wherein each cache line further comprises a validity control bit for indicating the validity of the associated cache line.
8. The cache module according to claim 1, wherein each cache line further comprises a type control bit for indicating whether the cache line serves as an instruction cache line or a data cache line.
9. The cache module according to claim 1, further comprising a prefetch unit coupled with the memory and the cache memory, wherein the prefetch unit is designed to automatically load instructions from the memory into another cache line while instructions from a cache line previously loaded with instructions are being issued.
10. The cache module according to claim 9, wherein the prefetch unit is controllable to be enabled or disabled.
11. The cache module according to claim 9, wherein a least recently used algorithm is used to determine which cache line will be overwritten.
12. A method of operating a cache memory having a plurality of cache lines for storing instructions or data, each cache line having an address tag bit field, the method comprising the steps of:
providing an address for an instruction sequence stored in a memory;
writing the address into the address tag bit field of a cache line, whereupon an access to the memory is performed at the address and the instructions or data stored in the memory at the address are loaded into the cache line.
13. The method according to claim 12, further comprising the step of selecting the cache line before performing the writing step.
14. The method according to claim 13, wherein the selecting step is performed by writing an index for the cache line into an index register.
15. The method according to claim 12, wherein the step of writing the address is performed by writing the address into a register that maps to the cache line.
16. The method according to claim 12, further comprising the step of automatically loading instructions from the memory into another cache line while instructions from a cache line previously loaded with instructions are being issued.
17. A method of operating a system having a central processing unit (CPU) coupled with a cache memory having a plurality of cache lines for storing instructions or data, each cache line having an address tag bit field, the method comprising the steps of:
executing an instruction in the CPU, the instruction writing an address into the address tag bit field of a cache line;
detecting that the address tag bit field has been overwritten, and thereupon
accessing a memory at the address and loading the instructions or data stored in the memory at the address into the cache line.
18. The method according to claim 17, further comprising the step of selecting the cache line before performing the writing step.
19. The method according to claim 18, wherein the selecting step is performed by writing an index for the cache line into an index register.
20. The method according to claim 17, wherein the step of writing the address is performed by writing the address into a register that maps to the cache line.
21. The method according to claim 17, further comprising the step of automatically loading instructions from the memory into another cache line while instructions from a cache line previously loaded with instructions are being issued.
22. A cache module for a central processing unit, comprising:
a cache control unit comprising an interface for a memory; and
a cache memory coupled with the control unit, wherein the cache memory comprises a plurality of cache lines, wherein the cache memory has a programmable assignment of a first group of cache lines for caching instructions and a second group of cache lines for caching data, and wherein the cache control unit comprises a programmable function that forces data caching into the second group of cache lines while instructions from the first group of cache lines are being executed.
23. A cache module for a central processing unit, comprising:
a cache control unit comprising an interface for a memory and a programmable control register, and a cache memory coupled with the control unit, wherein the cache memory comprises a plurality of cache lines, wherein the cache memory comprises a first group of cache lines for caching instructions and a second group of cache lines for caching data, and wherein the cache control unit is operable, when at least one bit in the control register is set, to force data caching into the second group of cache lines.
CN2007800461129A 2006-12-15 2007-12-12 Configurable cache for a microprocessor Active CN101558391B (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US87018806P 2006-12-15 2006-12-15
US60/870,188 2006-12-15
US87062206P 2006-12-19 2006-12-19
US60/870,622 2006-12-19
US11/928,242 2007-10-30
US11/928,242 US9208095B2 (en) 2006-12-15 2007-10-30 Configurable cache for a microprocessor
PCT/US2007/087238 WO2008085647A1 (en) 2006-12-15 2007-12-12 Configurable cache for a microprocessor

Publications (2)

Publication Number Publication Date
CN101558391A true CN101558391A (en) 2009-10-14
CN101558391B CN101558391B (en) 2013-10-16

Family

ID=41175633

Family Applications (3)

Application Number Title Priority Date Filing Date
CN2007800461129A Active CN101558391B (en) 2006-12-15 2007-12-12 Configurable cache for a microprocessor
CN200780046103.XA Active CN101558390B (en) 2006-12-15 2007-12-12 Configurable cache for a microprocessor
CN200780046003.7A Active CN101558393B (en) 2006-12-15 2007-12-14 Configurable cache for a microprocessor

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN200780046103.XA Active CN101558390B (en) 2006-12-15 2007-12-12 Configurable cache for a microprocessor
CN200780046003.7A Active CN101558393B (en) 2006-12-15 2007-12-14 Configurable cache for a microprocessor

Country Status (1)

Country Link
CN (3) CN101558391B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105874437A (en) * 2013-12-31 2016-08-17 三星电子株式会社 Memory management method and apparatus
CN107870873A (en) * 2016-09-26 2018-04-03 三星电子株式会社 Based on the memory module by byte addressing flash memory and operate its method
CN111124955A (en) * 2018-10-31 2020-05-08 珠海格力电器股份有限公司 Cache control method and device and computer storage medium
CN112527390A (en) * 2019-08-28 2021-03-19 武汉杰开科技有限公司 Data acquisition method, microprocessor and device with storage function

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011209904A (en) * 2010-03-29 2011-10-20 Sony Corp Instruction fetch apparatus and processor
CN102567220A (en) * 2010-12-10 2012-07-11 中兴通讯股份有限公司 Cache access control method and Cache access control device
JP5863855B2 (en) * 2014-02-26 2016-02-17 ファナック株式会社 Programmable controller having instruction cache for processing branch instructions at high speed
JP6250447B2 (en) * 2014-03-20 2017-12-20 株式会社メガチップス Semiconductor device and instruction read control method
US9460016B2 (en) * 2014-06-16 2016-10-04 Analog Devices Global Hamilton Cache way prediction
DE102016211386A1 (en) * 2016-06-14 2017-12-14 Robert Bosch Gmbh Method for operating a computing unit
US11360704B2 (en) 2018-12-21 2022-06-14 Micron Technology, Inc. Multiplexed signal development in a memory device
US12013784B2 (en) * 2022-01-07 2024-06-18 Centaur Technology, Inc. Prefetch state cache (PSC)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0795820B1 (en) * 1993-01-21 2000-03-01 Advanced Micro Devices Inc. Combined prefetch buffer and instructions cache memory system and method for providing instructions to a central processing unit utilizing said system.
JP4045296B2 (en) * 2004-03-24 2008-02-13 松下電器産業株式会社 Cache memory and control method thereof
US7386679B2 (en) * 2004-04-15 2008-06-10 International Business Machines Corporation System, method and storage medium for memory management

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105874437A (en) * 2013-12-31 2016-08-17 三星电子株式会社 Memory management method and apparatus
US10430339B2 (en) 2013-12-31 2019-10-01 Samsung Electronics Co., Ltd. Memory management method and apparatus
CN107870873A (en) * 2016-09-26 2018-04-03 三星电子株式会社 Based on the memory module by byte addressing flash memory and operate its method
CN107870873B (en) * 2016-09-26 2023-08-08 三星电子株式会社 Memory module based on byte-addressed flash memory and method of operating the same
CN111124955A (en) * 2018-10-31 2020-05-08 珠海格力电器股份有限公司 Cache control method and device and computer storage medium
CN111124955B (en) * 2018-10-31 2023-09-08 珠海格力电器股份有限公司 Cache control method and equipment and computer storage medium
CN112527390A (en) * 2019-08-28 2021-03-19 武汉杰开科技有限公司 Data acquisition method, microprocessor and device with storage function
CN112527390B (en) * 2019-08-28 2024-03-12 武汉杰开科技有限公司 Data acquisition method, microprocessor and device with storage function

Also Published As

Publication number Publication date
CN101558393A (en) 2009-10-14
CN101558390B (en) 2014-06-18
CN101558393B (en) 2014-09-24
CN101558391B (en) 2013-10-16
CN101558390A (en) 2009-10-14

Similar Documents

Publication Publication Date Title
CN101558391B (en) Configurable cache for a microprocessor
KR101363585B1 (en) Configurable cache for a microprocessor
KR101441019B1 (en) Configurable cache for a microprocessor
CA1199420A (en) Hierarchical memory system including separate cache memories for storing data and instructions
US9286221B1 (en) Heterogeneous memory system
US7272703B2 (en) Program controlled embedded-DRAM-DSP architecture and methods
TWI442227B (en) Configurable cache for a microprocessor
US8725987B2 (en) Cache memory system including selectively accessible pre-fetch memory for pre-fetch of variable size data
US20120191916A1 (en) Optimizing tag forwarding in a two level cache system from level one to level two controllers for cache coherence protocol for direct memory access transfers
US5715427A (en) Semi-associative cache with MRU/LRU replacement
JPS6263350A (en) Information processor equipped with cache memory
JPH01108650A (en) Work station
RU2005107713A (en) Data processing system comprising sets of external and internal instructions
DE102013202995A1 (en) Energy saving in branch prediction
US20060212654A1 (en) Method and apparatus for intelligent instruction caching using application characteristics
US9262325B1 (en) Heterogeneous memory system
CN106569961A (en) Access address continuity-based cache module and access method thereof
US5835945A (en) Memory system with write buffer, prefetch and internal caches
US5953740A (en) Computer memory system having programmable operational characteristics based on characteristics of a central processor
Teman Lecture 6: The Memory Hierarchy

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant