CN1008839B - Storage management of microprocessing system - Google Patents
Storage management of microprocessing system
- Publication number
- CN1008839B (application CN85106711A)
- Authority
- CN
- China
- Prior art keywords
- page
- data
- memory
- address
- microprocessor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/14—Protection against unauthorised use of memory or access to memory
- G06F12/1416—Protection against unauthorised use of memory or access to memory by checking the object accessibility, e.g. type of access defined by the memory independently of subject rights
- G06F12/145—Protection against unauthorised use of memory or access to memory by checking the object accessibility, e.g. type of access defined by the memory independently of subject rights the protection being virtual, e.g. for virtual blocks or segments before a translation mechanism
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/10—Address translation
- G06F12/1027—Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Security & Cryptography (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
A microprocessor architecture with an address translation unit that provides two levels of cache memory. Segmentation registers and an associated segment descriptor table in main memory provide the first level of memory management, including individual attribute bits for protection, priority, and the like. A second, page-level cache memory, together with an associated page directory and page tables in main memory, provides a second level of management with independent protection at the page level.
Description
The present invention relates to the field of address translation units for memory management, and particularly to such units in microprocessor systems.
Many memory management arrangements are known. In some systems, a large address (virtual address) is translated into a smaller physical address. In others, a smaller address is used to access a larger memory space, for example through bank switching. The present invention relates to the former category, that is, where a large virtual address is used to access a limited physical memory.
It is also known to provide various protection mechanisms in memory management systems. For example, a system can prevent a user from writing into the operating system, or perhaps even prevent the user from reading the operating system. As will be seen, the present invention provides protection as part of a broader control scheme that assigns "attributes" to data at two distinct levels.
The closest prior art known to the applicant is described in U.S. Patent No. 4,442,484. This patent describes the memory management and protection mechanism embodied in the commercially available Intel 80286 microprocessor. That microprocessor includes segmentation descriptor registers containing segment base addresses, limit information, and attributes (for example, protection bits). Both the segment descriptor tables and the segment descriptor registers contain bits that determine various controls, for example privilege level, protection type, and so on. These controls are described in detail in U.S. Patent No. 4,442,484.
One problem with the 80286 is that the segment offset is limited to 64K bytes. It also requires that a segment occupy contiguous storage locations in physical memory, which may not always be convenient to maintain. As will be seen, one advantage of the present system is that the offset can be as large as the physical address space. The present system also provides compatibility with the prior-art segmentation mechanism of the 80286. Additional advantages, and the differences between the present invention and the prior-art system discussed in the above patent and its commercial realization (the Intel 80286 microprocessor), will be apparent from the detailed description of the invention.
An improvement to a microprocessor system comprising a microprocessor and a data memory is described. The microprocessor includes a segmentation mechanism for translating a virtual memory address into a second memory address (linear address) and for checking and controlling the attributes of data memory segments. The improvement of the present invention includes a page cache memory in the microprocessor, used to translate a first field of the linear address upon a "hit" or "match" condition. The data memory also stores page mapping data, specifically a page directory and page tables. If a "hit" does not occur in the page cache memory, the first field accesses the page directory and a page table. The physical base address of a page in memory is provided either from the output of the page cache memory or from the output of the page table. Another field of the linear address provides an offset within the page.
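For illustration only, the following C sketch models the translation flow just described: the page cache supplies the page base directly on a "hit", and on a miss the 20-bit page field walks the page directory and a page table in main memory. The even 10/10 split of the page field, the entry encodings, and all identifiers (`translate`, `cache_entry_t`, `toy_memory_t`, etc.) are assumptions introduced for this sketch, not taken from the patent.

```c
#include <stdint.h>
#include <stdbool.h>

#define PRESENT 0x1u
#define NCACHE  32           /* assumed page cache capacity */

/* One page cache entry: the 20-bit page field is the tag ("first field"). */
typedef struct {
    bool     valid;
    uint32_t page_field;     /* upper 20 bits of the linear address */
    uint32_t page_base;      /* physical base address of the page   */
} cache_entry_t;

/* Toy "main memory": a directory entry holds PRESENT in bit 0 and an index
 * into page_tables[] in bits 31..12; a table entry holds PRESENT in bit 0
 * and the physical page base in bits 31..12 (illustrative encoding).      */
typedef struct {
    uint32_t directory[1024];
    uint32_t page_tables[4][1024];
} toy_memory_t;

static bool translate(uint32_t linear, const cache_entry_t cache[NCACHE],
                      const toy_memory_t *mem, uint32_t *physical)
{
    uint32_t page_field = linear >> 12;          /* first field  (20 bits) */
    uint32_t offset     = linear & 0xFFFu;       /* offset field (12 bits) */

    for (int i = 0; i < NCACHE; i++)             /* page cache: a "hit"?   */
        if (cache[i].valid && cache[i].page_field == page_field) {
            *physical = cache[i].page_base | offset;
            return true;
        }

    /* Miss: walk the two-level paging table in main memory. */
    uint32_t dir_entry = mem->directory[page_field >> 10];
    if (!(dir_entry & PRESENT))                  /* page table not present */
        return false;

    uint32_t tbl_entry = mem->page_tables[dir_entry >> 12][page_field & 0x3FFu];
    if (!(tbl_entry & PRESENT))                  /* page not present       */
        return false;

    *physical = (tbl_entry & ~0xFFFu) | offset;  /* page base + offset     */
    return true;
}
```

In the hardware described below, the cache lookup is not a sequential scan: the group selection and tag comparison proceed in parallel in the CAM.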
Both the page cache memory and the page mapping data in the data memory store signals representing the attributes of the data in a given page. These attributes include read/write protection, an indication of whether the page has previously been written to, and other information. Importantly, page-level protection provides a second level of control over the data in memory, separate and distinct from the segment attributes.
Fig. 1 is a block diagram showing the overall architecture of the microprocessor in which the present invention is currently realized.
Fig. 2 is a block diagram illustrating the segmentation mechanism included in the microprocessor of Fig. 1.
Fig. 3 is a block diagram illustrating the page field mapping for a "hit" or "match" in the page cache memory.
Fig. 4 is a block diagram illustrating the page field mapping of Fig. 3 for a "miss" (no match) in the page cache memory. For this condition, the page directory and page table in main memory are used, as shown in Fig. 4.
Fig. 5 is a diagram used to illustrate the attributes stored in the page directory, page tables, and page cache memory.
Fig. 6 is a block diagram illustrating the organization of the content addressable memory (CAM) and the data memory included in the page cache memory.
Fig. 7 is a circuit schematic of a portion of the CAM of Fig. 6.
Fig. 8 is a circuit schematic of the logic circuit associated with the detectors of Fig. 6.
A microprocessor system, and in particular a memory management unit for the system, is described. In the following description, numerous specific details, such as specific numbers of bits, are set forth in order to provide a thorough understanding of the present invention. It will be obvious to one skilled in the art that the invention may be practiced without these specific details. In other instances, well-known structures are not shown in detail in order not to unnecessarily obscure the present invention.
In the presently preferred embodiment, the microprocessor system includes the microprocessor 10 of Fig. 1. This microprocessor is fabricated with complementary metal-oxide-semiconductor (CMOS) technology on a single silicon chip. Any of the many well-known CMOS processes may be used, and it will be apparent that the present invention can be realized with other technologies, for example n-channel, bipolar, or silicon-on-sapphire (SOS) processes.
Under some conditions, the memory management unit requires access to tables stored in main memory. A random access memory (RAM) 13 functions as the main memory of the system and is illustrated in Fig. 1. An ordinary RAM employing dynamic memory devices may be used.
As shown in Fig. 1, the microprocessor 10 has a 32-bit physical address, and the processor itself is a 32-bit machine. Other parts of a typical microprocessor system, such as drivers, a math coprocessor, and so on, are not shown in Fig. 1.
The memory management of the present invention employs both segmentation and paging. Segments are defined by a set of segment descriptor tables, which are separate from the page tables used to describe the page translation. The two mechanisms are completely separate and independent. A virtual address is translated into a physical address in two steps, using two different mapping mechanisms: a segmentation technique for the first translation step and a paging technique for the second. The paging translation can be disabled, producing a single-step translation with segmentation only, which is compatible with the 80286.
Segmentation (the first translation) translates a 48-bit virtual address into a 32-bit linear (intermediate) address. The 48-bit virtual address consists of a 16-bit segment selector and a 32-bit offset within the segment. The 16-bit selector identifies the segment and is used to access an entry in a segment descriptor table. The segment descriptor entry contains a base address for the segment, the length (limit) of the segment, and various attributes of the segment. In the translation step, the segment base is added to the 32-bit offset of the virtual address to obtain the 32-bit linear address. At the same time, the 32-bit offset of the virtual address is compared with the segment limit, and the mode of access is checked against the segment attributes. If the 32-bit offset exceeds the segment limit, or if the mode of access is not allowed by the segment attributes, a fault is generated and the addressing process is aborted.
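A minimal C sketch of this first translation step follows. It treats the 16-bit selector directly as an index into the descriptor table and checks only a single "writable" attribute; the actual descriptors carry a richer attribute set, and all identifiers here are illustrative assumptions.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

/* Simplified segment descriptor: base, limit, and one example attribute. */
typedef struct {
    uint32_t base;       /* start address of the segment               */
    uint32_t limit;      /* length of the segment in bytes             */
    bool     writable;   /* attribute checked against the access mode  */
} segment_descriptor_t;

/* Translate a 48-bit virtual address (16-bit selector + 32-bit offset)
 * into a 32-bit linear address; return false (a fault) on a limit or
 * attribute violation, as described above.                              */
static bool segment_translate(uint16_t selector, uint32_t offset, bool is_write,
                              const segment_descriptor_t *table, size_t n_entries,
                              uint32_t *linear)
{
    if (selector >= n_entries)          /* no such descriptor entry */
        return false;
    const segment_descriptor_t *d = &table[selector];

    if (offset > d->limit)              /* offset exceeds the limit */
        return false;
    if (is_write && !d->writable)       /* access mode not allowed  */
        return false;

    *linear = d->base + offset;         /* segment base + offset    */
    return true;
}
```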
Paging (the second translation), in a process described in detail below, translates the 32-bit linear address into a 32-bit physical address using a two-level paging table.
The two steps are completely independent. This allows a (large) segment to be formed from several pages, or a page to be formed from several (small) segments.
A segment can start on any boundary and can be of arbitrary length; it is not limited to starting on a page boundary or to a length that is an exact multiple of the page size. This allows each segment to describe a separately protected region of memory that begins at an arbitrary address and is of arbitrary length.
Segmentation can be used to gather several small segments, each with its own protection attributes and length, into a single page. In this case, segmentation provides the protection attributes, and paging provides a convenient means of mapping into physical memory a group of related units that must be protected separately.
Paging can be used to divide a very large segment into many small units for physical memory management. This permits a single identifier (segment selector) and a single descriptor (segment descriptor) to be used for a separately protected unit of memory, without requiring many page descriptors. Within a segment, paging provides an extra level of mapping which allows a large segment to be mapped into physical memory without requiring the individual pages to be contiguous. In fact, paging allows a large segment to be mapped so that only a few of its pages reside in physical memory at any one time, with the remainder of the segment mapped onto disk. Paging also supports the definition of substructure within a large segment, for example protecting some pages of a large segment while other pages may be written.
Segmentation provides a very extensive protection model that operates on the "natural" units used by the programmer: arbitrary-length portions of the linearly addressed memory. Paging provides the most convenient method of managing physical memory, including management of both main system memory and backing disk storage. The combination of the two methods in the present invention provides a very powerful and flexible memory protection model.
In Fig. 1, the microprocessor includes a bus interface unit 14. The bus unit includes buffers to permit the transmission of 32-bit address signals and the reception and transmission of 32-bit data. Within the microprocessor, the unit 14 communicates over the internal bus 19. The bus unit includes a prefetch unit for fetching instructions from the RAM 13, and a prefetch queue which communicates with the instruction unit of the instruction decode unit 16. Queued instructions are processed in the execution unit 18 (an arithmetic and logic unit with a 32-bit register file). This unit and the decode unit communicate over the internal bus 19.
The present invention centers on the address translation unit 20. This unit provides two functions: one is associated with the segment descriptor registers, the other with a page descriptor cache memory. The segment registers are, for the most part, known in the prior art; even so, they are described in more detail in conjunction with Fig. 2. The page cache memory and its interaction with the page directory and page tables stored in main memory 13 are discussed in conjunction with Figs. 3 through 7; these elements form the basis of the present invention.
The segmentation unit of Fig. 1 receives a virtual address from the execution unit 18 and accesses the appropriate register segmentation mapping information. The registers contain the segment base address; this address, together with the offset from the virtual address, is coupled on lines 23 to the page unit.
Fig. 2 illustrates the access to tables in main memory that occurs when the mapping information for a new segment is loaded into the segmentation registers. The segment field is an index into the segment descriptor table in main memory 13. The contents of the table provide a base address and, in addition, attributes of the data in the segment. The offset is compared with the segment limit in the comparator 27; the output of this comparator provides a fault signal. The adder 26 of the microprocessor combines the base and the offset to provide a "real" address. This address can be used as a physical address, or it can be used by the paging unit of the microprocessor. This provides compatibility with programs written for a prior-art microprocessor (the Intel 80286). For the 80286, the physical address space is 24 bits.
The segment attributes, including the different privilege levels, resemble those of the descriptors described in detail in U.S. Patent No. 4,442,484.
The segmentation mechanism known in the prior art is indicated in Fig. 2 by the dotted line 28; that is, the structure to the left of the dotted line is prior art.
The page field mapping block 30 comprises the page unit of Fig. 1 and its interaction with the page directory and page tables stored in main memory, and is shown in Figs. 3 through 7.
Although the segmentation mechanism uses shadow registers in the presently preferred embodiment, it could also be used with a cache memory, as is done for the paging mechanism.
In Fig. 3, the page descriptor cache memory of the page unit 22 of Fig. 1 is shown within the dotted line 22a. This memory comprises two arrays: a content addressable memory (CAM) 34 and a page data (base) memory 35. Both memories are realized with static memory cells. The organization of memories 34 and 35 is described in conjunction with Fig. 6. The particular circuit used for the CAM 34 has a unique masking capability, which is described in conjunction with Figs. 7 and 8.
The linear address from the segmentation unit 21 is coupled to the page unit 22 of Fig. 1. As shown in Fig. 3, this linear address comprises two fields: a 20-bit page information field and a 12-bit offset field. In addition, four page attribute bits are provided by microcode. The page information field and the attribute bits are coupled to the CAM 34.
The attributes stored in the page directory, the page tables, and the page cache memory (see Fig. 5) are:
1. "Dirty". This bit is stored in the page tables and in the CAM (not in the page directory). When a page is written to, the processor sets this bit in the page table.
2. " by access ".This position only is stored in page guides and the table (not in CAM) and be to be used to refer to one page by access.In case one page is by access, this position is to be changed in storer by processor.Different with " dirty " position, this position indicates whether that one page reads by access because of being used to write or be used to.
3.U/S。This state indicates whether that the content of page or leaf is that user and supervisory routine are can access (binary one), or only is supervisory routine (Binary Zero).
4.R/W。This read/write safeguard bit must be a binary one so that allow a user class program to write this page or leaf.
5. " appearance ".This indicates whether that in page table page table relevant in actual storage occurs.This indicates whether that in page guides page table relevant in actual storage occurs.
6. " effectively " this position only is stored among the CAM, is whether to be used to refer to that the content of CAM is effective.This position is changed to one first state when initialization, change then when an effective CAM word is loaded into.
Five from page guides and table are coupled to control logic circuit 75 so that appropriate fault-signal is provided in microprocessor.
The user/supervisor bits from the page directory and the page table are ANDed at gate 46 to provide the U/S bit stored in the CAM 34 of Fig. 3. Similarly, the read/write bits from the page directory and the page table are ANDed at gate 47 to provide the R/W bit stored in the CAM. The "dirty" bit stored in the CAM comes from the page table. These gates are part of the control logic 75 of Fig. 4.
The attribute " automatically " that is stored among the CAM is verified because they be handle as the part of address and to four bit comparisons from microcode.If for example linear address indication: one " user program " write with the phase be the one page that occurs in R/W=0, even an effective page base is to be stored among the CAM, also produce a fault condition.
From the U/S position of page guides and table " with " logic guarantees that " the worst situation " is stored in the cache memory.Similarly, the R/W position " with " logic provides the worst situation for cache memory.
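The worst-case combination can be expressed compactly. The following sketch derives a cache entry's U/S and R/W bits by ANDing the directory-entry and table-entry bits, mirroring gates 46 and 47, and takes the "dirty" bit from the page table alone, as stated above. The bit positions and identifiers are assumptions for illustration only.

```c
#include <stdint.h>
#include <stdbool.h>

/* Assumed bit positions within a page directory or page table entry. */
#define BIT_PRESENT  (1u << 0)
#define BIT_RW       (1u << 1)   /* 1 = user-level writes permitted     */
#define BIT_US       (1u << 2)   /* 1 = user and supervisor may access  */
#define BIT_ACCESSED (1u << 5)   /* shown for context, not used below   */
#define BIT_DIRTY    (1u << 6)

typedef struct {
    bool us;      /* worst case: AND of directory and table U/S bits */
    bool rw;      /* worst case: AND of directory and table R/W bits */
    bool dirty;   /* taken from the page table entry only            */
    bool valid;
} cam_attributes_t;

/* Combine directory- and table-entry attributes for loading into the CAM. */
static cam_attributes_t combine_attributes(uint32_t dir_entry, uint32_t tbl_entry)
{
    cam_attributes_t a;
    a.us    = (dir_entry & BIT_US) && (tbl_entry & BIT_US);  /* gate 46    */
    a.rw    = (dir_entry & BIT_RW) && (tbl_entry & BIT_RW);  /* gate 47    */
    a.dirty = (tbl_entry & BIT_DIRTY) != 0;                  /* table only */
    a.valid = true;
    return a;
}
```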
The CAM 34 illustrated in Fig. 6 is organized as eight groups of four words each. Twenty-one bits (17 address bits and 4 attribute bits) are used to seek a match in this array. The comparator circuits of the four memory words in each group are connected to a detector. For example, the comparator circuits of the four words of group 1 are connected to detector 53; similarly, the comparator circuits of the four words of groups 2 through 8 are connected to their respective detectors. The comparator circuits detect whether a word in the group matches the 21-bit input, and the detector determines which word in the group matched. Each detector includes "hardwired" logic which permits one of the detectors to be selected according to the state of 3 bits of the 20-bit page information field that are connected to each detector. (Note that the other 17 bits of the page information field are connected to the CAM array.)
For ease of explanation, eight detectors are implied in Fig. 6. In the present implementation only a single detector is used, and the three select bits are used to connect one group of four to this detector. The detector itself is shown in Fig. 8.
The data storage portion of the cache memory comprises four arrays, shown as arrays 35a through 35d. For each group of the CAM, one data word is assigned to each of the four arrays. For example, the data word (base address) selected by a "hit" on word 1 of group 1 is in array 35a, the data word selected by a "hit" on word 2 of group 1 is in array 35b, and so on. The three bits used to select a detector are also used to select a word within each array. Thus, one word is simultaneously selected from each of the four arrays. The final selection of a word from the arrays is made through the multiplexer 55. This multiplexer is controlled by the four comparator circuits through the detector.
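Functionally, the organization just described behaves like an eight-set, four-way cache whose tag includes the attribute bits. The sketch below models that lookup; the use of the low three bits of the page field as the group index, the tag packing, and all identifiers are assumptions for illustration.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_SETS 8    /* groups, selected by 3 bits of the page field */
#define NUM_WAYS 4    /* CAM words (and data arrays) per group        */

typedef struct {
    bool     valid;
    uint32_t tag;        /* 21 bits: 17 address bits + 4 attribute bits */
    uint32_t page_base;  /* word held in the corresponding data array   */
} tlb_entry_t;

typedef struct {
    tlb_entry_t sets[NUM_SETS][NUM_WAYS];
} page_cache_t;

/* page_field: the 20-bit field of the linear address.
 * attributes: the 4 attribute bits supplied by microcode.
 * Three bits of the page field pick the group (the "hardwired" detector
 * selection); the remaining 17 bits plus the attributes form the tag.   */
static bool cam_lookup(const page_cache_t *pc, uint32_t page_field,
                       uint32_t attributes, uint32_t *page_base)
{
    uint32_t set = page_field & 0x7u;                              /* 3 select bits */
    uint32_t tag = ((page_field >> 3) << 4) | (attributes & 0xFu); /* 17 + 4 bits   */

    for (int way = 0; way < NUM_WAYS; way++) {                     /* 4 comparators */
        const tlb_entry_t *e = &pc->sets[set][way];
        if (e->valid && e->tag == tag) {                           /* a "hit" line  */
            *page_base = e->page_base;                             /* via mux 55    */
            return true;
        }
    }
    return false;                                                  /* miss          */
}
```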
When the cache memory is accessed, a relatively slow comparison process begins using the 21 bits. The other three bits at once select one group of four and prepare its detector to sense a voltage drop on the comparator circuits. As will be discussed, all of the comparator (row) lines are precharged; a selected ("hit") line remains charged while the lines of non-matching words discharge.
A signal generated during precharging (circuits 56 and 57) forces all of the bit lines (both the bit and complement lines) low. This prevents the comparators from discharging a "hit" line before the comparison begins.
Note that the comparators test for a "binary one" condition and, in effect, ignore a "binary zero" condition. That is, if, for example, the gate of transistor 64 is high (line 59 high), transistors 63 and 64 control the comparison process; similarly, if bit line 60 is high, transistors 61 and 62 control the comparison process. This feature of the comparators allows cells to be ignored. Thus, when a word is coupled to the CAM, certain bits can be masked from the comparison by making both the bit line and its complement low; this, in effect, makes the contents of the cell match the condition on the bit lines. The VUDW logic 57 makes use of this feature.
Microcode signals coupled to the logic 57 cause the bit and complement lines of selected attribute bits to be driven low as a function of the microcode bits. This causes the associated attribute to be ignored. This feature is used, for example, to ignore the U/S bit in supervisor mode; that is, supervisor mode can access user data. Similarly, the read/write bit can be ignored during reads or when operating in supervisor mode. The "dirty" bit is also ignored during reads. (This feature is not used for the valid bit.)
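The masking rules in this paragraph can be summarized as a don't-care mask over the attribute bits; ignoring a bit in the comparison is the software analogue of forcing both of its bit lines low. Bit positions and identifiers below are assumed for illustration.

```c
#include <stdint.h>
#include <stdbool.h>

/* Assumed positions of the four attribute bits within the CAM tag. */
#define ATTR_VALID (1u << 0)
#define ATTR_US    (1u << 1)
#define ATTR_DIRTY (1u << 2)
#define ATTR_RW    (1u << 3)

/* Build the set of attribute bits that participate in the comparison,
 * following the rules in the text: supervisor mode ignores U/S; reads
 * (and supervisor mode) ignore R/W; reads ignore "dirty"; the valid
 * bit is never masked.                                                 */
static uint32_t attribute_compare_mask(bool supervisor, bool is_write)
{
    uint32_t ignore = 0;
    if (supervisor)              ignore |= ATTR_US;
    if (!is_write || supervisor) ignore |= ATTR_RW;
    if (!is_write)               ignore |= ATTR_DIRTY;
    return ~ignore;              /* bits kept in the comparison */
}

/* Compare stored and requested attributes, ignoring the masked bits. */
static bool attributes_match(uint32_t stored, uint32_t requested,
                             bool supervisor, bool is_write)
{
    uint32_t mask = attribute_compare_mask(supervisor, is_write);
    return (stored & mask) == (requested & mask);
}
```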
When the attribute bits are stored in main memory, they can be accessed and tested, and logic can be used to control access (for example, according to the 1 or 0 state of the U/S bit); with the cache memory, however, separate logic circuitry is not needed. In effect, by forcibly driving both the bit and complement lines low, a match is allowed (or a fault prevented) even when the attribute bit patterns do not match, and this provides the additional logic.
As shown in Fig. 8, each detector of Fig. 6 comprises a plurality of NOR gates, such as gates 81, 82, 83 and 84. Three of the "hit" lines from the selected group of CAM lines are connected to gate 81; they are shown as lines A, B and C. A different combination of the lines is connected to each of the other NOR gates; for example, NOR gate 84 receives "hit" lines D, A and B. The output of each NOR gate is an input to a NAND gate, for example gate 86. A "hit" line provides another input to each NAND gate. This line is the one of the four lines A, B, C and D that is not an input to the associated NOR gate; it is also the line of the group entry to be selected. For example, gate 86 selects the entry associated with "hit" line D; thus, in the case of NOR gate 81, "hit" line D is connected to NAND gate 86. Similarly, for NAND gate 90, "hit" line C and the output of gate 84 are the inputs. A read enable signal is also connected to each NAND gate so that its output is not active during writes. The outputs of the NAND gates, for example line 87, are used to control the multiplexer 55 of Fig. 6. In practice, the signals from the NAND gates, for example the signal on line 87, control the multiplexer through p-channel transistors. For purposes of illustration, an additional inverter 88 is shown along with its output line 89.
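The gate arrangement described above amounts to a one-hot check: a word is selected only when its own "hit" line is high and the other three in the group are low. The following truth-level sketch of a single detector output (active low, as in Fig. 8) is illustrative only; the function name and the read-enable parameter are assumptions.

```c
#include <stdbool.h>

/* One detector output for "hit" line D, given the four hit lines A..D of
 * the selected group: the NOR of the other three lines (gate 81) is NANDed
 * with line D and a read enable (gate 86). The output goes low only when D
 * is the single active hit line during a read, which lets it drive the
 * precharged multiplexer select line.                                      */
static bool detector_output_for_d(bool a, bool b, bool c, bool d, bool read_enable)
{
    bool nor_abc = !(a || b || c);           /* NOR gate 81  */
    return !(nor_abc && d && read_enable);   /* NAND gate 86 */
}
```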
An advantage of this detector is that it allows precharged lines to be used in the multiplexer 55.
Alternatively, a static device could be used, but this would require considerable power. In the arrangement of Fig. 8, the output of the inverter associated with the one "hit" line that retains its state drops in voltage. When this occurs, only the voltage of a single output line drops, allowing the multiplexer to select the correct word.
Thus, a unique address translation unit has been described. It employs two levels of cache memory, one level for segmentation and one level for paging. Independent control of data attributes (for example, protection) is provided at each level.
Claims (25)
1. A microprocessor system comprising:
a microprocessor having a segmentation means for translating a virtual address into a second memory address;
a data memory coupled to the microprocessor, the data memory including storage for page mapping data;
characterized in that the improvement to the microprocessor system comprises:
a page cache memory, integral with or coupled to said microprocessor and separate from said segmentation means, coupled to receive a first field of said second memory address and to compare it with the contents of said page cache memory so as to provide a second field when a hit condition occurs;
the first field of said second memory address being coupled to said data memory so as to select a third field from said page mapping data when said hit condition does not occur in said page cache memory;
said microprocessor system including a circuit for combining one of said second and third fields with an offset field from said second memory address so as to provide a physical address for said data memory.
2. The microprocessor system of claim 1, characterized in that said page mapping data includes information on attributes of each memory page.
3. The microprocessor system of claim 2, characterized in that the storage for said page mapping data comprises at least one page directory and at least one page table.
4. The microprocessor system of claim 3, characterized in that each of said page directory and said page table stores attributes for each of said memory pages.
5. The microprocessor system of claim 4, characterized in that at least some of said attributes stored in said page directory and said page table are logically combined and stored in said page cache memory.
6. The microprocessor system of claim 5, characterized in that said microprocessor provides a page directory base for said page directory.
7. The microprocessor system of claim 6, characterized in that a first portion of said first field, together with said page directory base, provides an index to a storage location in said page directory.
8. The microprocessor system of claim 7, characterized in that each of said storage locations in said page directory stores a page table base, and in that a second portion of said first field provides an index into said page table in said data memory to a page table storage location.
9. The microprocessor system of claim 8, characterized in that each of said storage locations in said page table provides a base for a page in said data memory.
10. The microprocessor system of claim 2, characterized in that said page cache memory comprises a content addressable memory (CAM) and a page base memory, the output of said CAM selecting a page base for said data memory from said page base memory.
11. The microprocessor system of claim 10, characterized in that said CAM stores attributes for each data memory page.
12. The microprocessor system of claim 11, characterized in that said CAM includes means for selectively masking at least one of said attributes during the comparison.
13. The microprocessor system of claim 1, characterized in that said segmentation means comprises:
segment descriptor registers coupled to the microprocessor for providing a segment base; and
in that said data memory includes a descriptor table which is accessed by a segment field of said first address.
14. The microprocessor system of claim 13, characterized in that said page mapping data includes information on attributes of each memory page.
15. The microprocessor system of claim 14, characterized in that said storage for said page mapping data comprises a page directory and a page table.
16. The microprocessor system of claim 15, characterized in that said page directory and page table store said attributes for each of said memory pages.
17. The microprocessor system of claim 16, characterized in that at least some of said attributes stored in said page directory and said page table are logically combined and stored in the page cache memory.
18. An address translation unit, forming part of a microprocessor operating with a data memory, characterized in that the unit comprises:
segment descriptor storage coupled to receive a virtual address and to provide a segment base,
said segment base representing the start address of a segment of arbitrary size;
said microprocessor providing an address for the data memory so as to permit addressing of a segment descriptor table in said data memory, said segment descriptor table providing said segment base address;
said microprocessor providing a second memory address using said segment base address and at least a portion of said virtual address;
a page cache memory coupled to receive a first field of said second memory address and to compare it with the contents of said page cache memory so as to provide a second field when a hit condition occurs;
said microprocessor providing said first field to a page data table in said data memory so that said second field is provided when said hit condition does not occur,
said second field providing a page base for said data memory.
19. The unit of claim 18, characterized in that said segment descriptor storage stores segment data attributes and said page cache memory stores page data attributes.
20. A content addressable memory (CAM), characterized by comprising:
a first signal line having a first signal;
a memory cell coupled to the first signal line, the memory cell storing a bit of information;
a comparator coupled to the memory cell for comparing the bit of information stored in the memory cell with the binary state on the first signal line;
a first hit line coupled to the comparator, the first hit line indicating whether the bit of information stored in the memory cell matches the first signal line;
precharge means coupled to the first hit line for placing the first hit line in a first binary state, the first binary state indicating a match;
detection means for detecting said first binary state of the hit line, the detection means having a NOR gate coupled to a second hit line, said detection circuit further comprising a NAND gate coupled to the first hit line and to the output of said NOR gate;
multiplexing means coupled to the output of said NAND gate for controlling the selection of a data word.
21. The content addressable memory of claim 20, characterized in that said precharge means charges the first hit line to a binary high state before the comparator compares the bit of information stored in the memory cell with the binary state on the first signal line; and in that, if the comparator detects a mismatch, the comparator discharges said first hit line to a binary low state.
22. The content addressable memory of claim 21, characterized in that said precharge means comprises a transistor coupled to the first hit line.
23. The content addressable memory of claim 20, characterized by further comprising a second signal line having a second signal, said second signal being the complement of said first signal, said second signal line being coupled to said memory cell.
24. The content addressable memory of claim 23, characterized in that said comparator comprises a first pair of transistors and a second pair of transistors, the first pair of transistors comprising:
a) a first transistor coupled to the first line for receiving the first signal;
b) a second transistor coupled to a first node of said memory cell for receiving the binary state of the first node;
and the second pair of transistors comprising:
c) a third transistor coupled to the second line for receiving said second signal;
d) a fourth transistor coupled to a second node of said memory cell for receiving the binary state of said second node.
25. The content addressable memory of claim 24, characterized in that said comparator is disabled when the first signal is held at a binary low state and the second signal is held at a binary low state.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US74438985A | 1985-06-13 | 1985-06-13 | |
US744,389 | 1985-06-13 | ||
US SN 06/744,389 | 1985-06-13 ||
Publications (2)
Publication Number | Publication Date |
---|---|
CN85106711A CN85106711A (en) | 1987-02-04 |
CN1008839B true CN1008839B (en) | 1990-07-18 |
Family
ID=24992533
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN85106711A Expired CN1008839B (en) | 1985-06-13 | 1985-09-06 | Storage management of microprocessing system |
Country Status (8)
Country | Link |
---|---|
JP (1) | JPH0622000B2 (en) |
KR (1) | KR900005897B1 (en) |
CN (1) | CN1008839B (en) |
DE (1) | DE3618163C2 (en) |
FR (1) | FR2583540B1 (en) |
GB (2) | GB2176918B (en) |
HK (1) | HK53590A (en) |
SG (1) | SG34090G (en) |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1988007721A1 (en) * | 1987-04-02 | 1988-10-06 | Unisys Corporation | Associative address translator for computer memory systems |
US5226039A (en) * | 1987-12-22 | 1993-07-06 | Kendall Square Research Corporation | Packet routing switch |
US5055999A (en) * | 1987-12-22 | 1991-10-08 | Kendall Square Research Corporation | Multiprocessor digital data processing system |
US5341483A (en) * | 1987-12-22 | 1994-08-23 | Kendall Square Research Corporation | Dynamic hierarchial associative memory |
US5251308A (en) * | 1987-12-22 | 1993-10-05 | Kendall Square Research Corporation | Shared memory multiprocessor with data hiding and post-store |
US5761413A (en) | 1987-12-22 | 1998-06-02 | Sun Microsystems, Inc. | Fault containment system for multiprocessor with shared memory |
US5313647A (en) * | 1991-09-20 | 1994-05-17 | Kendall Square Research Corporation | Digital data processor with improved checkpointing and forking |
CA2078312A1 (en) | 1991-09-20 | 1993-03-21 | Mark A. Kaufman | Digital data processor with improved paging |
CA2078315A1 (en) * | 1991-09-20 | 1993-03-21 | Christopher L. Reeve | Parallel processing apparatus and method for utilizing tiling |
US5895489A (en) * | 1991-10-16 | 1999-04-20 | Intel Corporation | Memory management system including an inclusion bit for maintaining cache coherency |
GB2260629B (en) * | 1991-10-16 | 1995-07-26 | Intel Corp | A segment descriptor cache for a microprocessor |
CN1068687C (en) * | 1993-01-20 | 2001-07-18 | 联华电子股份有限公司 | Memory Dynamic Allocation Method for Recording Multi-segment Voices |
EP0613090A1 (en) * | 1993-02-26 | 1994-08-31 | Siemens Nixdorf Informationssysteme Aktiengesellschaft | Method for checking the admissibility of direct memory accesses in a data processing systems |
US5548746A (en) * | 1993-11-12 | 1996-08-20 | International Business Machines Corporation | Non-contiguous mapping of I/O addresses to use page protection of a process |
US5590297A (en) * | 1994-01-04 | 1996-12-31 | Intel Corporation | Address generation unit with segmented addresses in a mircroprocessor |
US6622211B2 (en) * | 2001-08-15 | 2003-09-16 | Ip-First, L.L.C. | Virtual set cache that redirects store data to correct virtual set to avoid virtual set store miss penalty |
KR100406924B1 (en) * | 2001-10-12 | 2003-11-21 | 삼성전자주식회사 | Content addressable memory cell |
US7689485B2 (en) | 2002-08-10 | 2010-03-30 | Cisco Technology, Inc. | Generating accounting data based on access control list entries |
US7171539B2 (en) | 2002-11-18 | 2007-01-30 | Arm Limited | Apparatus and method for controlling access to a memory |
GB2396930B (en) | 2002-11-18 | 2005-09-07 | Advanced Risc Mach Ltd | Apparatus and method for managing access to a memory |
GB2396034B (en) | 2002-11-18 | 2006-03-08 | Advanced Risc Mach Ltd | Technique for accessing memory in a data processing apparatus |
AU2003278350A1 (en) | 2002-11-18 | 2004-06-15 | Arm Limited | Secure memory for protecting against malicious programs |
US7149862B2 (en) | 2002-11-18 | 2006-12-12 | Arm Limited | Access control in a data processing apparatus |
US7900017B2 (en) * | 2002-12-27 | 2011-03-01 | Intel Corporation | Mechanism for remapping post virtual machine memory pages |
WO2005017754A1 (en) * | 2003-07-29 | 2005-02-24 | Cisco Technology, Inc. | Force no-hit indications for cam entries based on policy maps |
US20060090034A1 (en) * | 2004-10-22 | 2006-04-27 | Fujitsu Limited | System and method for providing a way memoization in a processing environment |
GB2448523B (en) * | 2007-04-19 | 2009-06-17 | Transitive Ltd | Apparatus and method for handling exception signals in a computing system |
US8799620B2 (en) * | 2007-06-01 | 2014-08-05 | Intel Corporation | Linear to physical address translation with support for page attributes |
KR101671494B1 (en) | 2010-10-08 | 2016-11-02 | 삼성전자주식회사 | Multi Processor based on shared virtual memory and Method for generating address translation table |
FR3065826B1 (en) * | 2017-04-28 | 2024-03-15 | Patrick Pirim | AUTOMATED METHOD AND ASSOCIATED DEVICE CAPABLE OF STORING, RECALLING AND, IN A NON-VOLATILE MANNER, ASSOCIATIONS OF MESSAGES VERSUS LABELS AND VICE VERSA, WITH MAXIMUM LIKELIHOOD |
KR102686380B1 (en) * | 2018-12-20 | 2024-07-19 | 에스케이하이닉스 주식회사 | Memory device, Memory system including the memory device and Method of operating the memory device |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA784373A (en) * | 1963-04-01 | 1968-04-30 | W. Bremer John | Content addressed memory system |
GB1281387A (en) * | 1969-11-22 | 1972-07-12 | Ibm | Associative store |
US3761902A (en) * | 1971-12-30 | 1973-09-25 | Ibm | Functional memory using multi-state associative cells |
GB1457423A (en) * | 1973-01-17 | 1976-12-01 | Nat Res Dev | Associative memories |
GB1543736A (en) * | 1976-06-21 | 1979-04-04 | Nat Res Dev | Associative processors |
US4376297A (en) * | 1978-04-10 | 1983-03-08 | Signetics Corporation | Virtual memory addressing device |
GB1595740A (en) * | 1978-05-25 | 1981-08-19 | Fujitsu Ltd | Data processing apparatus |
US4377855A (en) * | 1980-11-06 | 1983-03-22 | National Semiconductor Corporation | Content-addressable memory |
GB2127994B (en) * | 1982-09-29 | 1987-01-21 | Apple Computer | Memory management unit for digital computer |
US4442482A (en) * | 1982-09-30 | 1984-04-10 | Venus Scientific Inc. | Dual output H.V. rectifier power supply driven by common transformer winding |
US4638426A (en) * | 1982-12-30 | 1987-01-20 | International Business Machines Corporation | Virtual memory address translation mechanism with controlled data persistence |
-
1985
- 1985-08-08 GB GB8519991A patent/GB2176918B/en not_active Expired
- 1985-08-30 JP JP60189994A patent/JPH0622000B2/en not_active Expired - Lifetime
- 1985-08-30 FR FR858512931A patent/FR2583540B1/en not_active Expired - Lifetime
- 1985-09-05 KR KR1019850006490A patent/KR900005897B1/en not_active IP Right Cessation
- 1985-09-06 CN CN85106711A patent/CN1008839B/en not_active Expired
-
1986
- 1986-05-23 GB GB8612679A patent/GB2176920B/en not_active Expired
- 1986-05-30 DE DE3618163A patent/DE3618163C2/en not_active Expired - Lifetime
-
1990
- 1990-05-15 SG SG340/90A patent/SG34090G/en unknown
- 1990-07-19 HK HK535/90A patent/HK53590A/en not_active IP Right Cessation
Also Published As
Publication number | Publication date |
---|---|
GB2176918A (en) | 1987-01-07 |
HK53590A (en) | 1990-07-27 |
CN85106711A (en) | 1987-02-04 |
KR870003427A (en) | 1987-04-17 |
DE3618163C2 (en) | 1995-04-27 |
FR2583540A1 (en) | 1986-12-19 |
GB2176920A (en) | 1987-01-07 |
JPH0622000B2 (en) | 1994-03-23 |
KR900005897B1 (en) | 1990-08-13 |
SG34090G (en) | 1990-08-03 |
GB2176918B (en) | 1989-11-01 |
FR2583540B1 (en) | 1991-09-06 |
DE3618163A1 (en) | 1986-12-18 |
GB2176920B (en) | 1989-11-22 |
JPS61286946A (en) | 1986-12-17 |
GB8612679D0 (en) | 1986-07-02 |
GB8519991D0 (en) | 1985-09-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN1008839B (en) | Storage management of microprocessing system | |
US10445244B2 (en) | Method, system, and apparatus for page sizing extension | |
CN1118027C (en) | Memory access protection | |
KR920005280B1 (en) | High speed cache system | |
CN101341547B (en) | High speed cam lookup using stored encoded key | |
US6493812B1 (en) | Apparatus and method for virtual address aliasing and multiple page size support in a computer system having a prevalidated cache | |
US5125085A (en) | Least recently used replacement level generating apparatus and method | |
EP0508577A1 (en) | Address translation mechanism | |
CN1622060A (en) | Lazy flushing of translation lookaside buffers | |
JPH0769868B2 (en) | High-speed address translator | |
JPH08101797A (en) | Translation lookaside buffer | |
US7343469B1 (en) | Remapping I/O device addresses into high memory using GART | |
US7007135B2 (en) | Multi-level cache system with simplified miss/replacement control | |
US5732405A (en) | Method and apparatus for performing a cache operation in a data processing system | |
US6212616B1 (en) | Even/odd cache directory mechanism | |
JP2004530962A (en) | Cache memory and addressing method | |
US20030225992A1 (en) | Method and system for compression of address tags in memory structures | |
JPS623357A (en) | Tlb control system | |
US9015447B2 (en) | Memory system comprising translation lookaside buffer and translation information buffer and related method of operation | |
AU602952B2 (en) | Cache memory control system | |
US6763431B2 (en) | Cache memory system having block replacement function | |
US6216198B1 (en) | Cache memory accessible for continuous data without tag array indexing | |
US6493792B1 (en) | Mechanism for broadside reads of CAM structures | |
EP0376253B1 (en) | Information processing apparatus with cache memory | |
JP2728434B2 (en) | Address translation device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C13 | Decision | ||
GR02 | Examined patent application | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CX01 | Expiry of patent term |