US9158672B1 - Dynamic deterministic address translation for shuffled memory spaces - Google Patents
- Publication number
- US9158672B1 (application US13/644,550)
- Authority
- US
- United States
- Prior art keywords
- memory
- data
- group
- storage section
- address
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
- G06F12/0246—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1016—Performance improvement
- G06F2212/1024—Latency reduction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/20—Employing a main memory using a specific memory technology
- G06F2212/202—Non-volatile memory
- G06F2212/2022—Flash memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7201—Logical to physical mapping or translation of blocks or pages
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7211—Wear leveling
Definitions
- Reorganizing logical memory space is deliberately performed for a variety of reasons, including security and wear leveling. For example, occasionally changing a physical location where data is stored can make it harder for an attacker to either misappropriate the data or trigger targeted location wear.
- Wear leveling is used for memory devices with endurance issues to more evenly distribute use-based wear by periodically relocating frequently accessed data to other physical locations within memory; while wear leveling is most commonly associated with flash memory, most memory forms including dynamic random access memory (DRAM) can ultimately suffer use-based wear, and thus can also benefit from wear leveling.
- a CPU or other “master” in these circumstances still needs to read, write and otherwise manage the data that has been shuffled or reorganized; an address translation mechanism is used to make that happen.
- typically, a look-up table stored in processor cache or local memory is retrieved and used to determine a logical-to-physical mapping corresponding to a desired logical address.
- the master (e.g., a memory controller) first retrieves this table and uses it to perform address translation, that is, to retrieve the true "physical" address from the look-up table.
- once the memory controller has "translated" the logical address to a physical address, it then issues a read, write, refresh or other command to the memory device, corrected to the true memory location.
- FIG. 1 shows a block diagram illustrating a memory address translation system using one or more “substitute” memory sections and the periodic shuffling of data.
- FIG. 2A illustrates a method for managing memory in a manner that facilitates fast address translation.
- the method of FIG. 2A is optionally used with the address translation system of FIG. 1 .
- FIG. 2B contains a flow diagram for fast address translation.
- dashed-line boxes illustrate optional features.
- the presented teachings permit address translation to be optionally implemented entirely in hardware.
- FIG. 3 presents a hybrid flow chart and memory allocation diagram illustrating steps of a single wear leveling iteration. At its right side, the figure provides an example showing how the mapping of logical memory space to sections of physical memory changes with each step.
- FIG. 4A depicts a memory allocation diagram showing how data blocks in logical memory space are scrambled in terms of their relative storage locations as repeated wear leveling/data shuffling iterations are performed.
- FIG. 4B provides a table 441 illustrating stride address space; FIG. 4B is used to explain aspects of a detailed wear leveling algorithm. In this example, the stride value S is 3.
- FIG. 4C provides a table 443, similar to the one seen in FIG. 4B, but which shows stride address (SA) as a function of a provided data block address (A), again for a stride value of 3.
- FIG. 5 contains a flow diagram of one embodiment of a method for performing dynamic address translation.
- FIG. 6A contains a memory allocation diagram similar to FIG. 4A , but where the stride value S has been increased to 5.
- FIG. 6B provides a table 641 , similar to the one seen in FIG. 4B , but based on a stride value of 5.
- FIG. 6C provides a table 643 , similar to the one seen in FIG. 4C , but where the stride value is 2.
- FIG. 7 is a memory allocation diagram similar to FIG. 4A , but where the number N of substitute blocks (highlighted in gray) is 2.
- FIG. 8A shows a system diagram, used to introduce the notion of "reserved blocks" (e.g., blocks Ua that are neither active nor substitute blocks for current wear leveling/data shuffling steps) and the consequent ability to vary the wear leveling/data shuffling algorithm by occasionally changing the number of substitute blocks without changing active memory space.
- FIG. 8B shows a system diagram similar to FIG. 8A, but where the pool of potential substitute blocks has been increased (i.e., increased to Nb from Na) at the expense of the number of reserved blocks (Ua decreased to Ub).
- FIG. 9A shows a memory storage scheme where a memory controller uses an address translation table to perform address translation.
- FIG. 9B shows a memory storage scheme where a memory controller uses instructional logic and registers to perform fast address translation.
- FIG. 9C shows a memory storage scheme where a memory controller uses hardware logic to perform address translation; this logic includes a wear leveling or data scrambling circuit (“Cir.”) to manage data movement, and logical-to-physical translation circuitry (“LA-2-PA”) to provide for fast address translation.
- FIG. 9D shows a memory storage scheme similar to FIG. 9C , but where each memory device comprises an instance of the hardware logic.
- FIG. 9E shows a memory storage scheme similar to FIG. 9C , but where a separate chip comprises the hardware logic.
- FIG. 10 illustrates a circuit diagram for a hardware-based fast address translation mechanism.
- FIG. 11 shows a method corresponding to circuit blocks from FIG. 10 .
- FIG. 12 shows a block diagram of a memory system that uses at least one RRAM memory device.
- FIG. 13 shows a method where wear leveling is integrated with refresh operations.
- This disclosure provides a memory address translation mechanism and shuffling strategy specially adapted for wear leveling (or other reorganization of logical memory space).
- This disclosure also provides a memory device, controller or system having a fast address translation mechanism; that is, instead of relying on a lookup table, this fast address translation mechanism enables dynamic address computation based on just a few stored parameters.
- This strategy and mechanism can be employed in concert with one another, that is, fast address translation can be combined with the mentioned-memory storage shuffling strategy.
- this combination permits a memory device, controller or system to perform wear leveling in a manner completely transparent to a memory controller (or operating system). Such a memory technique can be used with large memory spaces without creating unacceptable overhead associated with large address translation tables.
- one embodiment provides a wear leveling or data shuffling method.
- the method defines a first portion of physical memory space as an “active” memory space and a second portion as a “substitute” memory space.
- the active space is that part of memory used to store host data.
- the substitute memory space provides one or more stand-in storage locations that will receive data that is to be moved in an upcoming wear leveling or data shuffling operation. With each such operation, a data block in active memory space is chosen based on the logical address of the data block last-moved during a previous wear leveling/data shuffling iteration and based on a stride value.
- This chosen data block is then copied into a substitute memory section, and the substitute memory section then is used to serve memory commands related to this data (e.g., read commands) as part of the active memory space.
- the “donor” memory section that was previously used to store this data block then is treated as part of the substitute space (and is later used to receive data copied from somewhere else in a future wear leveling/data shuffling operation).
- This operation is then iteratively repeated, e.g., based on command, passage of time intervals, usage statistics, or other metrics, with the logical address of the next “donor location” (for data to be moved during the next iteration) being incremented based on the stride value, to progressively rotate selection of the logical data blocks being moved, so as to precess through each data block in the active memory space.
- the stride value is coprime with the size of the active space in terms of size of the data blocks being moved and the associated “unit” of physical memory (i.e., “section” of physical memory) that will receive data.
- a “block” of logical data is sized in several embodiments presented below to be equal to a single row of data in a memory device.
- a block could instead be selected to be an integer number of rows, a fractional number of rows, or indeed, any size.
- the stride value (in terms of number of blocks) and aggregate number of active sections of memory are coprime, these values have no common divisor other than one.
- for example, suppose that a memory system stores five blocks of data in active memory space (i.e., block-sized sections d0-d4) in five physical sections of memory, p0-p4, uses a sixth section of physical memory (equal in size to one logical block) as substitute memory space, and uses a stride value of 2.
- the memory space is initially arranged with data blocks d0 to d4 stored respectively in physical sections p0 to p4, and with physical section p5 initially assigned as the substitute block.
- in a first wear leveling iteration, data block d0 is copied from physical section p0 to the substitute physical section p5, with physical section p0 following the move then regarded as substitute space (e.g., free space—note that it is not necessary to erase or otherwise obfuscate data stored in that section).
- the data blocks in the active memory space are then ordered as blocks d1, d2, d3, d4, d0 (stored in respective physical sections p1-p5), while physical section p0 is redundant and is generally not used to service commands addressed to logical block d0.
- physical section p0 is free for use as a substitute memory section for the next wear leveling iteration (e.g., in schemes that use a single substitute block, physical section p0 will be the destination for the next wear-leveling-based move of data, while if multiple substitute sections are used, physical section p0 can await its turn in a rotation of multiple substitute memory sections).
- the logical address for the data block that will be copied in the next iteration is calculated by taking the logical address of the data block that was previously moved and adding the stride value (i.e., 0+2); the next wear leveling operation is thus directed to logical address d2, followed by d4, followed by d1, followed by d3, and again returning to logical block d0.
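The rotation just described can be captured in a few lines. The following Python sketch (function and variable names are ours, not the patent's) reproduces the d0-d4 example above:

```python
from math import gcd

def simulate_shuffle(M=5, N=1, S=2, iterations=1):
    """Minimal sketch of the wear leveling rotation described above.
    phys[i] holds the logical block stored in physical section i, or
    None if section i currently belongs to the substitute space."""
    assert gcd(S, M) == 1, "stride must be coprime with active space size"
    phys = list(range(M)) + [None] * N   # natural mapping; last N sections spare
    spares = list(range(M, M + N))       # rotation of substitute sections
    rr = 0                               # logical address of the next donor block
    for _ in range(iterations):
        donor = phys.index(rr)           # physical section holding block rr
        target = spares.pop(0)           # "oldest" substitute section
        phys[target] = rr                # copy the chosen block into it
        phys[donor] = None               # donor section joins substitute space
        spares.append(donor)
        rr = (rr + S) % M                # advance donor address by the stride
    return phys

# After the first iteration, blocks d1,d2,d3,d4,d0 occupy p1-p5 and p0 is free:
print(simulate_shuffle(iterations=1))    # [None, 1, 2, 3, 4, 0]
```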
- each block or a “section” of memory can be a page of memory (e.g., a row of memory serviced by a single wordline); M sections of active memory can number in the thousands (e.g., thousands of rows), and N reserved memory sections can number one or more. Note that S is typically greater than 1.
- One advantage obtained by at least some wear leveling or data shuffling embodiments presented by this disclosure is that if parameters are correctly chosen (e.g., stride value, number of active and substitute memory sections, and so forth), an address translation table can be optionally dispensed with, and fast address translation can be used for dynamic, real-time address translation, based on just a few tracked parameters.
- automatic address translation is performed using a desired logical address provided as an input and only five parameters (L, P, S, M and M+N); here, L is the logical address of the data block moved during the last wear leveling or data shuffling operation, P is the physical address of the section of memory that received the last-moved data block, S is the stride value, M is the number of active memory sections, and M+N is the total number of active and substitute sections.
- Address translation thus does not require a massive translation table, but rather, depending on embodiment, can be dynamically performed by a memory controller, a memory device, software, or by a dedicated circuit, such as a buffer circuit located between a memory controller and a controlled memory device.
- fast address translation can be equation-based.
- a still more detailed embodiment will be provided below that performs this translation using only simple shift and add operations, i.e., with an efficient hardware design. This hardware design can be made effective to perform address translation with reduced latency.
- FIG. 1 shows a system 101 that uses both the memory storage method introduced above and a fast address translation mechanism, implemented using either hardware or instructional logic.
- a physical memory space 111 embodied as one or more memory devices includes M active memory sections 115 to service logical memory space and N substitute memory sections 117 .
- N is at least one, such that the system 101 includes at least one section of substitute memory space, with additional sections in this space being indicated in dashed lines to indicate their optional nature.
- N is 1, but embodiments will later be discussed where N is greater than one and where N is periodically changed, e.g., on a random or other basis.
- the system of FIG. 1 performs two basic operations, periodic wear leveling and run-time data operations. These two operations do not necessarily have to be mutually exclusive, i.e., in one implementation, wear leveling can be performed based on data access, and in a second implementation, wear leveling can be performed as part of a periodic refresh operation (see, e.g., FIG. 13 ). These latter implementations can be especially advantageous for newer memory forms such as resistive random access memory (RRAM) and magnetic random access memory (MRAM).
- logic 121 periodically moves or copies a block of data from a “donor” memory section in active memory space 115 to a recipient section in substitute memory space 117 , with the donor and recipient sections then effectively trading roles; that is, the donor section is rotated to the substitute memory space, and the recipient section receiving data is placed in active service as part of the active memory space.
- This movement is described as a single wear leveling iteration or operation. Again, it should be understood that the application of these techniques is not limited to wear leveling. The effects of these actions are exemplified as follows.
- a logical “block” of data can be any convenient size and will often be selected based on practical considerations based on a given architecture; in discussion below, it will be assumed to correspond to a single row (or equivalently, a “page” of data). It should be assumed for many embodiments below that the logical block size corresponds to a page of memory (e.g., one row of a flash memory integrated circuit); contemplated, alternate implementations use different sizes, e.g., a programmable or erasable block in some memory architectures, a non-integer number of rows, or less than one row, e.g., a column of data.
- a stride value “S” is also expressed in these same units, e.g., logical blocks.
- while an operation that places the valid copy of a logical block at a new physical section location is referred to below as a data "movement," it is understood that most systems will simply replicate data at the new location, and not disturb the copy at the old location until it is time to overwrite the old location with a new logical block.
- an appropriate time to erase the old block may be selected immediately after the copy is verified, just prior to when new data is to be written to the old block, or at any convenient intermediate time.
- the logic 121 can be rooted either primarily in software, firmware or hardware, depending on desired implementation. In the context of FIG. 1 , this logic relies upon control parameters 127 to move data blocks, such that data is incrementally rotated in a manner that equalizes overall wear to the memory device (in embodiments where wear leveling is a goal). As mentioned above, even wear will result using this memory storage scheme if the stride value S and the size of the active memory space M are chosen to be coprime. Note that the effect of having S>1 will be to scramble memory space.
- a memory access command arriving on a signal path 129 may seek data at a logical address that is no longer at the same physical memory address as the last time that data was accessed, due to periodic wear leveling/data shuffling.
- Logic 121 invokes a logical-to-physical address translation function 123 to convert the “logical” address provided by the memory access command to a “physical” section address to obtain the correct data.
- the system 101 tracks the movement of data and uses stored parameters to quickly locate the correct physical section that currently stores the desired data.
- Conventional address translation techniques can also be used for this translation function (e.g., using an on-circuit or off-circuit address translation table, stored in cache or other memory and updated with each data movement operation).
- a fast address translation technique dynamically performs logical-to-physical address translation, that is, in real time, using parameters 127 .
- equation-based address translation using these stored parameters is performed by instructional logic (see, e.g., FIG. 9B ), and in another embodiment, the translation is performed solely by hardware (see, e.g., FIGS. 9C-E ).
- FIG. 1 by dashed line 133 , all of these functions are optionally co-resident on a single device, such as a single memory integrated circuit.
- one device provided by this disclosure is a memory integrated circuit device with on-board address translation logic; this device receives a logical address (from the device's perspective) and without supervision provides the requested data from the correct physical address, notwithstanding the use of dynamic wear leveling/data shuffling.
- Such an integrated circuit can be in the form of a DRAM, flash, RRAM, or other type of integrated circuit memory device.
- FIG. 2A provides a functional flow diagram 201 that shows how wear leveling can be performed according to the basic methods introduced above.
- Dashed line boxes 207 denote the optional presence of registers to track wear leveling parameters, for example, so that fast address translation can be employed.
- data is periodically stored and updated in an active memory space, for example, during run-time system operation.
- This active memory space stores the logical space of multiple data blocks introduced earlier, e.g., M logical pages of data in M physical memory sections. At some point, for example when a threshold number of data operations have been performed subsequent to a previous wear leveling/data shuffling iteration, it is determined that wear leveling/data shuffling again needs to be performed.
- a data block and its corresponding “active” physical memory section are selected as a donor location to undergo wear leveling.
- a target location, i.e., a section to be used as a substitute for the donor section (generally of equivalent size), is then selected, per reference numeral 205.
- Data is then copied between these physical sections of memory (step 209 ). This may be optionally done as part of a row copy feature in the memory device, where data is read from one row of data to a page of sense amplifiers, then written back to another row accessible by the same sense amplifiers.
- an address translation table, optional registers ( 207 ), or another mechanism used to track wear leveling data is then updated. This update permits data indexed by logical address to be found after the move. The process then awaits a new wear leveling iteration, as indicated by a return loop 213 .
- registers 207 can be used to store parameters for fast address translation instead of a conventional table lookup.
- these labels refer to one specific embodiment where specific parameters are tracked and updated to facilitate equation-based, dynamic, physical address computation.
- the use of other parameters and registers and the use of alternate algorithms/equations are also possible. In the context of FIG. 2A, these specific parameters are stride value (S, in units of blocks), last-moved data block logical address (L), the next available substitute section physical location (X), rotation register (RR, i.e., the data block/logical address of the next wear leveling/data shuffling target), and the last recipient section physical address (P, i.e., the previous substitute section).
- FIG. 2B shows a block diagram 251 illustrating how address translation occurs in such a system.
- when a memory access command (e.g., read, write, refresh, etc.) is received, the command identifies a specific row of data in logical address space that is to be sensed, written to, refreshed, and so forth. Because the data in question may no longer be at the physical address originally associated with this provided logical address, the provided logical address is transformed or translated to the correct address in physical memory space at which the data can presently be found.
- the physical address space can reside within a single memory device or multiple memory devices.
- hardware or instructional logic extracts the logical address (LA) associated with the command, for example, a row number and column number.
- the logic then dynamically computes the associated physical address (PA), e.g., row address, where the desired data is presently stored; computation is optionally performed in a memory controller, memory module, or on a DRAM, flash, RRAM or other integrated circuit memory device. In the depicted embodiment, the computation is optionally performed directly by hardware (optional block 257 ).
- This hardware can be located in a memory device, memory controller, or in some other IC (as denoted by numerals 259 , 261 and 263 ).
- the hardware uses registers to store S, L, X, RR, P and/or other parameters as appropriate, as denoted by dashed line box 265 . These parameters are then used in an equation-based address calculation, per optional function block 267 . Finally, when the address translation is complete, the command is serviced by the memory device using the returned physical address (PA), wherein resides the desired data.
- FIG. 3 is a block diagram 301 showing, at the left, steps for performing a single iteration of wear leveling and, at the right, a set of illustrative blocks showing how logical and physical memory space is affected (e.g., scrambled) by wear leveling.
- illustrative block 315, at the right side of the figure, depicts the initial memory organization. There are M sections of active memory, where M can be any number; in this example, M represents an arbitrary design choice of 8. Illustrative block 315 shows 9 total sections of memory, with one of the nine sections (represented by shading) representing the N sections of substitute memory, i.e., N is 1 in this example.
- Illustrative block 315 also depicts selection of 3 as the stride value S.
- M and S have been selected to be coprime, meaning that there is no common divisor other than 1. This objective is satisfied in this example, because the only integer that divides into both 8 and 3 is indeed 1.
- S can advantageously be made to be less than M, i.e., S ⁇ M.
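As a quick sanity check (a sketch, not from the patent), one can verify the coprime condition and confirm that a stride of 3 visits all 8 logical blocks exactly once per cycle:

```python
from math import gcd

M, S = 8, 3
assert gcd(M, S) == 1                        # no common divisor other than 1
donors = [(k * S) % M for k in range(M)]     # donor blocks over one full pass
assert sorted(donors) == list(range(M))      # every block selected exactly once
print(donors)                                # [0, 3, 6, 1, 4, 7, 2, 5]
```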
- initially, the logical address for data corresponds to the physical address, that is, the arrangement represents a "natural mapping" of logical to physical memory space. For example, if a memory command sought data at logical address II, that data would have been found at physical storage section 2. At this point in time, however, it is determined that the system is to iteratively perform wear leveling or scrambling of data.
- wear leveling can be triggered in connection with “run-time” memory commands, e.g., any time a write, read or other command is directed to the storage section identified by the start position. Wear leveling can also be triggered by a write data count reaching or exceeding a threshold (e.g., y write operations have been received by memory or a specific segment of memory since the last wear leveling operation).
- Another possible trigger for wear leveling is satisfaction of a timing constraint, for example, any time a timer reaches a certain count since previous wear leveling or in response to a calendared time (e.g., “January 1”).
- Many other triggers can be used, including any trigger used for conventional wear leveling processes, including ad hoc initiation by a user or an operating system.
- the system first selects a “start position” or target data block for wear leveling/data shuffling in the active memory space, per method step 305 .
- the start position is selected based on logical address zero (“O”).
- the start position identifies a logical data block for the next ensuing wear leveling iteration and its associated physical location.
- RR is used in the figure as an acronym for the term “rotation register,” indicating that a hardware register is optionally used to store a logical or corresponding physical address of the donor section for the next ensuing wear leveling iteration.
- the system also determines the corresponding physical address location (i.e., location 0) where the identified logical data block may be found.
- the system then proceeds to select a substitute section of memory that will receive the selected data, per numeral 307 .
- since N is 1, there is only one possible choice; in embodiments where N is greater than 1, a round robin scheme can optionally be used to select the "oldest" substitute section (that is, the section that has been used to service active memory space operations least recently).
- the data found at physical section 0 (logical address “O”) is moved to the substitute section identified by physical address “8,” as depicted by illustrative block 319 .
- the donor section 0 is reassigned to substitute memory space, such that it is now depicted using shading in illustrative block 321 .
- logical memory addresses no longer necessarily correspond to physical address space; that is, if it is desired to access data at logical memory address O, that data is no longer found at physical memory section 0.
- the start position (e.g., held in a rotation register) is then advanced for the next wear leveling iteration. This step is referenced by numeral 311 and is depicted by illustrative block 323.
- where registers are used to track other parameters for equation-based address translation, such other parameters are advantageously updated at this point in time.
- parameters L and P are updated to indicate the “last” logical block that was rotated and the physical address of its corresponding substitute section in memory respectively; in updating such parameters using the examples just given, the values 0 (representing O) and 8 would be stored.
- parameter X, representing the physical address of the section that will receive data in the next wear leveling iteration, is then updated (e.g., using the new substitute section or another section of substitute memory space if N>1).
- Illustrative block 323 depicts both logical and physical memory space in a state that will persist until the performance of the next wear leveling iteration ( 313 ).
- memory accesses such as write, read, refresh and other memory operations as appropriate will be performed as part of run-time system operation.
- physical section 0 represents the substitute memory space, meaning that generally, no write or read command will be directed to this section until placed back in active use. This section will form the target location where data from the start position (III) will be moved in the next wear leveling iteration, and its physical address is stored as parameter X.
- FIG. 4A shows a sequence 401 of illustrative blocks representing a complete cycle of C (e.g., 72 ) wear leveling iterations.
- a first illustrative block 403 representing memory organization is identical to illustrative block 317 ; each block following illustrative block 403 represents a progression of a single iteration of wear leveling.
- following a subsequent wear leveling iteration, the substitute memory space is seen to consist of physical memory section 3, which has just had its contents copied to the previous substitute section. Subsequent to another wear leveling iteration, RR will be equal to I (logical address VI plus the stride value 3, modulo M), as indicated by illustrative block 409.
- a modulo of M is used because (1) it is not necessary to perform wear leveling for unused, substitute space, (2) memory space is treated in these embodiments as circular, and (3) it is desired, in view of the stride value S, to have the system step through each data block in some progression until all blocks have been processed.
- as seen in illustrative block 409, logical space following the fourth wear leveling iteration is fairly scrambled, bearing a seemingly indiscernible relationship to the natural mapping of logical-to-physical memory space seen in illustrative block 403. That is, the stored sequence of data blocks is seen to be III, I, II, VI, IV, V, VII and O (3, 1, 2, 6, 4, 5, 7, 0).
- the progression in starting position following each wear leveling iteration can involve a different distance in physical space—in illustrative block 409 for example, the starting position (e.g., physical address "1," stored as parameter "X") has advanced four sections (or rows) in physical memory, relative to the donor section's physical address from the previous wear leveling iteration.
- the progression in logical memory space is constant and equal to the stride value.
- the progression in physical memory space in terms of distance between donor sections in successive wear leveling iterations is not necessarily constant.
- FIGS. 4B and 4C present respective tables 441 and 443 associated with stride address space; these tables are used by an equation-based address translation mechanism to dynamically compute a physical address.
- the stride value S can optionally be changed.
- the stride value S in some embodiments is changeable at any time if an address translation table is used to help map logical memory to physical memory.
- Stride address space is a virtual space where addresses are mapped by iterations of wear leveling, as seen in FIG. 4C (for stride value of 3).
- the algorithm is somewhat more complex for the example provided by FIGS. 4A-C .
- using the example of FIGS. 4A-C, assume that it is desired, at a point in time between the wear leveling iterations represented by blocks 417 and 419, to read data from logical address IV.
- data at logical address II was moved in the immediately previous wear leveling iteration from physical memory section 2 to physical memory section 7 (compare blocks 415 and 417 ). Since logical address IV is now sought by a memory command, application of the second formula above would indicate that data for logical address IV in fact resides at physical memory section 1. That is, the value SL-SLA is calculated by mapping L and LA respectively to 18 and 12 (see FIG. 4C ), yielding a difference of 6.
- equational logic can be implemented in a number of ways, for example, in software or firmware, via a circuit equivalent, or a combination of these things. If implemented in software, some or all of the functions can be embodied as machine-readable-media that stores instructions that, when executed by a machine such as a computer, cause that machine to mathematically transform a provided address in accordance with the functions described above.
- FIG. 5 provides a block diagram 501 showing a method of address translation based on these principles.
- the method extracts the accompanying logical address LA, and it retrieves stored parameters L and P, representing the last wear leveling iteration.
- the system then computes the stride physical address (SPA) corresponding to the desired logical address LA; once again, this is based on the principle in this embodiment that distance between any two logical addresses in stride address space at any time is equal to distance between two corresponding sections (stride physical addresses) associated with that data, provided S and N have not changed relative to the initial, natural mapping. This computation is indicated by function block 511 . Finally, with the stride physical address SPA computed, the system performs a reverse lookup function as indicated by numeral 513 .
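The FIG. 5 flow can be prototyped with small stride-address tables standing in for tables such as 441/443. This is a Python sketch under the assumption that N = 1; the helper names and table-building logic are ours, inferred from the rotation described above:

```python
def stride_tables(M, S):
    """Logical and physical stride-address tables (compare FIGS. 4B/4C),
    assuming N = 1, i.e., M+1 physical sections in rotation."""
    sa = {(k * S) % M: k * S for k in range(M)}        # logical block -> SA
    sp = {M: 0}                                        # section M receives first
    sp.update({(k * S) % M: (k + 1) * S for k in range(M)})
    return sa, sp

def fast_translate(LA, L, P, M, S):
    """Distance between two addresses in logical stride space equals the
    distance between the corresponding sections in physical stride space."""
    sa, sp = stride_tables(M, S)
    SLD = (sa[L] - sa[LA]) % (M * S)                   # distance in stride space
    SPA = (sp[P] - SLD) % ((M + 1) * S)                # step back from SP
    return next(p for p, v in sp.items() if v == SPA)  # reverse lookup

# State between the iterations shown in blocks 417 and 419 (M=8, S=3):
# block II was just moved to physical section 7 (L=2, P=7); data for
# logical address IV is then found at physical section 1.
assert fast_translate(LA=4, L=2, P=7, M=8, S=3) == 1
```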
- FIG. 6A is a memory allocation diagram similar to FIG. 4A . However, in the example provided by FIG. 6A , the stride value S has been changed to 5. This stride value and associated partitioning of memory into 9 physical sections are once again arbitrary and are used to simplify discussion.
- FIG. 6A presents a sequence 601 of illustrative blocks 603 - 623 .
- because the stride value S is different, the effects of scrambling the memory space are different from those depicted in FIG. 4A.
- the mapping of logical memory space to physical memory space depicted in illustrative block 417 is III, IV, VI, VII, V, I, II and O (3, 4, 6, 7, 5, 1, 2, 0); however, a corresponding block 617 in FIG. 6A shows a different arrangement, reflecting the different stride value.
- registers (such as optional register 125 from FIG. 1) store several alternative, predetermined stride values, with one of the alternative values being re-selected every C repetitions of wear leveling. Selection can be performed using a round robin or random selection process, with a selected stride value being indicated by an associated change in a pointer, or a loading of a selected stride value into a proxy register.
- the described methodology (a) facilitates a fast address translation mechanism, obviating the need for an address translation (lookup) table, and (b) provides an address translation mechanism that can more readily scale with almost any size or density of memory.
- this algorithm can be easily integrated into hardware, meaning that a memory device can itself be charged with wear leveling or data scrambling, in a manner transparent to a memory controller.
- a schematic will be presented below in connection with FIG. 10 for hardware that performs translation using simple binary math, that is, without the requirement of function calls or complex circuitry that imposes significant latency. For example, based on that circuit, it should be possible to provide hardware translation with less than 2000 transistors, imposing a delay of about 1 nanosecond, using 45 nm process technology.
- FIG. 7 provides a sequence 701 of illustrative blocks, similar to FIG. 4A .
- FIG. 7 uses a different size of substitute memory space. That is to say, in FIG. 7 , N is equal to 2. While there are different ways in which such a memory storage scheme can be implemented, for the embodiment discussed in reference to FIG. 7 , it should be assumed that each of N target sections in the substitute space at any point in time are used in a round robin fashion.
- the two substitute sections are each used in alternating wear leveling iterations, with the “oldest” substitute section being the recipient of copied (moved) data in the ensuing wear leveling/data shuffling iteration; in an iteration represented by the difference between illustrative blocks 705 and 703 , substitute memory section “A” is used as the recipient of copied data, while in an iteration represented by the difference between illustrative blocks 707 and 705 , substitute memory section “B” is used as the recipient for copied data.
- illustrative block 723 represents the 40th shuffling or wear leveling iteration (since 40 is the smallest integer which is a common multiple of these values, i.e., of M=8 and M+N=10).
- varying N or S provides significant optional security enhancements for every kC repetitions of wear leveling/data shuffling (where k is any integer); note that as N is changed, the value of C also potentially changes. For example, it was earlier demonstrated in connection with the examples discussed above that a different value for S (or for N) can drastically affect how logical space appears after a given number of shuffling or wear leveling iterations. Changing N or both S and N also affects the periodicity with which logical memory space assumes its original organization. For example, in the examples presented above, the change in N from 1 (FIG. 4A) to 2 (FIG. 7) changed the cycle length C from 72 to 40 iterations.
- the substitute memory space can be varied by retaining a pool of reserved sections of memory (U sections); in such an embodiment, increases in the number N of substitute sections are drawn from these unused sections at every C wear leveling cycles or a multiple of C cycles.
- changing N potentially changes the value of C, since C is the smallest integer that is a common multiple of M and M+N.
- FIG. 8A shows a memory system 801 having a memory controller 803 and a memory device 805 .
- the memory controller 803 uses a command bus 807 (“CA”) to transmit commands and associated addresses to the memory device, with write or read data being exchanged via a data bus 809 (“DQ”).
- Either bus includes one or more pathways, with communications being single-ended or differential. The pathways are optionally physical, conductive pathways.
- the memory device includes an array of storage cells 811 , with the array organized into an active memory space 815 , a substitute memory space 817 and an optional reserved space 819 , denoted using dashed lines.
- Each space 815 / 817 / 819 includes a respective number of sections; for example, the active space consists of M sections, the substitute space consists of N sections and the reserved space consists of U sections.
- Section size is arbitrary depending on the desired implementation; in the discussed embodiments, it should be assumed that each section is a single row of memory, but each section could be of a different size, e.g., 2 rows, 10 rows, or a fraction of a row, e.g., a column of 64 memory cells.
- numerals in common with FIG. 8A represent the same elements, i.e., FIG. 8B represents the same embodiment as FIG. 8A, but with changed subscripts (a to b) to reflect variation in the number of reserved and substitute sections.
- the memory controller 803 uses logic 821 to perform wear leveling and to manage command snooping (so as to substitute correct, physical addresses in commands sent outward on internal bus 831 for logical addresses arriving via internal bus 829 , into memory access commands).
- the operation of the logic 821 is governed by control parameters 827 , such as for example, whether wear leveling is to be performed and, if so, effectuating triggers for each wear leveling iteration.
- the memory controller can rely upon a fast hardware address translation mechanism 823 and optional registers 825 to store a limited set of parameters used to track the performance of wear leveling/data shuffling and also to facilitate fast, hardware-based, dynamic address translation.
- FIGS. 9A-E illustrate various system configurations that rely on address translation.
- FIG. 9A shows a system 901 having a memory controller 903 and one or more memory devices 905 that are bidirectionally coupled to the controller via a signaling bus; the signaling bus may itself include address and/or data paths, bidirectional or unidirectional, and associated control signal paths such as timing paths, power rails and/or mask lines, as appropriate to the design.
- the signaling path includes printed circuit board traces or cabling compatible with currently prevalent standards, such as SATA 3.0 or PCI express 3.0, including both bidirectional and unidirectional signaling paths.
- Memory commands are generally triggered by a master operating system (“OS”) which communicates with the memory controller via signaling path 909 .
- the memory controller includes circuitry 911, which processes commands from the memory controller to any attached memory device, and an address translation table 913, which stores a current mapping of logical to physical addresses.
- This table can be stored within cache in the memory controller, but more typically is stored off chip due to its size for large capacity memory.
- the embodiment depicted in FIG. 9A implements data scrambling or wear leveling as introduced earlier, and performs address translation using conventional techniques such as a continually updated address translation (lookup) table.
- This table stores a 1-1 mapping of each logical address to a physical address and can be very large when memory capacity is large, as mentioned earlier.
- instructional logic 935 is still used to perform address translation.
- a memory controller 923 relies on a small set of parameters to dynamically compute physical address, that is, at the time a memory access command is presented.
- the address translation can therefore be equation-based, wholly obviating the need to store and update a lookup table.
- the registers 933 store the parameters mentioned earlier (e.g., “S,” “L,” “X,” “RR,” “P” and/or other parameters as appropriate), and instructional logic is operative to control circuitry 931 so as to perform calculations (described in detail below) to compute PA from LA.
- the system also includes memory device 925 , which communicates with the memory controller via a communications bus 927 , and a path 929 for communicating with an operating system or other master.
- FIG. 9C shows another system 941 .
- This system also includes a memory controller 943 and a plurality of memory devices 945 , coupled to the memory controller via a system bus 947 .
- each of the memory controller and memory devices are optionally integrated circuit devices, that is flip-chip or package-mounted dies, and the bus is optionally a set of printed circuit traces that provide a conductive path coupling these elements.
- the memory controller 943 includes circuitry 951 that manages data and processes commands from the CPU or operating system (per numeral 949).
- the memory controller 943 includes on board fast hardware translation mechanism 953 (i.e., hardware logic) to perform logical-to-physical address translation (“LA-2-PA”).
- circuitry 951 processes commands from the operating system or a master CPU, and retransmits those commands as appropriate to the memory devices 945; the circuitry 951 detects a logical address associated with those commands, and it uses the fast hardware translation mechanism 953 to compute a correct physical address from that logical address. Circuitry 951 then substitutes the true (physical) address into the commands before transmitting those commands to the memory devices 945.
- the memory controller 943 also includes registers that store individual parameters as mentioned above, i.e., that enable the fast hardware translation mechanism 953 to dynamically compute physical address. That is, this hardware translation mechanism 953 computes physical address with relatively constant, short latency, without requiring support of an address translation table. This enhances the ability of a system to pipeline memory commands.
- FIG. 9D shows yet another system 961 .
- This system also uses a memory controller 963 , one or more memory devices 965 , and a connecting bus 967 .
- the address translation circuitry is seen to reside within each individual memory device. That is to say, one or more of the individual memory devices 965 each have circuitry to process incoming commands and hardware translation to dynamically compute a physical address from a logical address.
- the system bus 967 and operating system link 969 are similar to those described earlier and so are not repeated in detail here.
- system 961 can perform wear leveling in a manner entirely transparent to the memory controller.
- circuitry 971 is given sole control over wear leveling, using and updating the parameters; the memory controller simply issues commands and has a perspective that data is always stored in accordance with logical address space.
- each of the described systems provides improved security against use-based attacks, that is, attacks made via software to cause memory to fail through heavy use of a specific physical memory location. Any such attack, because it arises in software, would be subject to logical to physical memory translation, and so, even the attacks would be scrambled in terms of affected physical location.
- FIG. 9E provides another variant on these principles.
- FIG. 9E provides a system 981 that includes a memory controller 983 , one or more memory devices 985 and a system bus 987 .
- the circuitry 991 and fast hardware translation mechanism 993 are located in a different integrated circuit 995 , apart from either memory controller or memory device integrated circuits.
- part or all of the system bus is essentially split into two sections 987 and 988 , with the integrated circuit 995 snooping commands from bus section 988 and substituting physical address for logical address.
- the memory devices 985 , bus section 987 and integrated circuit 995 are optionally mounted to a common board, for example, forming a memory module.
- the bus section 988 couples the memory controller with a physical interface slot or connector, with edge connectors or a mating physical interface for the board coupling bus section 988 with integrated circuit 995 and bus section 987 .
- the integrated circuit 995 can optionally be instantiated as multiple integrated circuits or be combined with other functions; for example, some or all the parameters used for fast hardware address translation could be obtained from or stored in a serial presence detect register (e.g., in a memory module embodiment). Other combinations are also possible.
- FIG. 9E carries with it many of the same advantages described above for systems 901, 921, 941 and 961; as should be apparent, however, having circuitry 991 and fast hardware translation logic removed from the memory devices 985 permits optional movement of blocks of data between different memory integrated circuits. This is to say, each of physical and logical space can be managed as a single virtual device. The same is also true for the embodiments of FIGS. 9A-9C.
- FIGS. 9A-E provide a possible implementation for the memory storage scheme introduced by FIGS. 1-8 .
- FIGS. 10-11 provide a specific hardware-based address translation circuit.
- Other hardware implementations are also possible, but where M is selected to be a Mersenne number (i.e., M is selected to be a power of two, minus one) and the stride value S is selected to be a power of two, the hardware can be designed to provide translation using simple shift and addition operations. Such a design facilitates a relatively quick, low power address translation circuit, and will be discussed with reference to FIGS. 10-11.
- square brackets designate bits in a binary implementation, e.g., the left-most half of the formula above indicates that the low r bits of the m-bit logical address LA are padded with m trailing zeros to obtain {LA[r−1:0], m′b0}, a value m+r bits in length.
- the address LA with its r least significant bits set to zero, i.e., {LA[m−1:r], r′b0}, is then added to this to obtain the value SLA.
- the first circuit block 1003 passes this address into a simple binary addition circuit 1015 via two paths.
- a first path 1017 represents the m+r bit value {LA[m−1:r], r′b0}, and a second path 1019 represents the m+r bit value {LA[r−1:0], m′b0}.
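For intuition, the first circuit block's scatter operation can be mimicked in software. This is a sketch (function name is ours) assuming M = 2^m − 1 and S = 2^r as stated above:

```python
def scatter(LA, m, r):
    """Shift-and-add mapping of a logical address into stride space:
    {LA[r-1:0], m'b0} + {LA[m-1:r], r'b0}."""
    low = LA & ((1 << r) - 1)        # LA[r-1:0]
    return (low << m) + (LA - low)   # (LA - low) is LA[m-1:r] followed by r zeros

# With m=3 (M=7) and r=1 (S=2), blocks rotate in the order 0,2,4,6,1,3,5;
# block 1 is the fourth block moved, so its stride address is 4*S = 8.
assert scatter(1, m=3, r=1) == 8
```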
- the second circuit block 1005 receives this output 1021 and calculates distance in stride space by subtracting it from SL, designated by numeral 1029 .
- the value SL can be pre-calculated from a prior wear leveling operation or dynamically calculated using simple binary logic; again, this is an advantage of selecting the stride value S to be a power of 2.
- the value SL is received as a first input 1029, and the value M·S+SL as an input 1037; the subtraction circuits subtract SLA (respective inputs 1031/1039) from each of these values.
- a carry signal 1033 from the output of subtraction circuit 1023 causes the multiplexer to select between outputs 1035 or 1041 to provide a modulo complement, i.e., SL+M·S−SLA.
- a second circuit block output signal 1043 then represents the value of SLD.
- the third circuit block 1007 is nearly identical in configuration to the second circuit block. That is, it also relies on two subtraction circuits 1045 and 1047 and a multiplexer 1049 .
- the first subtraction circuit 1045 provides a carry output 1061 to cause multiplexer 1049 to select the output 1063 from the second subtraction circuit 1047 instead of output 1059 if SLD>SP, generating the (M+1)·S modulo complement of SP−SPD.
- N can be selected to be 1; N can also be selected to be greater than one or it can be periodically varied, as described earlier.
- the fourth circuit block 1009 simply converts the stride physical address SPA to the desired physical address PA.
- the addition circuit 1067 receives inputs 1073 and 1075, respectively representing the highest m bits and lowest r bits of SPA (SPA consisting of m+r bits); this operation performs the inverse of the operation of the first circuit block 1003. In other words, it obtains an address from the stride address per an effective reverse lookup process, as described earlier in connection with FIGS. 4B and 4C.
- the second subtraction circuit 1069 yields an output 1087 that causes multiplexer 1071 to output M (input 1089 ) instead of the output 1081 of the addition circuit.
- FIG. 11 provides a method block diagram 1101 corresponding to the circuit just described. That is to say, the methodology is broken up into a number of process steps, respectively indicated using numerals 1103, 1105, 1107, 1109 and 1111.
- the desired logical address is retrieved and the values SL and SP are calculated.
- the stride logic address SLA is calculated from these values, per numeral 1105 .
- distance from the last rotated logic block is calculated in stride address space, that is, SLD is calculated as SL−SLA, with SPD being equal to SLD.
- the stride physical address SPA is calculated from these values based on the rule that distance in stride space between any two logic addresses corresponds to distance in stride space between the corresponding physical addresses.
- the physical address PA is calculated from the stride physical address SPA and substituted into a memory access command.
- FIG. 12 shows a system 1201 having a memory controller integrated circuit 1203 and a RRAM integrated circuit 1205 ; the memory controller integrated circuit communicates with a master or system interface using a first bus 1207 , and it relays commands as appropriate to the RRAM integrated circuit via a memory bus 1209 .
- This embodiment is easily extended to other forms of memory, volatile or nonvolatile.
- newer memory architectures are being developed that have properties where cell state is manifested by a change in physical materials; these newer architectures are known by a variety of names, including without limitation magnetic random access memory (MRAM), flash, SONOS, phase change random access memory (PCRAM), conductive bridging RAM (CBRAM), ReRAM (also generally designating resistive RAM), metalized RAM, nanowire RAM, and other names and designs.
- FIG. 13 illustrates a method 1301 where wear leveling is integrated with refresh operations. That is, each time a refresh operation is performed in this embodiment, wear leveling/data shuffling is performed; rather than a refresh operation writing back contents of a memory row to the same physical location, it instead writes the row to a row drawn from substitute memory space and recycles the “donor” row from active memory space into the substitute memory space.
- the method retrieves the values of SP, RR, SL and X.
- this method can be implemented directly in an RRAM integrated circuit device or other memory device, which pre-calculates values such as SP, SL and M·S+SL, as mentioned for the embodiment of FIG. 10, above.
- Having these values “on-hand” permits control logic to translate an address for the refresh operation, to identify the correct refresh operand.
- X in this example is the physical address of the reserved (spare) section that will act as the recipient of data in the next refresh operation.
- Per numeral 1305, the method then calculates new values for L, P, X and RR based on the rotation or wear leveling that is taking place.
- the method then branches into two paths, i.e., as indicated by numeral 1307 , the system performs wear leveling/refresh by writing data from logical address associated with RR into physical address X; in parallel with this path, the system also updates (per block 1309 ) the pre-calculated values SP, SL, M ⁇ S+SL for any ensuing address translation (and for any ensuing refresh operation, e.g., as part of a burst refresh). It also at this time updates the logical address held by the rotation register by increasing RR by the stride value, modulo M (step 1311 ). Finally, the method terminates at block 1313 ; the system is then ready for a return to normal data operations or performance of another refresh operation.
- Refresh is an example of a memory operation that can be combined with wear leveling using the teachings of this disclosure; there are also other types of memory operations that can be combined, including without limitation atomic operations.
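For illustration, the FIG. 13 sequencing can be sketched as follows, assuming a dict-backed memory, a translate() callable such as the one sketched earlier, and register names mirroring the values retrieved per numeral 1303; this is a sketch of the control flow, not the hardware implementation.

```python
def refresh_with_wear_leveling(regs, mem, translate, M, S):
    RR, X = regs["RR"], regs["X"]   # 1303: rotation register and spare
    donor = translate(RR)           # current physical home of row RR
    # 1307: refresh by writing row RR into spare section X rather than
    # back in place; the donor section is recycled as the new spare.
    mem[X] = mem[donor]
    # 1309: update pre-calculated translation state (SP, SL and the
    # like would be refreshed here for any ensuing translation).
    regs["L"], regs["P"], regs["X"] = RR, X, donor
    # 1311: advance the rotation register by the stride, modulo M.
    regs["RR"] = (RR + S) % M
    # 1313: ready for normal operation or another refresh.
```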
Description
PA = (P − ∥L − LA∥) mod (M + N),
where PA is the desired physical address, LA is the desired logical address, L and P are the logical address of the last-previously-moved data block and corresponding substitute section physical address, respectively, and ∥L−LA∥ is a distance in stride address space. Stride address space is a virtual space where addresses are mapped by iterations of wear leveling, as seen in
where the relationship between P and L for a particular stride value is exemplified by
PA = (P − ((L − LA) mod M)) mod (M + N).
TABLE 1
Physical address | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
---|---|---|---|---|---|---|---|---|---
Stored logical block | III | IV | V | VI | VII | VIII | I | II | (spare)
PA = (P − ((L − LA) mod M)) mod (M + N) = (7 − ((2 − 3) mod 8)) mod 9 = (7 − 7) mod 9 = 0.
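To make the arithmetic concrete, the rule can be checked with a few lines of Python using the Table 1 state (M = 8 active sections, N = 1 spare, L = 2, P = 7); the function name and the block numbering I = 1 through VIII = 8 are assumptions for illustration.

```python
def physical_address(LA, L, P, M, N):
    # PA = (P - ((L - LA) mod M)) mod (M + N); Python's % operator
    # already returns a non-negative remainder for these operands.
    return (P - ((L - LA) % M)) % (M + N)

# The worked example: logical address 3 (block III) resolves to PA 0.
assert physical_address(LA=3, L=2, P=7, M=8, N=1) == 0

# Sweeping logical addresses 1..8 reproduces the Table 1 row: blocks
# I..VIII land at physical addresses [6, 7, 0, 1, 2, 3, 4, 5], and no
# block maps to address 8, which is the spare section.
print([physical_address(la, 2, 7, 8, 1) for la in range(1, 9)])
```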
SPD = SL − SLA = 30 − 20 = 10
SPA = SP − SPD = 25 − 10 = 15
and, by reverse lookup to table 643,
TABLE 2
S | N = 1, M = 511 | N = 1, M = 1023 | N = 3, M = 511 | N = 3, M = 1023 | N = 5, M = 511 | N = 5, M = 1023
---|---|---|---|---|---|---
1 | 1 | 1 | 3 | 3 | 5 | 5
2 | 2 | 2 | 6 | 6 | 10 | 10
4 | 4 | 4 | 12 | 12 | 20 | 20
8 | 8 | 8 | 24 | 24 | 40 | 40
16 | 16 | 16 | 48 | 48 | 80 | 80
32 | 32 | 32 | 96 | 96 | 160 | 160
64 | 64 | 64 | 192 | 192 | 320 | 320
128 | 128 | 128 | 384 | 384 | 129 | 640
256 | 256 | 256 | 257 | 768 | 258 | 257
512 | 1 | 512 | 3 | 513 | 5 | 514
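As a quick cross-check (ours, not the patent's), the entries of Table 2 can be regenerated by computing the effective stride (N × S) mod M for each combination shown:

```python
# Regenerate Table 2: effective stride (N * S) mod M for S = 1..512.
for N in (1, 3, 5):
    for M in (511, 1023):
        row = [(N * S) % M for S in (1 << k for k in range(10))]
        print(f"N={N}, M={M}: {row}")
# For N=5, M=511 the later entries wrap around: 129 at S=128,
# 258 at S=256 and 5 at S=512, matching the table.
```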
Changing N and/or S changes the effective stride used by the wear leveling/data shuffling scheme, i.e., the effective stride is given by (N × S) mod M. Note also that variation in N, for example at intervals of kC wear leveling iterations, again increases security. For example, in one embodiment, illustrated by
- (a) scattering a provided logical address LA (e.g. sought by an incoming memory access command) into stride address space (SLA);
- (b) calculating the distance in stride address space, SLD, between the stride address (SL) of the last-moved data block and the stride address (SLA) of the provided logical address, by subtracting SLA from SL;
- (c) calculating the stride physical address SPA by effectively subtracting the distance in stride space from SP, i.e., SPA=SP−(SLD); and
- (d) determining the physical address PA of the data sought by the memory access command by reverse-calculating an address from stride address space, i.e., from SPA.
That is, with an input 1011 carrying the desired logical address LA, an output 1013 provides the desired physical address PA.
SLA={LA[r−1,0],m′b0}+{LA[m−1,r],r′b0}.
In connection with the notation above, square brackets designate bit fields in a binary implementation. The left-most term of the formula takes the r least significant bits of the m-bit logical address LA and appends m trailing zeros (i.e., left-shifts them by m bit positions) to obtain {LA[r−1,0],m′b0}, a value m+r bits in length. To this is added the second term, {LA[m−1,r],r′b0}, i.e., the remaining high-order bits of LA with the r least significant bits set to zero; the sum is the stride logical address SLA.
SLD = (SL − SLA) mod (M × S)
using two
SPA = (SP − SLD) mod (S × N + M × S).
In many embodiments, N can be selected to be 1; N can also be selected to be greater than one or it can be periodically varied, as described earlier.
To this end, it uses an
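Assembling the formulas above, the sketch below gives a minimal end-to-end translation, assuming M = 2^m active sections, S = 2^r, and the bit-field scatter for SLA given above together with its inverse for the effective reverse lookup; the function names are illustrative, not taken from the patent.

```python
def scatter(a, m, r):
    # {a[r-1,0], m'b0} + {a[m-1,r], r'b0}: the low r bits are shifted
    # left by m positions; the remaining high bits keep r trailing
    # zeros. The two terms occupy disjoint bit fields, so no carries.
    return ((a & ((1 << r) - 1)) << m) + ((a >> r) << r)

def gather(sa, m, r):
    # Effective reverse lookup: undo the bit placement of scatter().
    low = sa >> m                             # bits [m+r-1:m] -> a[r-1:0]
    high = (sa >> r) & ((1 << (m - r)) - 1)   # bits [m-1:r]  -> a[m-1:r]
    return low | (high << r)

def translate(LA, SL, SP, m, r, N):
    M, S = 1 << m, 1 << r
    SLA = scatter(LA, m, r)              # step (a): scatter into stride space
    SLD = (SL - SLA) % (M * S)           # step (b): distance from last move
    SPA = (SP - SLD) % (S * N + M * S)   # step (c): stride physical address
    return gather(SPA, m, r)             # step (d): physical address PA

# Round-trip sanity check of the scatter/gather pair (m = 5, r = 2):
assert all(gather(scatter(a, 5, 2), 5, 2) == a for a in range(32))
```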
Claims (23)
PA = f((SP − ((SL − SLA) mod (S × M))) mod (S × M + S × N)),
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/644,550 US9158672B1 (en) | 2011-10-17 | 2012-10-04 | Dynamic deterministic address translation for shuffled memory spaces |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161548089P | 2011-10-17 | 2011-10-17 | |
US13/644,550 US9158672B1 (en) | 2011-10-17 | 2012-10-04 | Dynamic deterministic address translation for shuffled memory spaces |
Publications (1)
Publication Number | Publication Date |
---|---|
US9158672B1 true US9158672B1 (en) | 2015-10-13 |
Family
ID=54252678
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/644,550 Active 2033-10-24 US9158672B1 (en) | 2011-10-17 | 2012-10-04 | Dynamic deterministic address translation for shuffled memory spaces |
Country Status (1)
Country | Link |
---|---|
US (1) | US9158672B1 (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5291582A (en) * | 1990-11-21 | 1994-03-01 | Apple Computer, Inc. | Apparatus for performing direct memory access with stride |
US20050055495A1 (en) | 2003-09-05 | 2005-03-10 | Nokia Corporation | Memory wear leveling |
EP1804169A1 (en) | 2005-12-27 | 2007-07-04 | Samsung Electronics Co., Ltd. | Storage apparatus |
US20090259801A1 (en) | 2008-04-15 | 2009-10-15 | Adtron, Inc. | Circular wear leveling |
US20100281202A1 (en) * | 2009-04-30 | 2010-11-04 | International Business Machines Corporation | Wear-leveling and bad block management of limited lifetime memory devices |
Non-Patent Citations (5)
Title |
---|
Ferreira et al. "Increasing PCM Main Memory Lifetime," Design, Automation & Test in Europe Conference & Exhibition (Date), 2010, Mar. 8-12, 2010, pp. 914-919. 6 pages. |
Ipek et al., "Dynamically Replicated Memory: Building Reliable Systems from Nanoscale Resistive Memories," ASPLOS '10, Mar. 13-17, 2010, Pittsburgh, Pennsylvania, pp. 3-14. 12 pages. |
Lee et al., "Phase-Change Technology and the Future of Main Memory," Micro, IEEE, vol. 30, No. 1, pp. 131-141, Jan.-Feb. 2010. 11 pages. |
Qureshi et al., "Enhancing Lifetime and Security of PCM-Based Main Memory with Start-Gap Wear Leveling", Micro '09, Dec. 12-16, 2009, New York, NY, pp. 14-23. 10 pages. |
Seong et al, "Security Refresh: Prevent Malicious Wear-out and Increase Durability for Phase-Change Memory with Dynamically Randomized Address Mapping", ISCA '10, Jun. 19-23, 2010, Saint-Malo, France. 12 pages. |
Cited By (69)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180052775A1 (en) * | 2007-10-24 | 2018-02-22 | Greenthread, Llc | Nonvolatile memory systems with embedded fast read and write memories |
US9772803B2 (en) * | 2012-12-13 | 2017-09-26 | Samsung Electronics Co., Ltd. | Semiconductor memory device and memory system |
US20140173234A1 (en) * | 2012-12-13 | 2014-06-19 | Samsung Electronics Co., Ltd. | Semiconductor memory device and memory system |
US20140189226A1 (en) * | 2013-01-03 | 2014-07-03 | Seong-young Seo | Memory device and memory system having the same |
US9449673B2 (en) * | 2013-01-03 | 2016-09-20 | Samsung Electronics Co., Ltd. | Memory device and memory system having the same |
US20160042782A1 (en) * | 2013-03-15 | 2016-02-11 | Ps4 Luxco S.A.R.L. | Semiconductor storage device and system provided with same |
US9412432B2 (en) * | 2013-03-15 | 2016-08-09 | Ps4 Luxco S.A.R.L. | Semiconductor storage device and system provided with same |
US20150149742A1 (en) * | 2013-11-22 | 2015-05-28 | Swarm64 As | Memory unit and method |
US9792221B2 (en) * | 2013-11-22 | 2017-10-17 | Swarm64 As | System and method for improving performance of read/write operations from a persistent memory device |
US10671292B2 (en) | 2014-01-06 | 2020-06-02 | International Business Machines Corporation | Data shuffling in a non-uniform memory access device |
US20150212742A1 (en) * | 2014-01-28 | 2015-07-30 | Nec Corporation | Memory control device, information processing apparatus, memory control method, and, storage medium storing memory control program |
US20150370669A1 (en) * | 2014-06-18 | 2015-12-24 | International Business Machines Corporation | Implementing enhanced wear leveling in 3d flash memories |
US20150370635A1 (en) * | 2014-06-18 | 2015-12-24 | International Business Machines Corporation | Implementing enhanced wear leveling in 3d flash memories |
US9471451B2 (en) * | 2014-06-18 | 2016-10-18 | International Business Machines Corporation | Implementing enhanced wear leveling in 3D flash memories |
US9489276B2 (en) * | 2014-06-18 | 2016-11-08 | International Business Machines Corporation | Implementing enhanced wear leveling in 3D flash memories |
US20160048459A1 (en) * | 2014-08-18 | 2016-02-18 | Samsung Electronics Co., Ltd. | Operation method of memory controller and nonvolatile memory system including the memory controller |
US9760503B2 (en) * | 2014-08-18 | 2017-09-12 | Samsung Electronics Co., Ltd. | Operation method of memory controller and nonvolatile memory system including the memory controller |
US10452560B2 (en) | 2015-07-14 | 2019-10-22 | Western Digital Technologies, Inc. | Wear leveling in non-volatile memories |
US10445232B2 (en) | 2015-07-14 | 2019-10-15 | Western Digital Technologies, Inc. | Determining control states for address mapping in non-volatile memories |
US10445251B2 (en) | 2015-07-14 | 2019-10-15 | Western Digital Technologies, Inc. | Wear leveling in non-volatile memories |
US10452533B2 (en) | 2015-07-14 | 2019-10-22 | Western Digital Technologies, Inc. | Access network for address mapping in non-volatile memories |
US9921969B2 (en) | 2015-07-14 | 2018-03-20 | Western Digital Technologies, Inc. | Generation of random address mapping in non-volatile memories using local and global interleaving |
CN108369556A (en) * | 2015-12-10 | 2018-08-03 | 阿姆有限公司 | Loss equalization in nonvolatile memory |
CN108369556B (en) * | 2015-12-10 | 2022-05-31 | 阿姆有限公司 | Wear Leveling in Non-Volatile Memory |
US10467157B2 (en) | 2015-12-16 | 2019-11-05 | Rambus Inc. | Deterministic operation of storage class memory |
US11314669B2 (en) | 2015-12-16 | 2022-04-26 | Rambus Inc. | Deterministic operation of storage class memory |
US11755509B2 (en) | 2015-12-16 | 2023-09-12 | Rambus Inc. | Deterministic operation of storage class memory |
US12147362B2 (en) | 2015-12-16 | 2024-11-19 | Rambus Inc. | Deterministic operation of storage class memory |
US10049717B2 (en) | 2016-03-03 | 2018-08-14 | Samsung Electronics Co., Ltd. | Wear leveling for storage or memory device |
EP3538983A4 (en) * | 2016-11-08 | 2020-06-24 | Micron Technology, Inc. | Memory operations on data |
WO2018089084A1 (en) | 2016-11-08 | 2018-05-17 | Micron Technology, Inc. | Memory operations on data |
US11209986B2 (en) | 2016-11-08 | 2021-12-28 | Micron Technology, Inc. | Memory operations on data |
CN109923514A (en) * | 2016-11-08 | 2019-06-21 | 美光科技公司 | Memory operation on data |
KR20190067921A (en) * | 2016-11-08 | 2019-06-17 | 마이크론 테크놀로지, 인크 | Memory behavior for data |
US11886710B2 (en) | 2016-11-08 | 2024-01-30 | Micron Technology, Inc. | Memory operations on data |
CN109923514B (en) * | 2016-11-08 | 2022-05-17 | 美光科技公司 | Memory operation on data |
US20180262567A1 (en) * | 2017-03-10 | 2018-09-13 | Toshiba Memory Corporation | Large scale implementation of a plurality of open channel solid state drives |
US10542089B2 (en) * | 2017-03-10 | 2020-01-21 | Toshiba Memory Corporation | Large scale implementation of a plurality of open channel solid state drives |
US10585597B2 (en) | 2017-08-04 | 2020-03-10 | Micron Technology, Inc. | Wear leveling |
US10416903B2 (en) * | 2017-08-04 | 2019-09-17 | Micron Technology, Inc | Wear leveling |
US11003361B2 (en) | 2017-08-04 | 2021-05-11 | Micron Technology, Inc. | Wear leveling |
US20190042109A1 (en) * | 2017-08-04 | 2019-02-07 | Micron Technology, Inc. | Wear leveling |
JP2019056981A (en) * | 2017-09-19 | 2019-04-11 | 東芝メモリ株式会社 | Memory system |
US12135645B2 (en) | 2017-10-12 | 2024-11-05 | Rambus Inc. | Nonvolatile physical memory with DRAM cache |
US11301378B2 (en) | 2017-10-12 | 2022-04-12 | Rambus Inc. | Nonvolatile physical memory with DRAM cache and mapping thereof |
US11714752B2 (en) | 2017-10-12 | 2023-08-01 | Rambus Inc. | Nonvolatile physical memory with DRAM cache |
US11341038B2 (en) * | 2017-12-05 | 2022-05-24 | Micron Technology, Inc. | Data movement operations in non-volatile memory |
US11636009B2 (en) | 2018-01-22 | 2023-04-25 | Micron Technology, Inc. | Enhanced error correcting code capability using variable logical to physical associations of a data block |
US10831596B2 (en) * | 2018-01-22 | 2020-11-10 | Micron Technology, Inc. | Enhanced error correcting code capability using variable logical to physical associations of a data block |
US20190227869A1 (en) * | 2018-01-22 | 2019-07-25 | Micron Technology, Inc. | Enhanced error correcting code capability using variable logical to physical associations of a data block |
US11372777B2 (en) * | 2018-02-28 | 2022-06-28 | Imagination Technologies Limited | Memory interface between physical and virtual address spaces |
US12100336B2 (en) * | 2018-06-29 | 2024-09-24 | Tahoe Research, Ltd. | Dynamic sleep for a display panel |
US11023391B2 (en) * | 2018-08-10 | 2021-06-01 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Apparatus for data processing, artificial intelligence chip and electronic device |
US11983403B2 (en) * | 2018-11-01 | 2024-05-14 | Micron Technology, Inc. | Data relocation in memory |
JP2022506259A (en) * | 2018-11-01 | 2022-01-17 | マイクロン テクノロジー,インク. | Data relocation in memory |
US20210019052A1 (en) * | 2018-11-01 | 2021-01-21 | Micron Technology, Inc. | Data relocation in memory |
TWI722613B (en) * | 2018-11-15 | 2021-03-21 | 美商美光科技公司 | Address obfuscation for memory |
US11853230B2 (en) | 2018-11-15 | 2023-12-26 | Micron Technology, Inc. | Address obfuscation for memory |
US11042490B2 (en) | 2018-11-15 | 2021-06-22 | Micron Technology, Inc. | Address obfuscation for memory |
US11688477B2 (en) | 2019-08-28 | 2023-06-27 | Micron Technology, Inc. | Intra-code word wear leveling techniques |
US11158393B2 (en) | 2019-08-28 | 2021-10-26 | Micron Technology, Inc. | Intra-code word wear leveling techniques |
WO2021041114A1 (en) * | 2019-08-28 | 2021-03-04 | Micron Technology, Inc. | Intra-code word wear leveling techniques |
US11526279B2 (en) * | 2020-05-12 | 2022-12-13 | Intel Corporation | Technologies for performing column architecture-aware scrambling |
CN113918478A (en) * | 2020-07-10 | 2022-01-11 | 美光科技公司 | Memory wear management |
US12094581B2 (en) | 2020-08-13 | 2024-09-17 | Micron Technology, Inc. | Systems for generating personalized and/or local weather forecasts |
US20220383933A1 (en) * | 2020-09-04 | 2022-12-01 | Micron Technology, Inc. | Reserved rows for row-copy operations for semiconductor memory devices and associated methods and systems |
US20220397418A1 (en) * | 2021-06-14 | 2022-12-15 | Harman Becker Automotive Systems Gmbh | System and method for version-adaptive navigation services |
EP4297033A1 (en) * | 2022-06-23 | 2023-12-27 | Kioxia Corporation | Memory device and memory system |
US12217781B2 (en) | 2022-06-23 | 2025-02-04 | Kioxia Corporation | Memory device and memory system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9158672B1 (en) | Dynamic deterministic address translation for shuffled memory spaces | |
KR101638764B1 (en) | Redundant data storage for uniform read latency | |
Seyedzadeh et al. | Mitigating wordline crosstalk using adaptive trees of counters | |
CN103946810B (en) | The method and computer system of subregion in configuring non-volatile random access storage device | |
Lee et al. | FAST: An efficient flash translation layer for flash memory | |
CN103946814B (en) | The autonomous initialization of the nonvolatile RAM in computer system | |
CN103946811B (en) | Apparatus and method for realizing the multi-level store hierarchy with different operation modes | |
CN104025060B (en) | Support the storage channel of nearly memory and remote memory access | |
US6912616B2 (en) | Mapping addresses to memory banks based on at least one mathematical relationship | |
US10714186B2 (en) | Method and apparatus for dynamically determining start program voltages for a memory device | |
JP3620473B2 (en) | Method and apparatus for controlling replacement of shared cache memory | |
US9058870B2 (en) | Hash functions used to track variance parameters of resistance-based memory elements | |
US20130297987A1 (en) | Method and Apparatus for Reading NAND Flash Memory | |
US10033411B2 (en) | Adjustable error protection for stored data | |
JP2016507847A (en) | Inter-set wear leveling for caches with limited write endurance | |
CN103703440A (en) | Prefetching data tracks and parity data to use for destaging updated tracks | |
US8560767B2 (en) | Optimizing EDRAM refresh rates in a high performance cache architecture | |
CN104115230B (en) | Computing device, method and system based on High Efficiency PC MS flush mechanisms | |
WO2021084365A1 (en) | Updating corrective read voltage offsets in non-volatile random access memory | |
Mittal et al. | EqualWrites: Reducing intra-set write variations for enhancing lifetime of non-volatile caches | |
JP2017538206A (en) | Memory wear leveling | |
US10042565B2 (en) | All-flash-array primary storage and caching appliances implementing triple-level cell (TLC)-NAND semiconductor microchips | |
US20220214826A1 (en) | Storage device including nonvolatile memory device and method of operating the same | |
CN110047537A (en) | A kind of semiconductor storage and computer system | |
Jiang et al. | Hardware-assisted cooperative integration of wear-leveling and salvaging for phase change memory |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: RAMBUS INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHENG, HONGZHONG;HAUKNESS, BRENT STEVEN;REEL/FRAME:029081/0387 Effective date: 20111019 |
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |