
CN117472791A - Data access method and data access system - Google Patents

Data access method and data access system

Info

Publication number
CN117472791A
Authority
CN
China
Prior art keywords
memory
value
column
pages
sequence value
Prior art date
Legal status
Pending
Application number
CN202210849234.8A
Other languages
Chinese (zh)
Inventor
陆志豪
Current Assignee
Realtek Semiconductor Corp
Original Assignee
Realtek Semiconductor Corp
Priority date
Filing date
Publication date
Application filed by Realtek Semiconductor Corp filed Critical Realtek Semiconductor Corp
Priority to CN202210849234.8A priority Critical patent/CN117472791A/en
Publication of CN117472791A publication Critical patent/CN117472791A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023 Free address space management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0873 Mapping of cache memory to specific storage devices or parts thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The data access method includes: providing a first memory, the first memory comprising a plurality of memory pages; obtaining a use sequence value of each of the memory pages; acquiring, from the use sequence values corresponding to the memory pages, a first use sequence value with the highest priority in the first memory; updating the first memory after a first memory page corresponding to the first use sequence value is used; after the first memory is updated, acquiring a second use sequence value with the highest priority in the updated first memory; and using a second memory page corresponding to the second use sequence value.

Description

Data access method and data access system
Technical Field
The present invention relates to a data access method and a data access system, and more particularly, to a data access method and a data access system that achieve high performance and low memory usage by means of memory use sequence values.
Background
With the evolution of computer technology, increasingly dense memory devices have been developed, and memory is the most widely used storage medium among them. In general, memories can be divided by their storage characteristics into volatile (Volatile) memories and non-volatile (Non-Volatile) memories: data stored in a volatile memory is lost once the power supply is interrupted, whereas data stored in a non-volatile memory is retained even when power is cut off and can be read again simply by restoring power.
Currently, when data is accessed in memory, a linked list (Linked List) structure is often used to store the pointers belonging to a data packet. A linked list is a common data structure: it records data in nodes (Nodes) and uses a pointer in each node to point to the next node, so that multiple nodes can be chained together. However, searching a linked list has a time complexity of O(N), and the linked list also requires considerable memory space to store the pointers. Therefore, developing a data access method with high performance and low memory usage is an important issue.
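For illustration only, the following C sketch shows the kind of linked-list bookkeeping described above; the node layout and names are assumptions introduced here, not taken from the patent. Each node carries a next pointer as overhead, and locating an element requires walking the list, which is O(N).

```c
#include <stddef.h>

/* One node per stored page reference; the "next" field is the
 * per-node pointer overhead mentioned above. */
struct page_node {
    unsigned page_index;
    struct page_node *next;
};

/* Searching the list is O(N): a lookup may walk every node. */
static struct page_node *find_page(struct page_node *head, unsigned page_index)
{
    for (struct page_node *n = head; n != NULL; n = n->next) {
        if (n->page_index == page_index)
            return n;
    }
    return NULL;
}
```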
Disclosure of Invention
One embodiment of the present invention provides a data access method. The data access method includes: providing a first memory, the first memory comprising a plurality of memory pages; obtaining a use sequence value of each of the memory pages; acquiring, from the use sequence values corresponding to the memory pages, a first use sequence value with the highest priority in the first memory; updating the first memory after a first memory page corresponding to the first use sequence value is used; after the first memory is updated, acquiring a second use sequence value with the highest priority in the updated first memory; and using a second memory page corresponding to the second use sequence value.
Another embodiment of the present invention provides a data access system. The data access system includes a first memory, a receiving end, a transmitting end, and a processor. The first memory includes a plurality of memory pages for storing data. The receiving end is used to receive input data and write the input data into the first memory. The transmitting end is used to read transmission data from the first memory. The processor is coupled to the first memory, a second memory, the receiving end, and the transmitting end, and is used to control them. The processor obtains a use sequence value of each of the memory pages. From the use sequence values corresponding to the memory pages, the processor obtains a first use sequence value with the highest priority in the first memory, so that the receiving end and the transmitting end can access data through a first memory page corresponding to the first use sequence value. The processor updates the first memory after the first memory page corresponding to the first use sequence value is used. After the first memory is updated, the processor obtains a second use sequence value with the highest priority in the updated first memory, so that the receiving end and the transmitting end can access data through a second memory page corresponding to the second use sequence value.
Drawings
FIG. 1 is a block diagram of a data access system according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating the first use sequence value with the highest priority in the data access system of FIG. 1.
FIG. 3 is a diagram illustrating the second use sequence value with the highest priority after the first memory is updated in the data access system of FIG. 1.
FIG. 4 is a flow chart illustrating a method for performing data access in the data access system of FIG. 1.
Detailed Description
FIG. 1 is a block diagram of a data access system 100 according to an embodiment of the present invention. The data access system 100 includes a first memory 10, a receiving end RX, a transmitting end TX, and a processor 11. The first memory 10 includes a plurality of memory pages (Pages) for storing data. The first memory 10 may be a static random access memory (Static Random Access Memory, SRAM), but is not limited thereto. The memory pages of the first memory 10 may be arranged in a two-dimensional array. For example, in FIG. 1, the first memory 10 may include memory pages R1 of a first column, memory pages R2 of a second column, memory pages R3 of a third column, and memory pages R4 of a fourth column. The memory pages of each column may include at least one available memory page and/or at least one unavailable (or already used) memory page. In the first memory 10, each available memory page corresponds to a use sequence value, while memory pages that are temporarily unavailable or already used do not. For example, the memory pages R1 of the first column include available memory pages with use sequence values 5'd10, 5'd9, 5'd11, and 5'd6. The memory pages R2 of the second column include available memory pages with use sequence values 5'd7, 5'd4, 5'd15, and 5'd1. The memory pages R3 of the third column include available memory pages with use sequence values 5'd3, 5'd0, and 5'd8. The memory pages R4 of the fourth column include available memory pages with use sequence values 5'd2, 5'd5, 5'd12, 5'd14, and 5'd13. As shown in FIG. 1, the use sequence values 5'd0 to 5'd15 represent the order in which the receiving end RX or the transmitting end TX accesses the memory pages. Moreover, for the first memory 10, the use sequence values 5'd0 to 5'd15 may correspond to a plurality of consecutive memory addresses or to a plurality of non-consecutive memory addresses. In the data access system 100, the receiving end RX is configured to receive input data and write the input data into the first memory 10. The transmitting end TX is used to read transmission data from the first memory 10. The processor 11 is coupled to the first memory 10, the receiving end RX, and the transmitting end TX, and is used to control them. The processor 11 may be a memory frame page order link controller (Frame Page Order Link Controller). The data access system 100 may further include a second memory 12. The second memory 12 is coupled to the first memory 10 and the processor 11. The second memory 12 may be a separate register for caching the use sequence value with the highest priority in each column of memory pages. In the data access system 100, when the receiving end RX or the transmitting end TX uses the memory pages in sequence, the contents stored in the first memory 10 and the second memory 12 are updated synchronously. In the data access system 100, the processor 11 may obtain the use sequence value of each of the memory pages in the first memory 10. The processor 11 may then obtain the first use sequence value with the highest priority in the first memory 10, so that the receiving end RX and the transmitting end TX can access data through the first memory page corresponding to the first use sequence value.
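As a rough illustration of the layout just described, the following C sketch mirrors the FIG. 1 example with four columns R1 to R4 and 5-bit use sequence values; the array sizes, the PAGE_UNUSABLE sentinel, and all identifiers are assumptions introduced here, not definitions from the patent.

```c
#include <stdint.h>

#define NUM_COLUMNS   4      /* columns R1..R4 in the FIG. 1 example       */
#define PAGES_PER_COL 5      /* enough rows for the largest example column */
#define PAGE_UNUSABLE 0xFF   /* stands in for the "xxx" marking in the text */

/* First memory 10: each entry holds the use sequence value of an available
 * page (5'd0..5'd15 in the example) or PAGE_UNUSABLE for pages that are
 * used or temporarily unavailable. */
static uint8_t first_memory[NUM_COLUMNS][PAGES_PER_COL];

/* Second memory 12: one cached value per column, holding the smallest
 * (highest-priority) use sequence value still available in that column. */
static uint8_t second_memory[NUM_COLUMNS];
```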
After the first memory page corresponding to the first use sequence value is used, the processor 11 may update the first memory 10 and the second memory 12 at the same time. Then, after the first memory 10 and the second memory 12 are updated, the processor 11 may obtain a second use sequence value with the highest priority in the updated first memory, so that the receiving end RX and the transmitting end TX can access data through the second memory page corresponding to the second use sequence value. The way the data access system 100 searches for the appropriate memory page is described in detail below.
FIG. 2 is a schematic diagram of the first use sequence value 5'd0 with the highest priority in the data access system 100. As mentioned above, the first memory 10 may include the memory pages R1 of the first column, the memory pages R2 of the second column, the memory pages R3 of the third column, and the memory pages R4 of the fourth column. The processor 11 may obtain the minimum column use sequence value in each column of memory pages (e.g., R1 to R4) in the first memory 10. For example, the memory pages R1 of the first column include available memory pages with use sequence values 5'd10, 5'd9, 5'd11, and 5'd6. The processor 11 selects the value with the highest priority from the set {5'd10, 5'd9, 5'd11, 5'd6}; that is, in the memory pages R1 of the first column, the minimum column use sequence value 5'd6 is selected. The memory pages R2 of the second column include available memory pages with use sequence values 5'd7, 5'd4, 5'd15, and 5'd1. The processor 11 selects the value with the highest priority from the set {5'd7, 5'd4, 5'd15, 5'd1}; that is, in the memory pages R2 of the second column, the minimum column use sequence value 5'd1 is selected. The memory pages R3 of the third column include available memory pages with use sequence values 5'd3, 5'd0, and 5'd8. The processor 11 selects the value with the highest priority from the set {5'd3, 5'd0, 5'd8}; that is, in the memory pages R3 of the third column, the minimum column use sequence value 5'd0 is selected. The memory pages R4 of the fourth column include available memory pages with use sequence values 5'd2, 5'd5, 5'd12, 5'd14, and 5'd13. The processor 11 selects the value with the highest priority from the set {5'd2, 5'd5, 5'd12, 5'd14, 5'd13}; that is, in the memory pages R4 of the fourth column, the minimum column use sequence value 5'd2 is selected. Moreover, the processor 11 may cache the minimum column use sequence value of each column of memory pages in the second memory 12. For example, for the memory pages R1 of the first column, the minimum column use sequence value 5'd6 may be cached in the first entry of the second memory 12. For the memory pages R2 of the second column, the minimum column use sequence value 5'd1 may be cached in the second entry of the second memory 12. For the memory pages R3 of the third column, the minimum column use sequence value 5'd0 may be cached in the third entry of the second memory 12. For the memory pages R4 of the fourth column, the minimum column use sequence value 5'd2 may be cached in the fourth entry of the second memory 12. The processor 11 may then take the smallest use sequence value in the second memory 12 as the first use sequence value. For example, after the second memory 12 caches the set of use sequence values {5'd6, 5'd1, 5'd0, 5'd2}, the processor 11 may select the smallest use sequence value 5'd0 from this set as the first use sequence value. Therefore, the first memory page MP1 corresponding to the first use sequence value 5'd0 is the memory page of the first memory 10 used first by the transmitting end TX or the receiving end RX.
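Continuing the sketch above (same assumed data layout and names), the FIG. 2 selection step could look as follows: each column's minimum is cached in the second memory, and the smallest cached value becomes the first use sequence value.

```c
/* Cache the minimum use sequence value of every column in the second
 * memory; for FIG. 2 this yields {5'd6, 5'd1, 5'd0, 5'd2}. */
static void refresh_column_minima(void)
{
    for (int c = 0; c < NUM_COLUMNS; c++) {
        uint8_t min_val = PAGE_UNUSABLE;
        for (int r = 0; r < PAGES_PER_COL; r++) {
            if (first_memory[c][r] < min_val)
                min_val = first_memory[c][r];
        }
        second_memory[c] = min_val;
    }
}

/* Return the column whose cached value is globally smallest; the page in
 * that column carrying this value is the highest-priority page (MP1, with
 * use sequence value 5'd0 in the FIG. 2 example). */
static int highest_priority_column(void)
{
    int best_col = 0;
    for (int c = 1; c < NUM_COLUMNS; c++) {
        if (second_memory[c] < second_memory[best_col])
            best_col = c;
    }
    return best_col;
}
```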
FIG. 3 is a schematic diagram of the data access system 100 obtaining the second use sequence value 5'd1 with the highest priority after the first memory 10 is updated. Following the previous steps, the first memory 10 may be updated after the processor 11 selects the smallest use sequence value 5'd0 from the set {5'd6, 5'd1, 5'd0, 5'd2}. For example, in the first memory 10, after the first memory page MP1 corresponding to the use sequence value 5'd0 has been used, the first memory page MP1 may be set to be unusable or already used, so its entry is marked "xxx", indicating that the page is no longer considered a candidate. In other words, after the first memory 10 is updated, the processor 11 may again obtain the minimum column use sequence value in each column of memory pages (e.g., R1 to R4) in the updated first memory 10. In FIG. 3, the first memory 10 changes only the use sequence value 5'd0 (now marked "xxx") in the memory pages R3 of the third column. Thus, the set of use sequence values {5'd10, 5'd9, 5'd11, 5'd6} of the memory pages R1 of the first column, the set of use sequence values {5'd7, 5'd4, 5'd15, 5'd1} of the memory pages R2 of the second column, and the set of use sequence values {5'd2, 5'd5, 5'd12, 5'd14, 5'd13} of the memory pages R4 of the fourth column remain unchanged. In other words, the use sequence value 5'd6 stored for the memory pages R1 of the first column in the first entry of the second memory 12, the use sequence value 5'd1 stored for the memory pages R2 of the second column in the second entry of the second memory 12, and the use sequence value 5'd2 stored for the memory pages R4 of the fourth column in the fourth entry of the second memory 12 are the same as before the update of the first memory 10. However, the set of use sequence values of the memory pages R3 of the third column has been updated to {5'd3, 5'd8}. Thus, the minimum column use sequence value selected by the processor 11 from the set {5'd3, 5'd8} of the memory pages R3 of the third column is 5'd3, and this minimum column use sequence value 5'd3 may be cached in the third entry of the second memory 12. In other words, the set of use sequence values {5'd6, 5'd1, 5'd0, 5'd2} cached in the second memory 12 is updated to {5'd6, 5'd1, 5'd3, 5'd2}. Similarly, the processor 11 may select the smallest use sequence value 5'd1 from the set {5'd6, 5'd1, 5'd3, 5'd2} as the second use sequence value. Therefore, the second memory page MP2 corresponding to the second use sequence value 5'd1 is the next memory page of the first memory 10 to be used by the transmitting end TX or the receiving end RX. Likewise, after the second memory page MP2 corresponding to the second use sequence value 5'd1 is used, the processor 11 may set the second memory page MP2 to a used or unusable state, and its entry is also marked "xxx", indicating that the page is no longer considered a candidate. By analogy, when the first memory 10 and the second memory 12 are updated again, the processor 11 can select the use sequence value of the next memory page to be used, i.e., 5'd2.
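Under the same assumptions, the FIG. 3 update can be sketched as below: once a page is consumed it is marked unusable, and only the cached minimum of its own column has to be recomputed, which keeps the first memory and the second memory synchronized.

```c
/* Mark a consumed page unusable ("xxx") and refresh only its column's
 * cached minimum; in FIG. 3 column R3 changes from 5'd0 to 5'd3 while the
 * other cached values stay untouched. */
static void consume_page(int col, int row)
{
    first_memory[col][row] = PAGE_UNUSABLE;

    uint8_t min_val = PAGE_UNUSABLE;
    for (int r = 0; r < PAGES_PER_COL; r++) {
        if (first_memory[col][r] < min_val)
            min_val = first_memory[col][r];
    }
    second_memory[col] = min_val;
}
```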
Therefore, since the first memory 10 can cooperate with the second memory 12 to quickly find the memory page to be accessed according to the use sequence values corresponding to the memory pages, the data access system 100 improves search speed and time complexity in addition to saving the space that would otherwise be required to store pointers.
Moreover, as mentioned above, after the first memory 10 is updated, if all memory pages of a certain column in the updated first memory 10 are unavailable, the processor 11 may mark that column of memory pages to reduce the search dimension for the second use sequence value. For example, after the first memory 10 and the second memory 12 have been updated multiple times, if all memory pages of a column in the first memory 10 are marked "xxx", the column no longer contains any page candidates. Thus, the processor 11 may flag such columns so that they are excluded from the search, reducing the search dimension and complexity. Also, as mentioned previously, for the first memory 10, the use sequence values may correspond to a plurality of consecutive memory addresses or to a plurality of non-consecutive memory addresses. A mapping table (Mapping Table) may also be introduced in the data access system 100 to translate the use sequence values of the memory pages. For example, a mapping table may be introduced to map the use sequence values of a plurality of non-consecutive memory addresses to the use sequence values of a plurality of consecutive memory addresses. Any reasonable variation of hardware or technique is within the scope of the present disclosure.
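The patent only states that a mapping table "may be introduced"; one possible form, under the assumptions of the earlier sketches, is a small lookup table that remaps the use sequence value of a non-consecutive address onto a dense, consecutive value before the selection logic runs.

```c
/* Hypothetical mapping table: indexed by the sparse (non-consecutive) use
 * sequence value, it returns the dense (consecutive) value used by the
 * selection logic above. 32 entries cover the 5-bit value space. */
#define MAP_ENTRIES 32

static uint8_t order_map[MAP_ENTRIES];

static uint8_t translate_use_sequence_value(uint8_t sparse_value)
{
    return order_map[sparse_value];
}
```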
FIG. 4 is a flow chart of the data access method performed by the data access system 100. The data access method includes steps S401 to S406, and any reasonable variation of these steps is within the scope of the present disclosure. Steps S401 to S406 are described as follows:
step S401: providing a first memory 10, the first memory 10 comprising a plurality of memory pages;
step S402: obtaining a use sequence value of each of the memory pages;
step S403: acquiring, from the use sequence values corresponding to the memory pages, the first use sequence value 5'd0 with the highest priority in the first memory 10;
step S404: after the first memory page MP1 corresponding to the first use sequence value 5'd0 is used, updating the first memory 10;
step S405: after the first memory 10 is updated, acquiring the second use sequence value 5'd1 with the highest priority in the updated first memory 10;
step S406: using the second memory page MP2 corresponding to the second use sequence value 5'd1.
The details of steps S401 to S406 have been described above and are not repeated here. Through steps S401 to S406, the data access system 100 can quickly find the memory page to be accessed according to the use sequence values corresponding to the memory pages. Thus, the data access system 100 improves search speed and time complexity in addition to reducing the space required to store pointers.
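To tie the earlier sketches to steps S401 to S406, a short illustrative driver is shown below; access_page() is a placeholder for the RX/TX data transfer and, like the rest of the sketch, is an assumption rather than the patent's implementation. It assumes first_memory has already been filled with the FIG. 1 use sequence values.

```c
#include <stdio.h>

/* Placeholder for the actual RX/TX access of the selected page. */
static void access_page(int col, int row)
{
    printf("accessing page R%d[%d] (use sequence value %u)\n",
           col + 1, row, (unsigned)first_memory[col][row]);
}

/* Steps S401~S406 in sequence: obtain the per-column minima (S402, S403),
 * use the highest-priority page (S404/S406), then update both memories and
 * repeat for the next page (S405). */
static void run_access_sequence(int pages_to_use)
{
    refresh_column_minima();
    for (int i = 0; i < pages_to_use; i++) {
        int col = highest_priority_column();
        int row = -1;
        for (int r = 0; r < PAGES_PER_COL; r++) {   /* locate the minimum in-column */
            if (first_memory[col][r] == second_memory[col]) {
                row = r;
                break;
            }
        }
        if (row < 0 || first_memory[col][row] == PAGE_UNUSABLE)
            break;                                  /* no available page left */
        access_page(col, row);
        consume_page(col, row);                     /* synchronous update of both memories */
    }
}
```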
In summary, the present application describes a data access system and a data access method. The static random access memory in the data access system includes a plurality of memory pages and their corresponding use sequence values. The data access system uses the use sequence values of the memory pages, together with a register space, to quickly find the memory page to be accessed, and it synchronously updates the contents stored in the static random access memory and in the register space. Unlike a conventional linked list, the data access system manages the data by means of use sequence values rather than pointers. Therefore, the data access system improves search speed and time complexity in addition to reducing the space required to store pointers.
The foregoing description is only of the preferred embodiments of the invention, and all changes and modifications that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Reference numerals
100: data access system
RX: receiving terminal
TX: transmitting end
10: first memory
11: processor and method for controlling the same
12: second memory
R1: memory pages of first column
R2: memory pages of second column
R3: memory pages of the third column
R4: memory pages of the fourth column
5d '0 to 5d'15: using sequential values
MP1: first memory page
MP2: second memory page
S401 to S406: Steps

Claims (10)

1. A data access method, comprising:
providing a first memory, the first memory comprising a plurality of memory pages;
obtaining a use sequence value of each memory page in the memory pages;
acquiring a first use sequence value with highest priority in the first memory from the use sequence values corresponding to the memory pages;
updating the first memory after using a first memory page corresponding to the first use sequence value;
after the first memory is updated, acquiring a second use sequence value with highest priority in the updated first memory; and
using a second memory page corresponding to the second use sequence value.
2. The method of claim 1, wherein acquiring the first use sequence value with the highest priority in the first memory from the use sequence values corresponding to the memory pages comprises:
acquiring a minimum column use sequence value in each column of memory pages of a plurality of columns of memory pages in the first memory;
caching the minimum column use sequence value of each column of memory pages in a second memory; and
obtaining the smallest use sequence value in the second memory as the first use sequence value.
3. The method of claim 2, further comprising:
updating a plurality of use sequence values cached in the second memory after the first memory is updated;
wherein after the first memory page corresponding to the first use sequence value is used, the first memory page is set to be unusable so as to update the first memory.
4. The method of claim 1, wherein, after the first memory is updated, acquiring the second use sequence value with the highest priority in the updated first memory comprises:
acquiring the minimum column use sequence value in each column of memory pages of the plurality of columns in the updated first memory;
caching the minimum column use sequence value of each column of memory pages in a second memory; and
obtaining the smallest use sequence value in the second memory as the second use sequence value.
5. The method of claim 4, wherein the second memory page corresponding to the second use sequence value is set to be unusable after the second memory page is used.
6. The method of claim 1, further comprising:
providing a mapping table to translate the use sequence values of the memory pages;
wherein the use sequence values are use sequence values of a plurality of consecutive memory addresses or use sequence values of a plurality of non-consecutive memory addresses.
7. The method of claim 1, wherein the memory pages of the first memory comprise at least one available memory page and at least one unavailable memory page, and the at least one unavailable memory page has no use sequence value.
8. The method of claim 1, wherein the first memory is a static random access memory and the second memory is a register.
9. The method of claim 1, wherein, after the first memory is updated, if a column of memory pages in the updated first memory is unavailable, the column of memory pages is marked to reduce the search dimension of the second use sequence value.
10. A data access system, comprising:
a first memory including a plurality of memory pages for storing data;
a receiving end for receiving input data and writing the input data into the first memory;
a transmitting end for reading transmission data from the first memory; and
a processor coupled to the first memory, the receiving end, and the transmitting end, and used for controlling the first memory, the receiving end, and the transmitting end;
wherein the processor acquires, from use sequence values corresponding to the memory pages, a first use sequence value with the highest priority in the first memory, so that the receiving end and the transmitting end access data through a first memory page corresponding to the first use sequence value; the processor updates the first memory after the first memory page corresponding to the first use sequence value is used; and after the first memory is updated, the processor acquires a second use sequence value with the highest priority in the updated first memory, so that the receiving end and the transmitting end access data through a second memory page corresponding to the second use sequence value.
CN202210849234.8A 2022-07-19 2022-07-19 Data access method and data access system Pending CN117472791A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210849234.8A CN117472791A (en) 2022-07-19 2022-07-19 Data access method and data access system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210849234.8A CN117472791A (en) 2022-07-19 2022-07-19 Data access method and data access system

Publications (1)

Publication Number Publication Date
CN117472791A true CN117472791A (en) 2024-01-30

Family

ID=89635256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210849234.8A Pending CN117472791A (en) 2022-07-19 2022-07-19 Data access method and data access system

Country Status (1)

Country Link
CN (1) CN117472791A (en)

Similar Documents

Publication Publication Date Title
TWI744457B (en) Method for accessing metadata in hybrid memory module and hybrid memory module
CN110555001B (en) Data processing method, device, terminal and medium
US20190220443A1 (en) Method, apparatus, and computer program product for indexing a file
US20210019257A1 (en) Persistent memory storage engine device based on log structure and control method thereof
KR20170112952A (en) Optimized hopscotch multiple hash tables for efficient memory in-line deduplication application
US20120117297A1 (en) Storage tiering with minimal use of dram memory for header overhead
KR20170112958A (en) Dedupe dram system algorithm architecture
US20180129605A1 (en) Information processing device and data structure
CN115774699A (en) Database shared dictionary compression method and device, electronic equipment and storage medium
US11829292B1 (en) Priority-based cache-line fitting in compressed memory systems of processor-based systems
CN102227717B (en) Method and apparatus for data storage and access
US20180217930A1 (en) Reducing or avoiding buffering of evicted cache data from an uncompressed cache memory in a compression memory system when stalled write operations occur
KR102321346B1 (en) Data journaling method for large solid state drive device
US11868244B2 (en) Priority-based cache-line fitting in compressed memory systems of processor-based systems
CN117472791A (en) Data access method and data access system
KR20160121819A (en) Apparatus for data management based on hybrid memory
TWI842009B (en) Data accessing method and data accessing system
KR102729105B1 (en) Priority-based cache line alignment in compressed memory systems on processor-based systems
CN113760788A (en) memory and method of operation
CN119781686B (en) L2P mapping table storage method and device
US8200920B2 (en) Systems and methods for storing and accessing data stored in a data array
CN115391349A (en) Data processing method and device
JPH0784886A (en) Cache memory control method and cache memory control device
US20100057685A1 (en) Information storage and retrieval system
CN119781686A (en) L2P mapping table storage method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination