
CN115794669A - Method, device and related equipment for expanding memory - Google Patents

Method, device and related equipment for expanding memory

Info

Publication number
CN115794669A
Authority
CN
China
Prior art keywords
storage medium
processor
storage
address
virtual memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111266429.1A
Other languages
Chinese (zh)
Inventor
姚建业
张瑛
赵金蔚
程龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to PCT/CN2022/091824 (published as WO2023035646A1)
Publication of CN115794669A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/06 Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application provides a method for expanding a memory, applied to a computing device that comprises a processor and a first storage device. When the memory is expanded, the processor acquires the physical addresses of a first storage medium and a second storage medium, maps the physical address of the first storage medium to a first virtual memory address that the processor can access directly, maps the physical address of the second storage medium to a second virtual memory address that the processor can access directly, and stores hot data in the storage space indicated by the first virtual memory address or cold data in the storage space indicated by the second virtual memory address. In this way, degradation of the processor's data access performance is effectively avoided, and because the physical addresses of multiple storage media in the storage device can all be mapped into the memory of the computing device, the memory expansion effect of the computing device is improved. In addition, the application also provides a corresponding apparatus and related devices.

Description

Method, device and related equipment for expanding memory
The present application claims priority to Chinese patent application No. 202111064925.9, entitled "A solid state disk", filed with the China National Intellectual Property Administration on September 11, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of storage technologies, and in particular, to a method and an apparatus for expanding a memory, and a related device.
Background
In new application scenarios such as big data and artificial intelligence, the capacity of the memory in a computing device affects application performance. Generally, the larger the memory capacity of the computing device, the more data can be held in memory, so applications on the computing device are more likely to find the data they access in memory, which improves the efficiency with which the applications acquire data.
Since directly stacking multiple layers of memory or adding more memory banks (physical memory) to a computing device makes memory expansion too costly, part of the storage space of a solid state disk (SSD) attached to the computing device is usually mapped to a virtual memory address of the computing device, so as to obtain, at lower cost, a memory space larger than the physical memory capacity of the computing device. With this approach, when the data accessed by the processor of the computing device is located in physical memory, the required data can be read directly according to its address in physical memory. When the data accessed by the computing device is located at a virtual memory address, a page fault interrupt is triggered: the computing device swaps out part of the currently unaccessed data in physical memory to another location (for example, to the solid state disk) to free up memory space, swaps the data the processor needs from the solid state disk into the freed physical memory, and then obtains the data by accessing physical memory again.
However, in practical application scenarios, the virtual memory address space mapped from the solid state disk is usually large, which causes the computing device to frequently execute page fault handling and data swap-in/swap-out, degrading the data access performance of the processor: for example, the latency of accessing data becomes high and resource consumption increases.
Disclosure of Invention
The embodiment of the application provides a method for expanding a memory, so that the reduction of the data access performance of a processor is avoided after the memory of a computing device is expanded. In addition, the embodiment of the application also provides a device for expanding the memory, a computing device, a computer readable medium and a computer program product.
In a first aspect, an embodiment of the present application provides a method for expanding a memory, which is used to expand a memory of a computing device, where the computing device includes a processor and a first storage device, and the first storage device includes a first storage medium and a second storage medium, where an access latency of the first storage medium is smaller than an access latency of the second storage medium, and in a general case, a data read/write performance of the first storage medium may be better than a data read/write performance of the second storage medium. When the memory is expanded, the processor acquires a physical address of the first storage medium and a physical address of the second storage medium, maps the physical address of the first storage medium to a first virtual memory address, and maps the physical address of the second storage medium to a second virtual memory address, wherein the first virtual memory address and the second virtual memory address can be directly accessed by the processor, and the processor stores hot data in a storage space indicated by the first virtual memory address or stores cold data in a storage space indicated by the second virtual memory address.
Therefore, on the one hand, after the memory of the computing device is expanded with the first storage medium and/or the second storage medium, the processor can access the first storage medium and/or the second storage medium directly, without page fault handling or data swap-in/swap-out, so degradation of the processor's data access performance is effectively avoided. On the other hand, the physical addresses of multiple storage media in the storage device can all be mapped into the memory of the computing device; compared with expanding the memory using the physical address of a single storage medium, this makes fuller use of the storage resources of the storage device and thereby improves the memory expansion effect of the computing device. Moreover, the first virtual memory address and the second virtual memory address extended in the computing device can be used to store data of different heat levels, so that when the processor accesses data, it has a relatively high probability of finding the data in physical memory or in the first storage medium, whose access latency is small, and the data access performance of the computing device stays at a high level.
In a possible implementation, the first storage device further includes a third storage medium, and the processor may further perform memory expansion for the computing device based on the third storage medium. In a specific implementation, the processor may obtain a physical address of the third storage medium and map it to a third virtual memory address, which the processor can access directly. In this way, a larger number of storage media in the first storage device can be used as extended memory of the computing device, further improving the memory expansion effect of the computing device.
Optionally, the access latency of the second storage medium is smaller than the access latency of the third storage medium. For example, the second storage medium is SCM, and the third storage medium is flash memory or the like.
In one possible implementation, the computing device further includes a second storage device, and the computing device may not only extend the memory using the storage resources on the first storage device, but may also extend the memory using the storage resources on the second storage device. In a specific implementation, the processor may obtain a physical address of at least one storage medium in the second storage device, and map the physical address of the at least one storage medium in the second storage device to a virtual memory address directly accessible to the processor. Therefore, the memory of the computing equipment can be expanded by utilizing the storage resources on the plurality of storage equipment, so that the memory expansion effect of the computing equipment can be further improved.
In a possible implementation, the computing device further includes a physical memory, and the heat of the data stored in the physical memory is higher than the heat of the data stored in the storage space indicated by the first virtual memory address. In this way, when the processor subsequently accesses data, it can access the data from the physical memory, whose access latency is smaller, so that the data access performance of the computing device reaches a higher level.
In one possible implementation, the processor obtains an access request for target data, and when the target data is not included in the physical memory, the processor searches the target data from the first storage medium according to the first virtual memory address. Therefore, when the processor reads the data, the processor can preferentially search the data from the hot data stored in the physical memory, and if the data is not searched, the processor searches whether the data to be read is included in the data stored in the storage space indicated by the first virtual memory address (namely, the storage space on the first storage medium), so that the data access performance of the computing device can reach a higher level.
In one possible embodiment, the first virtual memory address and the second virtual memory address have different attribute identifications, and the attribute identifications are used for indicating the storage characteristics of the storage medium. For example, when the first storage medium is a volatile storage medium and the second storage medium is a non-volatile storage medium, the corresponding attribute identifier may be used to indicate that the storage space corresponding to the first virtual memory address may be used to cache data, and the storage space corresponding to the second virtual memory address may be used to perform persistent storage on data, and the like. Therefore, the computing equipment can store the data through different virtual memory addresses according to the requirements of practical application.
In one possible implementation, the first storage medium may be a Dynamic Random Access Memory (DRAM) and the second storage medium may be a flash memory (flash).
In a second aspect, an embodiment of the present application further provides an apparatus for expanding a memory, which is used to execute the method described in any implementation manner of the first aspect.
In a third aspect, an embodiment of the present application further provides a computing device, where the computing device includes a memory and a processor, and the processor is configured to execute instructions stored in the memory to perform the method described in any implementation manner of the first aspect.
A fourth aspect of the present application provides a computer-readable medium having stored therein instructions, which when run on a computer, cause the computer to perform the method of the above-described aspects.
A fifth aspect of the present application provides a computer program product which, when run on a computer, causes the computer to perform the method of the above-described aspects.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings can be obtained by those skilled in the art according to the drawings.
Fig. 1 is a schematic structural diagram of a computing device 110 according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a storage device 105 according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for expanding a memory according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of another storage device 105 according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of another computing device 110 provided in the embodiments of the present application;
fig. 6 is a schematic flowchart of another method for expanding a memory according to an embodiment of the present disclosure;
FIG. 7 is a diagram illustrating two segments of virtual memory addresses mapped according to multiple physical addresses;
fig. 8 is a schematic structural diagram of an apparatus for expanding a memory according to an embodiment of the present disclosure.
Detailed Description
The embodiment of the application provides a method for expanding a memory, so that the data access performance of a processor is prevented from being reduced after the memory of a computing device is expanded.
Fig. 1 is a schematic structural diagram of a computing device according to an embodiment of the present application, where the computing device may adopt a fully converged architecture. The fully converged architecture shown in fig. 1 may include one or more computing devices 110 (in fig. 1, three computing devices 110 are taken as an example, and any number of computing devices 110 may be included in practical applications), and the computing devices 110 may communicate with each other. Computing device 110 is a device, such as a server, desktop computer, etc., that has both computing and storage capabilities. In software, each computing device 110 has an operating system thereon. A virtual machine 107 may be created on the computing device 110, the computing resources required by the virtual machine 107 originating from the processor 112 and memory 113 local to the computing device 110, and the storage resources required by the virtual machine 107 originating from the storage device 105 connected to the computing device 110. In addition, various applications may be running in the virtual machine 107, and a user may trigger a read/write data request through the applications in the virtual machine 107.
In hardware, as shown in FIG. 1, computing device 110 includes at least a processor 112, a memory 113, and a storage device 105. Further, the computing device 110 may also include a network card 114. The processor 112, the memory 113, and the network card 114 are connected via an internal bus in the computing device 110, and the storage device 105 and the computing device 110 are connected via an external bus (e.g., a serial bus). The processor 112 and the memory 113 are used to provide computing resources. Specifically, the processor 112 is a central processing unit (CPU) that processes data access requests from outside the computing device 110 or requests generated inside the computing device 110. For example, when the processor 112 receives a write data request sent by a user, the data in the write data request is temporarily stored in the memory 113; when the total amount of data in the memory 113 reaches a certain threshold, the processor 112 sends the data stored in the memory 113 to the storage device 105 for storage. In addition, the processor 112 is used for data calculation or processing, such as metadata management, data deduplication, data compression, data verification, storage space virtualization, and address translation. Only one processor 112 is shown in FIG. 1; in practical applications there are often multiple processors 112, each with one or more processor cores. This embodiment does not limit the number of processors or processor cores. Further, the processor 112 may also be an application specific integrated circuit (ASIC), or be configured as one or more integrated circuits, such as one or more digital signal processors (DSPs) or one or more field programmable gate arrays (FPGAs).
The memory 113 is internal memory that exchanges data directly with the processor; it can be read and written at any time, is fast, and serves as temporary data storage for the operating system or other running programs. The memory includes at least two types: for example, random access memory (RAM) or read-only memory (ROM). The random access memory may be, for example, dynamic random access memory (DRAM) or storage class memory (SCM). DRAM is a semiconductor memory and, like most random access memories, is a volatile memory device. SCM is a hybrid storage technology that combines the characteristics of traditional storage devices and memory: it provides faster read/write speeds than a hard disk, but is slower and cheaper than DRAM. However, DRAM and SCM are only examples in this embodiment; the memory may also include other random access memories, such as static random access memory (SRAM). The read-only memory may be, for example, programmable read-only memory (PROM) or erasable programmable read-only memory (EPROM). In addition, the memory 113 may also be a dual in-line memory module (DIMM), that is, a module composed of DRAM, or a solid state disk (SSD). In practical applications, the computing device 110 may be configured with multiple memories 113 of different types; this embodiment does not limit the number or type of the memories 113. In addition, the memory 113 may be configured to have a power-failure protection function, which means that the data stored in the memory 113 is not lost when the system is powered off and powered on again. A memory with a power-failure protection function is called a non-volatile memory.
The storage device 105 is used to provide storage resources, for example to store data. In embodiments of the present application, the storage device 105 may be used to expand the memory of the computing device 110. It may include a variety of storage media, such as magnetic disks, solid state disks, shingled magnetic recording hard disks, and magnetic random access memories. The network card 114 is used to support communication between the computing device 110 and other computing devices 110.
It is noted that the computing device shown in FIG. 1 is merely an illustrative example and is not intended to be limiting. For example, in other possible implementations, the virtual machine 107 may not be created on every computing device 110.
In the embodiment of the present application, the storage device 105 includes at least two storage media. Specifically, as shown in fig. 2, the storage device 105 includes a master 1051, a storage medium 1052, a buffer 1053, and a storage medium 1054. Fig. 2 takes two storage media as an example; in practical applications the storage device 105 may include more types of storage media. The master 1051 includes drive and control logic for controlling and implementing data access to the storage medium 1052 and the storage medium 1054, including writing new data into them or reading data stored therein; the master 1051 may also control and implement communication between the storage device 105 and the processor 112. The data read/write performance of the storage medium 1052 is better than that of the storage medium 1054; specifically, the access latency of the storage medium 1052 is smaller than that of the storage medium 1054. For example, the storage medium 1052 may be DRAM, and the storage medium 1054 may be flash memory or SCM. In practice, the storage space of the storage medium 1052 may be smaller than that of the storage medium 1054. The buffer 1053 may serve as a read/write buffer for the storage medium 1054: data to be accessed or written by the computing device 110 may first be placed in the buffer 1053, and the corresponding read/write operations are then performed on the data in the buffer 1053. In this way, the buffer 1053 provides the computing device 110 with byte-access capability (i.e., reading and writing data in units of bytes), so that even though data is stored in the storage medium 1054 in blocks, the block-access characteristic of the storage medium 1054 is shielded from the computing device 110 by the buffer 1053.
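For illustration only, the following C sketch shows how a buffer of this kind can hide a block-only medium behind a byte-granular write: the enclosing block is staged in the buffer, the requested bytes are patched, and the block is written back. The block size, array sizes, and all names are assumptions, not part of the actual firmware of the storage device 105.

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

#define BLOCK_SIZE 4096   /* assumed block granularity of storage medium 1054 */
#define NUM_BLOCKS 16     /* assumed size of the in-memory stand-in below     */

/* In-memory stand-in for the block-only medium, only to keep the sketch self-contained. */
static uint8_t medium_1054[NUM_BLOCKS][BLOCK_SIZE];
static uint8_t buffer_1053[BLOCK_SIZE];   /* plays the role of buffer 1053 */

/* Byte-granular write on top of a block-only medium: stage the enclosing block
 * in the buffer, patch only the requested bytes, and write the block back.
 * No bounds checking; addr is assumed to stay inside the stand-in medium. */
static void byte_write(uint64_t addr, const uint8_t *src, size_t len)
{
    while (len > 0) {
        uint64_t block  = addr / BLOCK_SIZE;
        size_t   offset = (size_t)(addr % BLOCK_SIZE);
        size_t   chunk  = (BLOCK_SIZE - offset < len) ? BLOCK_SIZE - offset : len;

        memcpy(buffer_1053, medium_1054[block], BLOCK_SIZE);  /* stage the block */
        memcpy(buffer_1053 + offset, src, chunk);             /* patch the bytes */
        memcpy(medium_1054[block], buffer_1053, BLOCK_SIZE);  /* flush the block */

        addr += chunk;
        src  += chunk;
        len  -= chunk;
    }
}
```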
In practical application scenarios, the memory in the computing device 110 is limited and may be unable to meet the performance requirements of the applications on the computing device 110. For this reason, in the embodiment of the present application, both the storage space of the storage medium 1052 and the storage space of the storage medium 1054 in the storage device 105 may be expanded into the virtual memory of the computing device 110, thereby implementing memory expansion for the computing device 110. In a specific implementation, the computing device 110 obtains the physical address of the storage medium 1052 and the physical address of the storage medium 1054, respectively. The computing device 110 then maps the physical address of the storage medium 1052 to a first virtual memory address of the computing device 110 and the physical address of the storage medium 1054 to a second virtual memory address of the computing device 110. In this way, the memory capacity of the computing device 110 is expanded, and the added capacity is the sum of the capacities corresponding to the first virtual memory address and the second virtual memory address.
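A minimal sketch of this mapping step is given below, assuming hypothetical sizes, base addresses, and a flat table; it is illustrative only and is not the actual mapping mechanism of the computing device 110.

```c
#include <stdint.h>
#include <stdio.h>

/* One entry of a hypothetical extended-memory mapping table: a contiguous
 * virtual memory range backed by a physical range on one storage medium. */
struct ext_map_entry {
    uint64_t virt_base;   /* first virtual memory address of the range    */
    uint64_t phys_base;   /* physical base address on the storage medium  */
    uint64_t length;      /* size of the range in bytes                   */
    int      medium_id;   /* 1052 or 1054 in the example above            */
};

int main(void)
{
    const uint64_t MIB = 1024ull * 1024ull;
    uint64_t next_virt = 0x100000000ull;   /* assumed start of the extended region */

    /* Assumed sizes: 64 MiB for the faster medium 1052, 1 GiB for medium 1054. */
    struct ext_map_entry map[2] = {
        { next_virt,            0, 64   * MIB, 1052 },  /* first virtual memory address  */
        { next_virt + 64 * MIB, 0, 1024 * MIB, 1054 },  /* second virtual memory address */
    };

    for (int i = 0; i < 2; i++)
        printf("medium %d mapped at [0x%llx, 0x%llx]\n", map[i].medium_id,
               (unsigned long long)map[i].virt_base,
               (unsigned long long)(map[i].virt_base + map[i].length - 1));

    printf("added memory capacity: %llu MiB\n",
           (unsigned long long)((map[0].length + map[1].length) / MIB));  /* 1088 MiB */
    return 0;
}
```

Under these assumptions, the added capacity is simply the sum of the two mapped lengths, matching the capacity calculation described above.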
Accordingly, when accessing data in memory, the processor 112 in the computing device 110 may first determine whether the required data is in the memory 113. If so, the processor 112 reads the data directly from the memory 113; if not, the processor 112 may retrieve the data from the storage medium 1052 based on the first virtual memory address, via the connection (e.g., a serial bus connection) between the computing device 110 and the storage device 105. Further, if the storage medium 1052 does not contain the data required by the processor 112, the processor 112 may continue the lookup in the storage medium 1054 according to the second virtual memory address.
Thus, on the one hand, after the memory of the computing device 110 is expanded with the storage medium 1052 and/or the storage medium 1054, the processor 112 can directly access the storage medium 1052 and/or the storage medium 1054 on the storage device over its connection with the storage device 105, without page fault handling or data swap-in/swap-out, so degradation of the data access performance of the processor 112 is effectively avoided. Moreover, because the processor 112 and the storage device 105 are connected via a bus, expanding the memory of the computing device 110 based on the storage device 105 does not occupy the limited memory slots in the computing device 110; the memory is expanded without increasing the number of memory slots. On the other hand, the physical addresses of multiple storage media in the storage device 105 can all be mapped into the memory of the computing device 110, which makes fuller use of the storage resources of the storage device 105 than expanding the memory with the physical address of a single storage medium, thereby improving the memory expansion effect of the computing device 110.
Further, each computing device 110 may simultaneously include a plurality of storage devices 105, for example, the plurality of storage devices 105 may form an array and be configured on the computing device 110, and the like, so that each computing device 110 may simultaneously utilize the storage resources of the plurality of storage devices 105 to expand the memory of the computing device 110, that is, the storage resources on the plurality of storage devices 105 are all used as the virtual memory resources of the computing device 110, thereby further increasing the memory capacity of the computing device 110 and improving the effect of expanding the memory.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, various non-limiting embodiments accompanying examples of the present application are described below with reference to the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 3, fig. 3 is a flowchart illustrating a method for expanding a memory according to an embodiment of the present application, where the method may be applied to the computing device shown in fig. 1, or may be applied to other applicable computing devices. For convenience of understanding, the embodiment is exemplified by taking the example of expanding the memory for the computing device 110 by using the storage resource of one storage device 105. The method for expanding a memory in this embodiment may specifically include:
s301: storage device 105 reports the physical address of storage medium 1052 and the physical address of storage medium 1054 to processor 112.
In one possible implementation, after the storage device 105 is attached to the computing device 110 and powered on, the master 1051 in the storage device 105 may actively send the physical address of the storage medium 1052 and the physical address of the storage medium 1054 to the computing device 110 via its connection with the processor 112. The physical addresses may be configured in the master 1051 in advance by a technician, or may be acquired automatically by the master 1051.
Alternatively, in other possible implementations, when the computing device 110 needs to perform memory expansion, the processor 112 may send a request to the storage device 105 to obtain the physical addresses on the storage device 105, so that the storage device 105 sends the physical addresses of the storage medium 1052 and the storage medium 1054 to the computing device 110 in response to the request. This embodiment does not limit the specific manner in which the computing device 110 obtains the physical addresses.
Storage medium 1052 and storage medium 1054 may be two different types of storage media, and particularly, the access latency of storage medium 1052 is smaller than that of storage medium 1054. For example, the storage medium 1052 is a DRAM, and the storage medium 1054 is a flash memory or SCM.
As an implementation example, the processor 112 and the storage device 105 may be connected by a serial bus, and the transmission protocol used by the serial bus may be, for example, any one of the Open Coherent Accelerator Processor Interface (OpenCAPI) protocol, the Compute Express Link (CXL) protocol, and the Gen-Z protocol, or another applicable transmission protocol, which this embodiment does not limit. In this way, the processor 112 in the computing device 110 can directly access the storage media in the storage device 105 based on the connection.
S302: processor 112 maps the physical address of storage medium 1052 to a first virtual memory address of computing device 110 and maps the physical address of storage medium 1054 to a second virtual memory address of computing device 110.
Typically, the computing device 110 may include a physical memory (such as the memory 113), and an address space of the physical memory may be managed by the memory management unit. However, in practical applications, the capacity of the physical memory is limited, and it may be difficult to support the actual usage requirement of the computing device 110, for example, the capacity of the physical memory is difficult to support the computing device 110 to store more business data, and the like, so that the efficiency of accessing and processing the business data by the computing device 110 is affected. Therefore, in this embodiment, the computing device 110 realizes the capacity expansion of the memory of the computing device 110 based on the storage resource on the storage device 105.
As an implementation example, a basic input output system (BIOS) in the computing device 110 may receive the two pieces of address space (i.e., the storage space indicated by the physical address of the storage medium 1052 and the storage space indicated by the physical address of the storage medium 1054) sent by the master 1051 and pass them to the memory management unit in the operating system of the computing device 110. The memory management unit then maps the physical address of the storage medium 1052 to obtain a first virtual memory address of the computing device 110 and generates a first mapping table, which records the correspondence between the first virtual memory address and the physical address of the storage medium 1052. Meanwhile, the memory management unit maps the physical address of the storage medium 1054 to obtain a second virtual memory address of the computing device 110 and generates a second mapping table, which records the correspondence between the second virtual memory address and the physical address of the storage medium 1054. Finally, the memory management unit notifies the processor 112 of the generated first mapping table and second mapping table, so that the processor 112 can access data on the storage media in the storage device 105 according to the first mapping table and/or the second mapping table. It should be noted that, in this embodiment, the first virtual memory address and the second virtual memory address each refer to an address space comprising multiple addresses.
In this embodiment, the first virtual memory address and the second virtual memory address may be independent of each other in the computing device 110, and the memory management unit may create two independent memory pools in the computing device 110 based on the two virtual memory addresses. Alternatively, the memory management unit may splice the first virtual memory address and the second virtual memory address to obtain a virtual memory address with a larger storage space. In this case, the memory management unit may create one memory pool in the computing device 110, and the memory capacity of the created memory pool is the sum of the memory capacities corresponding to the first virtual memory address and the second virtual memory address. In practical applications, the memory management unit may create two independent memory pools or a single memory pool. Of course, this is merely an exemplary illustration and is not intended to limit the manner in which the memory management unit creates memory pools.
As such, the memory capacity of the computing device 110 may be increased, and the increased memory capacity is the sum of the capacities corresponding to the first virtual memory address and the second virtual memory address. In this way, the computing device 110 may not only use the physical memory to store the service data, but also use the storage medium 1052 corresponding to the first virtual memory address and the storage medium 1054 corresponding to the second virtual memory address to store other service data, so as to increase the data volume of the service data stored in the memory of the computing device 110, and use the physical addresses of the plurality of storage media in the storage device 105 to implement the memory capacity expansion of the computing device 110, which may achieve a higher memory expansion effect. In practical applications, when the storage medium 1052 is a volatile storage medium and the storage medium 1054 is a persistent storage medium, the computing device 110 may further use the first virtual memory address for data caching and the second virtual memory address for data storage, so that when the memory of the computing device 110 is expanded, the cache space and the storage space may be simultaneously increased, for example, proportionally increased, and the like.
Meanwhile, since the computing device 110 can directly access the storage medium 1052 and/or the storage medium 1054 in the storage device 105 through its connection with the storage device 105, the speed at which the computing device 110 accesses the storage device 105 is close to the speed at which it accesses physical memory, so the data access performance of the computing device 110 is not degraded when its memory is expanded.
S303: the processor 112 stores hot data in the storage space indicated by the first virtual memory address or stores cold data in the storage space indicated by the second virtual memory address.
In this embodiment, the extended first virtual memory address and second virtual memory address may be used to store data of a specific heat level. For example, the processor 112 may use the storage space indicated by the first virtual memory address to store hot data; in this case, the processor 112 may use the storage space indicated by the second virtual memory address to store cold data with lower heat. Alternatively, the processor 112 may use both the storage space indicated by the first virtual memory address and the storage space indicated by the second virtual memory address to store cold data, and so on; this embodiment does not limit this. The heat level of data may be determined, for example, according to the access frequency of the data, a heat identifier, or the like, which this embodiment also does not limit. In this embodiment, hot data and cold data are relative concepts; in other words, it is sufficient that the heat of the data stored at the first virtual memory address is greater than the heat of the data stored at the second virtual memory address, rather than dividing data absolutely into hot data and cold data.
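As a sketch of how such a placement decision might look, assuming heat is measured purely by access count and using made-up thresholds (the application itself does not fix any thresholds):

```c
#include <stdint.h>

enum tier {
    TIER_PHYSICAL_MEMORY,     /* memory 113                                        */
    TIER_FIRST_VIRT_ADDRESS,  /* storage medium 1052, lower access latency         */
    TIER_SECOND_VIRT_ADDRESS  /* storage medium 1054, higher latency, larger space */
};

/* Hypothetical per-item statistics tracked by the processor. */
struct data_item {
    uint64_t access_count;    /* accesses observed in the current window */
};

/* Pick a tier by relative heat; both thresholds are assumed values. */
enum tier choose_tier(const struct data_item *d)
{
    if (d->access_count >= 1000)
        return TIER_PHYSICAL_MEMORY;      /* hottest data stays in physical memory    */
    if (d->access_count >= 10)
        return TIER_FIRST_VIRT_ADDRESS;   /* warm data: first virtual memory address  */
    return TIER_SECOND_VIRT_ADDRESS;      /* cold data: second virtual memory address */
}
```

With these assumed thresholds, an item accessed 500 times in the window would be placed at the first virtual memory address (storage medium 1052), while an item accessed twice would be placed at the second virtual memory address (storage medium 1054).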
Further, after expanding the memory of the computing device 110, the processor 112 may access the required data (hereinafter referred to as target data) from the storage medium corresponding to the physical memory or virtual memory address. Specifically, when the processor 112 needs to acquire data, the method may further include:
s304: the processor 112 obtains an access request for the target data.
Illustratively, one or more applications may be run on the computing device 110, and the applications may need to access business data already stored in the computing device 110 (or the storage device 105) during running, for example, the applications may need to query business data such as merchandise information during running. At this point, the application may generate an access request for the target data and send it to the processor 112 in the computing device 110.
Alternatively, the processor 112 may automatically generate an access request for the business data during the process of providing the business service for the application program, and perform a subsequent data access process based on the access request. The specific implementation process of the processor 112 to obtain the access request is not limited in this embodiment.
S305: the processor 112 responds to the access request to determine whether the physical memory stores the target data to be accessed.
In practical applications, since the efficiency of accessing data from the cache (cache) by the computing device 110 is generally high, the computing device 110 may first query whether the target data is stored in the cache. If so, the computing device 110 may directly read the target data from the cache; if not, the computing device 110 may continue to look up the target data from memory (including physical memory as well as virtual memory).
In this embodiment, when the processor 112 stores data in the memory in advance, the storage location of the data may be determined according to the heat level of the data. As an example, the processor 112 may preferentially store the hottest data in the physical memory and store relatively less hot data in the storage media (including the storage medium 1052 and/or the storage medium 1054) corresponding to the virtual memory addresses. Because the processor 112 reads data from the physical memory at a relatively high speed, when data in the memory needs to be accessed, the computing device 110 first checks whether the target data is stored in the physical memory, which holds the hottest data. If the target data is stored in the physical memory, the processor 112 can read it directly from the physical memory, so the data access efficiency of the computing device 110 is high. If the target data is not stored in the physical memory, the processor 112 may continue to perform step S306 to further search for the target data.
S306: when the target data is not stored in the physical memory, the processor 112 searches the target data from the storage medium 1052 according to the first virtual memory address.
When the processor 112 needs to read data, the processor 112 may preferentially check, according to the first virtual memory address, whether the target data to be accessed is stored in the storage medium 1052; if so, the processor 112 directly reads the target data from the storage medium 1052. If not, the computing device 110 may continue to perform step S307 to further search for the target data.
It should be noted that the above implementation is only an exemplary illustration; this embodiment does not limit the heat levels of the data stored in the storage medium 1052 and the storage medium 1054. For example, the storage medium 1052 and the storage medium 1054 may each store hot data, cold data, or the like.
S307: when the target data is not stored in the storage medium 1052, the processor 112 searches the target data from the storage medium 1054 according to the second virtual memory address.
As an implementation example, storage medium 1054 has a relatively large storage space. In this way, processor 112 may continue to seek data from storage medium 1054 having a larger storage space when the target data is not yet stored in storage medium 1052. Specifically, the processor 112 may access the storage medium 1054 according to the second virtual memory address corresponding to the storage medium 1054, so as to obtain the data stored in the storage medium 1054. In practical applications, if the storage medium 1054 still does not store the target data, the processor 112 may feed back a data search failure, etc.
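Steps S304 to S307 amount to probing the tiers in order of increasing access latency. The following sketch assumes hypothetical per-tier lookup helpers; it is not the actual driver or operating-system code.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-tier lookup helpers; each returns true and fills *out
 * when the target data identified by key is present in that tier. */
bool phys_mem_lookup(uint64_t key, void *out);      /* memory 113                            */
bool medium_1052_lookup(uint64_t key, void *out);   /* via the first virtual memory address  */
bool medium_1054_lookup(uint64_t key, void *out);   /* via the second virtual memory address */

/* Read path corresponding to steps S304..S307: probe tiers in latency order. */
bool read_target_data(uint64_t key, void *out)
{
    if (phys_mem_lookup(key, out))       /* S305: physical memory first          */
        return true;
    if (medium_1052_lookup(key, out))    /* S306: then the faster medium 1052    */
        return true;
    if (medium_1054_lookup(key, out))    /* S307: finally the slower medium 1054 */
        return true;
    return false;                        /* data lookup failed */
}
```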
It should be noted that in the embodiment shown in fig. 3, the storage device 105 includes only two storage media; in other possible embodiments, the storage device 105 may include three or more storage media, in which case the processor 112 may use the storage resources of those three or more storage media to implement memory expansion.
Specifically, taking the case where the storage device 105 includes three types of storage media as an example, as shown in fig. 4, the storage device 105 further includes a storage medium 1055. For example, the storage device 105 may include DRAM, SCM, and flash memory, where the storage medium 1052 is DRAM, the storage medium 1054 is SCM, and the storage medium 1055 is flash memory. The storage device 105 may then report the physical address of the storage medium 1052, the physical address of the storage medium 1054, and the physical address of the storage medium 1055 to the processor 112, so that the processor 112 can not only obtain the first virtual memory address by mapping the physical address of the storage medium 1052 and the second virtual memory address by mapping the physical address of the storage medium 1054, but also obtain a third virtual memory address of the computing device 110 by mapping the physical address of the storage medium 1055, and the processor 112 can directly access the storage space indicated by the third virtual memory address. Thus, on the basis of the embodiment shown in fig. 3, the memory of the computing device 110 can be further expanded, and the additional memory capacity is the capacity corresponding to the third virtual memory address.
It is noted that, in the embodiment described in fig. 3, the memory expansion of the computing device 110 is performed by using the storage resources in one storage device, but in other possible embodiments, the computing device 110 may perform the memory expansion based on the storage resources in a plurality of storage devices 105, so as to improve the memory expansion effect for the computing device 110.
In the following, an example of expanding the memory for the computing device 110 by using two storage devices is described. Referring to fig. 5 and fig. 6, fig. 5 shows a schematic structural diagram of another computing device, in the computing device 110 shown in fig. 5, the processor 112 is connected to the storage device 105 and the storage device 106, respectively, and the processor 112 can directly access the storage media in the storage device 105 and the storage device 106 based on the connection. FIG. 6 is a flow chart illustrating a method for expanding memory for computing device 110 using storage device 105 and storage device 106. Similar to the storage device 105, the storage device 106 shown in fig. 5 may include a master 1061, a storage medium 1062, a buffer 1063, and a storage medium 1064. The specific implementation and functions of each component in the storage device 106 are similar to those of the corresponding component in the storage device 105, and reference may be made to the description of the foregoing relevant parts, which are not described herein again. Based on the computing device shown in fig. 5, the method for expanding the memory shown in fig. 6 includes:
s601: the storage device 105 reports the physical address 1 of the storage medium 1052 and the physical address 2 of the storage medium 1054 to the processor 112; and the storage device 106 reports the physical address 3 of the storage medium 1062 and the physical address 4 of the storage medium 1064 to the processor 112.
In this embodiment, for a specific implementation process in which the storage device 105 and the storage device 106 respectively report the physical address to the processor 112, reference may be made to the description of relevant parts in the foregoing embodiments, which is not described herein again.
S602: the processor 112 maps the physical addresses reported by the storage device 105 and the storage device 106 to corresponding virtual memory addresses.
In this embodiment, the processor 112 may map the physical addresses respectively reported by the multiple storage devices into the virtual memory address of the computing device 110, so as to implement memory expansion for the computing device 110.
In an implementation example, the processor 112 may map the physical addresses on different storage devices to separate virtual memory addresses, and the virtual memory addresses mapped from different physical addresses are independent of each other in the processor 112. In this case, the processor 112 may manage the mapped virtual memory addresses by creating multiple memory pools. Accordingly, the processor 112 may generate multiple mapping tables to record the virtual memory addresses corresponding to the multiple physical addresses. For the specific manner in which the processor 112 maps each physical address to a separate virtual memory address, reference may be made to the description of the relevant parts of the foregoing embodiments, which is not repeated here.
In yet another implementation example, the processor 112 may map the physical addresses on multiple storage devices to a contiguous virtual memory address space. For example, assuming that the physical address space of the storage medium 1052 in the storage device 105 is 64M (megabytes), that of the storage medium 1054 is 1024M, that of the storage medium 1062 in the storage device 106 is 64M, and that of the storage medium 1064 is 1024M, the processor 112 may map a contiguous virtual memory address space of 2176M (i.e., 64M + 1024M + 64M + 1024M) based on these four physical addresses. Specifically, the processor 112 may map the 64M physical address 1 corresponding to the storage medium 1052 to a 64M virtual memory address 1, obtaining its first address a and memory space length (i.e., 64M). Then, when mapping the physical address 2 of the storage medium 1054 to virtual memory address 2, the processor 112 may calculate the last address b of virtual memory address 1 from its first address a and memory space length, and add 1 to the last address b to obtain a new address (b + 1); this new address is the first address of virtual memory address 2, whose length is the length of physical address 2, so the last address c of virtual memory address 2 can be calculated from the first address (b + 1) and the length of virtual memory address 2. Similarly, the processor 112 adds 1 to the last address c to obtain the first address (c + 1) of virtual memory address 3 corresponding to physical address 3, and determines virtual memory address 3 based on the first address (c + 1). By analogy, the processor 112 may in turn determine the first address (d + 1) and the last address e of virtual memory address 4 corresponding to physical address 4, so that, based on the storage resources on the storage device 105 and the storage device 106, a contiguous virtual memory address space [a, e] with first address a and last address e is obtained by mapping. Within the contiguous virtual memory addresses [a, e], the range [a, b] is virtual memory address 1 corresponding to physical address 1, [b+1, c] is virtual memory address 2 corresponding to physical address 2, [c+1, d] is virtual memory address 3 corresponding to physical address 3, and [d+1, e] is virtual memory address 4 corresponding to physical address 4. The processor 112 may then create one memory pool to manage the mapped contiguous virtual memory addresses and generate a corresponding mapping table for them, recording the virtual memory addresses corresponding to the multiple physical addresses.
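As a check on the address arithmetic above, the following sketch concatenates the four assumed sizes (64M, 1024M, 64M, 1024M) into one contiguous 2176M virtual range and prints the first and last address of each sub-range; the base address a is an arbitrary assumption.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t MIB = 1024ull * 1024ull;
    /* Assumed sizes of physical addresses 1..4 from the example above. */
    const uint64_t sizes[4] = { 64 * MIB, 1024 * MIB, 64 * MIB, 1024 * MIB };

    uint64_t a = 0x200000000ull;   /* hypothetical first address a of the mapped range */
    uint64_t first = a;

    for (int i = 0; i < 4; i++) {
        uint64_t last = first + sizes[i] - 1;
        printf("virtual memory address %d: [0x%llx, 0x%llx] (%llu M)\n",
               i + 1, (unsigned long long)first, (unsigned long long)last,
               (unsigned long long)(sizes[i] / MIB));
        first = last + 1;          /* the next range starts at the last address + 1 */
    }
    printf("total contiguous range: %llu M\n",
           (unsigned long long)((first - a) / MIB));   /* prints 2176 */
    return 0;
}
```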
In practical application scenarios, the storage medium 1052 and the storage medium 1054 in the storage device 105 may belong to different types of storage media; for example, the storage medium 1052 is a volatile storage medium (data is lost after power-off), and the storage medium 1054 is a non-volatile storage medium (data is retained after power-off). Thus, in a further possible embodiment, the processor 112 may create a corresponding number of memory pools based on the types of the storage media, with each memory pool used to manage the contiguous virtual memory addresses corresponding to multiple storage media of the same type.
For example, assuming that the storage medium 1052 and the storage medium 1062 are both volatile storage media (for example, both are DRAMs) and the storage medium 1054 and the storage medium 1064 are both non-volatile storage media (for example, both are flash memories), then in the process of obtaining virtual memory addresses by mapping, the processor 112 may obtain two virtual memory addresses based on the types of the storage media, namely virtual memory address M and virtual memory address N, as shown in fig. 7. The virtual memory address M is obtained by address mapping based on the volatile storage media, and the virtual memory address N is obtained by address mapping based on the non-volatile storage media. In a specific implementation, the processor 112 may map physical address 1 to a virtual memory address M' whose first address is m1, whose current length is the length of physical address 1 (i.e., 64M), and whose last address is m2. Then, since the storage medium 1054 and the storage medium 1052 belong to different types of storage media, the processor 112 may map physical address 2 to a virtual memory address N' whose first address is n1, whose current length is the length of physical address 2 (i.e., 1024M), and whose last address is n2. Next, since the storage medium 1062 is the same type of storage medium as the storage medium 1052, the processor 112 may add 1 to the current last address m2 of virtual memory address M' and use the resulting new address (m2 + 1) as the first address of virtual memory address 3 corresponding to physical address 3, completing the address mapping for physical address 3; this mapping yields virtual memory address M, whose first address is m1 and whose length is 128M (i.e., 64M + 64M). In this way, the processor 112 maps the two separate physical addresses 1 and 3 to one contiguous virtual memory address M. Similarly, when performing address mapping for physical address 4, the processor may add 1 to the current last address n2 of virtual memory address N' and use the resulting new address (n2 + 1) as the first address of virtual memory address 4 corresponding to physical address 4, completing the address mapping for physical address 4; this mapping yields virtual memory address N, whose first address is n1 and whose length is 2048M (i.e., 1024M + 1024M). In this way, the processor 112 maps the two separate physical addresses 2 and 4 to one contiguous virtual memory address N.
Further, the processor 112 may add an attribute identifier to different virtual memory addresses mapped by different types of storage media, so as to identify storage characteristics of the storage media implementing the virtual memory addresses. Illustratively, the storage characteristics of the storage medium may include, for example, volatility, non-volatility, and the like, or may be divided into high-performance read-write, low-performance read-write, and the like according to the data read-write performance of the storage medium. Still taking the virtual memory address M and the virtual memory address N as examples, the processor 112 may add a tag of a volatile storage medium to the mapped virtual memory address M, for indicating that the virtual memory address M has a volatile storage characteristic; meanwhile, the processor 112 adds a tag of the nonvolatile storage medium to the mapped virtual memory address N, which indicates that the virtual memory address N has a nonvolatile storage characteristic (data can be persistently stored). It should be understood that when the processor 112 is connected to a greater number of storage devices, the processor 112 may continue to perform address mapping on physical addresses on other storage devices starting from the last address of the virtual memory address M and/or the last address of the virtual memory address N based on the above manner of expanding the memory, which is not described in detail in this embodiment.
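One way to picture the attribute identifier is as a flag carried by each mapped range, so that an allocation can request a volatile (cache-like) or non-volatile (persistent) range. The enum and structure below are illustrative assumptions, not the application's data structures.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical attribute identifier attached to each mapped virtual range. */
enum mem_attr {
    MEM_ATTR_VOLATILE,      /* e.g. virtual memory address M (DRAM-backed)      */
    MEM_ATTR_NONVOLATILE    /* e.g. virtual memory address N (flash/SCM-backed) */
};

struct vmem_range {
    uint64_t      first_addr;   /* m1 or n1 in the example above                */
    uint64_t      length;
    enum mem_attr attr;         /* storage characteristic of the backing media  */
};

/* Return a range matching the requested storage characteristic, if any. */
const struct vmem_range *pick_range(const struct vmem_range *ranges, int n,
                                    enum mem_attr wanted)
{
    for (int i = 0; i < n; i++)
        if (ranges[i].attr == wanted)
            return &ranges[i];
    return NULL;   /* no range with the requested attribute */
}
```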
Further, when the processor 112 accesses a new storage device, the processor 112 may further add memory to the processor 112 based on the storage resources on the newly accessed storage device, for example, continuing to perform address mapping from the last address of the virtual memory address M and/or the virtual memory address N, etc., with reference to the similar process described above. In this manner, dynamic expansion of memory for the processor 112 may be achieved.
When the processor 112 loses connection with the accessed storage device, for example, a communication link between the processor 112 and the storage device is failed or disconnected, the processor 112 may release mapping of the physical address of the storage medium in the storage device in the virtual memory address, for example, may delete a mapping relationship between the virtual memory and the physical address on the storage device from the generated mapping table, so as to implement dynamic reduction of the memory.
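Dynamic reduction can be sketched as invalidating the table entries backed by the lost device, using a mapping-table layout similar to the earlier sketch but extended with a device id and a valid flag (all field names are assumptions).

```c
#include <stdint.h>

/* Same idea as the earlier mapping-table sketch, extended with the backing
 * device id and a valid flag. */
struct ext_map_entry {
    uint64_t virt_base;
    uint64_t phys_base;
    uint64_t length;
    int      device_id;   /* which storage device backs this range, e.g. 105 or 106 */
    int      valid;       /* 1 while the backing device is reachable                */
};

/* Invalidate every mapping backed by a storage device whose connection was lost,
 * releasing the corresponding virtual memory from the pool. */
void release_device_mappings(struct ext_map_entry *table, int n, int lost_device_id)
{
    for (int i = 0; i < n; i++)
        if (table[i].device_id == lost_device_id)
            table[i].valid = 0;   /* mapping relationship removed from use */
}
```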
In an actual application scenario, the multiple storage devices used for expanding the memory of the processor 112 may form a storage array, such as a Redundant Array of Independent Disks (RAID), and the like. Then, in an implementation example, when the plurality of storage devices store data, the data may be stored by RAID 0 or RAID 5. Of course, other ways of storing data and the like may be adopted, and this embodiment does not limit this.
S603: the processor 112 obtains an access request for the target data.
S604: the processor 112 responds to the access request to determine whether the physical memory stores the target data to be accessed.
S605: when the target data is not stored in the physical memory, the processor 112 determines whether the storage device 105 stores the target data according to the virtual memory address obtained by mapping.
In a specific implementation, the processor 112 may first search whether the storage medium 1052 in the storage device 105 stores the target data according to the virtual memory address, and if so, the processor 112 may directly read the target data stored in the storage medium 1052. If not, the processor 112 may continue to search the storage medium 1054 in the storage device 105 for the target data. If so, the processor 112 may directly read the target data stored in the storage medium 1054. If not, the processor 112 may determine that the target data is not stored in the storage device 105.
S606: when the storage device 105 does not store the target data, the processor 112 determines whether the storage device 106 stores the target data according to the mapped virtual memory address.
In a specific implementation, the processor 112 may first search, according to the virtual memory address, whether the storage medium 1062 in the storage device 106 stores the target data, and if so, the processor 112 may directly read the target data stored in the storage medium 1062. If not, the processor 112 may continue to search whether the storage medium 1064 of the storage device 106 stores the target data. If so, the processor 112 may directly read the target data stored in the storage medium 1064. If the target data is still not found on the storage medium 1064, the processor 112 may feed back that the data lookup failed, or continue the data lookup from other storage devices that are used to expand the memory of the computing device 110.
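The lookup order of steps S603 to S606 can be summarized as a walk over storage tiers: physical memory first, then the faster and slower media of the storage device 105, then those of the storage device 106. The sketch below illustrates that order with toy in-memory tiers; find_in_tier() and the tier contents are stand-ins, not the real lookup routine.

```c
/*
 * Minimal sketch of the lookup order in steps S603 to S606: physical memory
 * first, then the faster and slower media of storage device 105, then those
 * of storage device 106. The tier list and find_in_tier() are illustrative
 * stand-ins, not the real lookup routine.
 */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *tier;     /* e.g. "physical memory", "storage medium 1052" */
    const char *keys[4];  /* toy content: keys stored in this tier         */
} tier_t;

static int find_in_tier(const tier_t *t, const char *key)
{
    for (int i = 0; i < 4 && t->keys[i]; i++)
        if (strcmp(t->keys[i], key) == 0)
            return 1;
    return 0;
}

int main(void)
{
    tier_t tiers[] = {
        { "physical memory",     { "a" } },
        { "storage medium 1052", { "b" } },   /* device 105, lower latency  */
        { "storage medium 1054", { "c" } },   /* device 105, higher latency */
        { "storage medium 1062", { NULL } },  /* device 106, lower latency  */
        { "storage medium 1064", { "d" } },   /* device 106, higher latency */
    };
    const char *target = "d";

    for (unsigned i = 0; i < sizeof(tiers) / sizeof(tiers[0]); i++) {
        if (find_in_tier(&tiers[i], target)) {
            printf("target data found in %s\n", tiers[i].tier);
            return 0;
        }
    }
    printf("data lookup failed\n");  /* or continue with other storage devices */
    return 0;
}
```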
In this embodiment, the specific implementation process of step S603 to step S606 is similar to the implementation of the related steps in the foregoing embodiment, and specific reference may be made to the description of the related parts in the foregoing embodiment, which is not described herein again.
In a further possible embodiment, the processor 112 may construct a large-capacity Logical Unit Number (LUN) based on the storage resources on the multiple storage devices. Moreover, the processor 112 may prefetch data stored in the virtual memory address N to the virtual memory address M, so as to speed up data reading, reduce data read latency, and thereby improve the performance of the LUN.
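As a rough illustration of the prefetch idea, the sketch below copies a block from a buffer standing in for the slower range (virtual memory address N) into a buffer standing in for the faster range (virtual memory address M) before it is read; the block size and the prefetch trigger are assumptions made only for this example.

```c
/*
 * Minimal sketch of the prefetch idea: data that lives in the slower,
 * non-volatile range (standing in for virtual memory address N) is copied
 * ahead of time into the faster, volatile range (standing in for virtual
 * memory address M) so that a later read hits the fast tier. Buffer sizes
 * and the prefetch trigger are assumptions for illustration only.
 */
#include <stdio.h>
#include <string.h>

#define BLOCK 4096

static char slow_tier[4 * BLOCK];   /* stands in for virtual memory address N */
static char fast_tier[1 * BLOCK];   /* stands in for virtual memory address M */

/* Copy one block from the slow tier into the fast tier before it is read. */
static void prefetch_block(size_t slow_block_idx)
{
    memcpy(fast_tier, slow_tier + slow_block_idx * BLOCK, BLOCK);
}

int main(void)
{
    strcpy(slow_tier + 2 * BLOCK, "cold data that is about to become hot");
    prefetch_block(2);                                /* move it to the fast tier */
    printf("read from fast tier: %s\n", fast_tier);   /* low-latency access       */
    return 0;
}
```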
In this embodiment, the storage device 106 is described, by way of example, as also including two different storage media. In other possible embodiments, after the storage medium 1052 and the storage medium 1054 in the storage device 105 are used to expand the memory of the computing device 110, only the storage medium 1062 or only the storage medium 1064 in the storage device 106 may be used to expand the memory of the computing device 110, which is not limited in this embodiment.
The method for expanding a memory of the present application is described above with reference to fig. 1 to 7, and the apparatus and device of the present application are described below with reference to the accompanying drawings.
An embodiment of the present application further provides an apparatus for expanding a memory. Fig. 8 shows a schematic structural diagram of the apparatus for expanding a memory in this embodiment. The apparatus 800 shown in fig. 8 may be applied to a processor in a computing device, such as the processor 112 in the foregoing embodiments. The computing device further includes a first storage device, the first storage device includes a first storage medium and a second storage medium, and an access latency of the first storage medium is smaller than an access latency of the second storage medium. The apparatus 800 may include:
an obtaining module 801, configured to obtain a physical address of the first storage medium; acquiring a physical address of the second storage medium;
a mapping module 802, configured to map a physical address of the first storage medium to a first virtual memory address, where the first virtual memory address is directly accessible to the processor; mapping a physical address of the second storage medium to a second virtual memory address, the second virtual memory address being directly accessible by the processor;
a storage module 803, configured to store hot data in the storage space indicated by the first virtual memory address, or store cold data in the storage space indicated by the second virtual memory address.
In one possible implementation, the first storage device further includes a third storage medium;
the obtaining module 801 is further configured to obtain a physical address of the third storage medium;
the mapping module 802 is further configured to map the physical address of the third storage medium to a third virtual memory address, where the third virtual memory address is directly accessible to the processor.
In one possible implementation, the computing device further includes a second storage device;
the obtaining module 801 is further configured to obtain a physical address of at least one storage medium in the second storage device;
the mapping module 802 is further configured to map a physical address of the at least one storage medium to a virtual memory address directly accessible to the processor.
In a possible implementation, the computing device further includes a physical memory, and the heat of the data stored in the physical memory is higher than the heat of the data stored in the storage space indicated by the first virtual memory address.
In a possible implementation manner, the obtaining module 801 is further configured to obtain an access request for target data;
the apparatus 800 further comprises:
a searching module 804, configured to search the target data from the first storage medium according to the first virtual memory address when the target data is not included in the physical memory.
In a possible implementation manner, the first virtual memory address and the second virtual memory address have different attribute identifiers, and the attribute identifiers are used for indicating the storage characteristics of the storage medium.
In one possible implementation, the first storage medium comprises a dynamic random access memory DRAM and the second storage medium comprises a flash memory.
The apparatus of this embodiment of the present application may correspondingly perform the methods described in the embodiments of the present application. The foregoing and other operations and/or functions of the modules in the memory expansion apparatus 800 are respectively used to implement the corresponding procedures of the methods in fig. 3 and fig. 6. For the functions of these modules, reference may be made to the descriptions of the method embodiments shown in fig. 3 and fig. 6, and details are not repeated here. The functions of the modules in the memory expansion apparatus 800 may be performed by the processor 112 shown in fig. 3 and fig. 6.
Embodiments of the present application also provide a computer-readable medium having stored therein instructions, which when executed on a computer, cause the computer to perform the method of the above aspects.
Embodiments of the present application also provide a computer program product, which when run on a computer, causes the computer to perform the method of the above aspects.
It should be noted that the above-described embodiments are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiments of the apparatus provided in the present application, the connection relationship between the modules indicates that there is a communication connection therebetween, and may be implemented as one or more communication buses or signal lines.
The terms "first," "second," and the like in the description and in the claims of the present application and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances and are merely descriptive of the manner in which objects of the same nature are distinguished in the embodiments of the application.
As will be appreciated by a person of ordinary skill in the art, the foregoing computer-readable storage medium includes various non-transitory machine-readable media that can store program code, such as a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a RAM, an SSD, or a non-volatile memory.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same.

Claims (17)

1. A method for extending a memory, the method being applied to a computing device, the computing device including a processor and a first storage device, the first storage device including a first storage medium and a second storage medium, an access latency of the first storage medium being smaller than an access latency of the second storage medium, the method comprising:
the processor acquires a physical address of the first storage medium;
the processor mapping a physical address of the first storage medium to a first virtual memory address, the first virtual memory address being directly accessible by the processor;
the processor acquires a physical address of the second storage medium;
the processor mapping the physical address of the second storage medium to a second virtual memory address, the second virtual memory address being directly accessible by the processor;
the processor stores hot data in the storage space indicated by the first virtual memory address or stores cold data in the storage space indicated by the second virtual memory address.
2. The method of claim 1, wherein the first storage device further comprises a third storage medium, the method further comprising:
the processor acquires a physical address of the third storage medium;
the processor maps the physical address of the third storage medium to a third virtual memory address, the third virtual memory address being directly accessible by the processor.
3. The method of claim 1 or 2, wherein the computing device further comprises a second storage device, the method further comprising:
the processor acquires a physical address of at least one storage medium in the second storage device;
the processor maps physical addresses of the at least one storage medium to virtual memory addresses directly accessible to the processor.
4. The method of claim 3, wherein the computing device further comprises a physical memory, and wherein a heat of data stored in the physical memory is higher than a heat of data stored in the storage space indicated by the first virtual memory address.
5. The method of claim 4, further comprising:
the processor acquires an access request aiming at target data;
when the physical memory does not contain the target data, the processor searches the target data from the first storage medium according to the first virtual memory address.
6. The method according to any one of claims 1 to 5, wherein the first virtual memory address and the second virtual memory address have different attribute identifiers, and the attribute identifiers are used for indicating storage characteristics of the storage medium.
7. The method of any of claims 1 to 6, wherein the first storage medium comprises Dynamic Random Access Memory (DRAM) and the second storage medium comprises flash memory.
8. An apparatus for extending a memory, the apparatus being applied to a processor in a computing device, the computing device further including a first storage device, the first storage device including a first storage medium and a second storage medium, an access latency of the first storage medium being smaller than an access latency of the second storage medium, the apparatus comprising:
an obtaining module, configured to obtain a physical address of the first storage medium; acquiring a physical address of the second storage medium;
a mapping module to map a physical address of the first storage medium to a first virtual memory address, the first virtual memory address being directly accessible by the processor; mapping a physical address of the second storage medium to a second virtual memory address, the second virtual memory address being directly accessible by the processor;
and the storage module is used for storing hot data in the storage space indicated by the first virtual memory address or storing cold data in the storage space indicated by the second virtual memory address.
9. The apparatus of claim 8, wherein the first storage device further comprises a third storage medium;
the obtaining module is further configured to obtain a physical address of the third storage medium;
the mapping module is further configured to map the physical address of the third storage medium to a third virtual memory address, where the third virtual memory address is directly accessible to the processor.
10. The apparatus of claim 8 or 9, wherein the computing device further comprises a second storage device;
the obtaining module is further configured to obtain a physical address of at least one storage medium in the second storage device;
the mapping module is further configured to map a physical address of the at least one storage medium to a virtual memory address directly accessible to the processor.
11. The apparatus according to claim 10, wherein the computing device further comprises a physical memory, and wherein a heat of data stored in the physical memory is higher than a heat of data stored in the storage space indicated by the first virtual memory address.
12. The apparatus of claim 11, wherein the obtaining module is further configured to obtain an access request for target data;
the device further comprises:
and the searching module is used for searching the target data from the first storage medium according to the first virtual memory address when the target data is not included in the physical memory.
13. The apparatus according to any of claims 8 to 12, wherein the first virtual memory address and the second virtual memory address have different attribute identifiers, and wherein the attribute identifiers are used to indicate storage characteristics of the storage medium.
14. The apparatus of any of claims 8 to 13, wherein the first storage medium comprises Dynamic Random Access Memory (DRAM) and the second storage medium comprises flash memory.
15. A computing device, comprising a processor and a memory;
the processor is to execute instructions stored in the memory to cause the computing device to perform the method of any of claims 1 to 7.
16. A computer-readable storage medium comprising instructions that, when executed on a computing device, cause the computing device to perform the method of any of claims 1 to 7.
17. A computer program product which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 7.
CN202111266429.1A 2021-09-11 2021-10-28 Method, device and related equipment for expanding memory Pending CN115794669A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/091824 WO2023035646A1 (en) 2021-09-11 2022-05-10 Method and apparatus for expanding memory, and related device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2021110649259 2021-09-11
CN202111064925 2021-09-11

Publications (1)

Publication Number Publication Date
CN115794669A true CN115794669A (en) 2023-03-14

Family

ID=85473619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111266429.1A Pending CN115794669A (en) 2021-09-11 2021-10-28 Method, device and related equipment for expanding memory

Country Status (2)

Country Link
CN (1) CN115794669A (en)
WO (1) WO2023035646A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116466879A (en) * 2023-03-17 2023-07-21 北京超弦存储器研究院 CXL memory module, memory data replacement method and computer system
CN116483738A (en) * 2023-06-20 2023-07-25 苏州浪潮智能科技有限公司 Data access method and device, storage medium and electronic device
CN116644006A (en) * 2023-07-27 2023-08-25 浪潮电子信息产业股份有限公司 Memory page management method, system, device, equipment and computer medium
CN117971135A (en) * 2024-03-29 2024-05-03 苏州元脑智能科技有限公司 Storage device access method and device, storage medium and electronic device
WO2024193096A1 (en) * 2023-03-21 2024-09-26 超聚变数字技术有限公司 Data migration method and computing device
WO2024198546A1 (en) * 2023-03-31 2024-10-03 华为技术有限公司 Memory controller, memory access method, storage module and electronic device
US12235766B2 (en) 2023-03-17 2025-02-25 Beijing Superstring Academy Of Memory Technology CXL memory module, memory data swap method and computer system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118573674B (en) * 2024-08-01 2024-10-22 恒生电子股份有限公司 Data access processing method, device, equipment, storage medium and program product

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112714906A (en) * 2018-09-28 2021-04-27 英特尔公司 Method and apparatus to use DRAM as a cache for slow byte-addressable memory for efficient cloud applications
US20200327049A1 (en) * 2019-04-11 2020-10-15 Alibaba Group Holding Limited Method and system for memory expansion with low overhead latency
CN112764925A (en) * 2021-01-18 2021-05-07 苏州浪潮智能科技有限公司 Data storage method, device, equipment and storage medium based on virtual memory
CN112860381B (en) * 2021-03-09 2022-04-26 上海交通大学 Method and system for virtual machine memory expansion based on Shenwei processor

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116466879A (en) * 2023-03-17 2023-07-21 北京超弦存储器研究院 CXL memory module, memory data replacement method and computer system
CN116466879B (en) * 2023-03-17 2023-12-29 北京超弦存储器研究院 CXL memory module, memory data replacement method and computer system
US12235766B2 (en) 2023-03-17 2025-02-25 Beijing Superstring Academy Of Memory Technology CXL memory module, memory data swap method and computer system
WO2024193096A1 (en) * 2023-03-21 2024-09-26 超聚变数字技术有限公司 Data migration method and computing device
WO2024198546A1 (en) * 2023-03-31 2024-10-03 华为技术有限公司 Memory controller, memory access method, storage module and electronic device
CN116483738A (en) * 2023-06-20 2023-07-25 苏州浪潮智能科技有限公司 Data access method and device, storage medium and electronic device
CN116483738B (en) * 2023-06-20 2023-09-05 苏州浪潮智能科技有限公司 Data access method and device, storage medium and electronic device
CN116644006A (en) * 2023-07-27 2023-08-25 浪潮电子信息产业股份有限公司 Memory page management method, system, device, equipment and computer medium
CN116644006B (en) * 2023-07-27 2023-11-03 浪潮电子信息产业股份有限公司 A memory page management method, system, device, equipment and computer medium
CN117971135A (en) * 2024-03-29 2024-05-03 苏州元脑智能科技有限公司 Storage device access method and device, storage medium and electronic device

Also Published As

Publication number Publication date
WO2023035646A1 (en) 2023-03-16

Similar Documents

Publication Publication Date Title
WO2023035646A1 (en) Method and apparatus for expanding memory, and related device
US20200057729A1 (en) Memory access method and computer system
US12216928B2 (en) Fragment management method and fragment management apparatus
CN114860163B (en) Storage system, memory management method and management node
KR101944876B1 (en) File access method and apparatus and storage device
US20160085585A1 (en) Memory System, Method for Processing Memory Access Request and Computer System
JP6651444B2 (en) Hybrid storage
WO2019085769A1 (en) Tiered data storage and tiered query method and apparatus
CN109800185B (en) Data caching method in data storage system
CN112632069B (en) Hash table data storage management method, device, medium and electronic equipment
CN104111804A (en) Distributed file system
EP3974974A1 (en) Virtualization method and system for persistent memory
CN115904212A (en) Data processing method and device, processor and hybrid memory system
US11586353B2 (en) Optimized access to high-speed storage device
CN113407120A (en) Mapping table management method and device based on HMB and computer equipment
CN116665727B (en) Write I/O aggregation method, apparatus, storage device and storage medium
CN118152434A (en) Data management method and computing device
EP4307129A1 (en) Method for writing data into solid-state hard disk
CN100405777C (en) A caching method based on target memory device in Ethernet storage area network
CN104424124A (en) Memory device, electronic equipment and method for controlling memory device
CN111190543A (en) Storage method and system for sharing NVDIMM storage resources among threads
CN118779280B (en) Method for reducing bus load, CXL module, processing system and processor chip
US12038852B2 (en) Partial logical-to-physical (L2P) address translation table for multiple namespaces
CN113076267B (en) Address conversion method and data storage device based on hot spot aggregation
WO2024169158A1 (en) Storage system, data access method, apparatus, and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination