CN115599544B - Memory management method, device, computer equipment and storage medium - Google Patents
Memory management method, device, computer equipment and storage medium
- Publication number
- CN115599544B (Application CN202211248341.1A)
- Authority
- CN
- China
- Prior art keywords
- memory
- block
- metadata
- segments
- allocated
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/06—Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
- G06F12/0646—Configuration or reconfiguration
- G06F12/0653—Configuration or reconfiguration with centralised address assignment
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Memory System (AREA)
Abstract
The embodiments of this specification provide a memory management method, an apparatus, a computer device, and a storage medium. The memory comprises a plurality of memory blocks, each divided into a plurality of memory segments, and stores total metadata together with block metadata for each allocated memory block. The block metadata comprises allocation status information of each memory segment in the allocated memory block; the total metadata comprises quantity information of the unallocated memory segments in each allocated memory block. The method comprises: in response to a memory adjustment request, determining, according to the total metadata and the block metadata, a target memory segment whose state is to be adjusted; adjusting the allocation state of the target memory segment based on the memory adjustment type corresponding to the request; and, after adjusting the allocation state of the target memory segment, updating the allocation status information in the block metadata of the target memory block to which the target memory segment belongs, and updating, in the total metadata, the quantity information of unallocated memory segments in that target memory block.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular to a memory management method and apparatus, a computer device, and a storage medium.
Background
In a conventional memory management scheme, memory is divided into a plurality of memory pages, and metadata (e.g., struct page) must be created for each page in order to manage it. Because memory pages are usually small (e.g., 4 KB) while the metadata for each page may require about 64 bytes, storing this metadata in a large-memory scenario occupies a substantial amount of memory.
In other schemes, to avoid this metadata overhead, the memory is managed at a larger granularity, for example divided into memory blocks of 2 MB. However, with a larger management granularity, a memory block may not be completely filled with data, which wastes storage space inside the block. In addition, scenarios that need small pieces of memory, such as storing compressed memory data, also require managing memory blocks at a smaller granularity. How to avoid such memory waste under a large management granularity is therefore a technical problem to be solved.
Disclosure of Invention
In order to overcome the problems in the related art, the embodiments of the present disclosure provide a memory management method, apparatus, and computer device.
According to a first aspect of embodiments of the present disclosure, a memory management method is provided, where the memory includes a plurality of memory blocks, each of the memory blocks is divided into a plurality of memory segments, and the memory is configured to store total metadata and block metadata corresponding to each allocated memory block;
the block metadata comprises allocation status information of each memory segment in the allocated memory block;
the total metadata comprises the quantity information of unallocated memory segments in each allocated memory block;
The method comprises the following steps:
responding to a memory adjustment request, and determining a target memory segment of which the state needs to be adjusted according to the total metadata and the block metadata;
Adjusting the allocation state of the target memory segment based on the memory adjustment type corresponding to the memory adjustment request;
After the allocation state of the target memory segment is adjusted, updating allocation state information of block metadata of a target memory block to which the target memory segment belongs, and updating quantity information of unallocated memory segments in the target memory block in the total metadata.
According to a second aspect of embodiments of the present disclosure, there is provided a memory management device, where the memory includes a plurality of memory blocks, and each of the memory blocks is divided into a plurality of memory segments;
the memory is used for storing total metadata and block metadata corresponding to each allocated memory block;
the block metadata comprises allocation status information of each memory segment in the allocated memory block;
the total metadata comprises the quantity information of unallocated memory segments in each allocated memory block;
The device comprises:
the determining module is used for responding to the memory adjusting request and determining a target memory segment of which the allocation state needs to be adjusted according to the total metadata and the block metadata;
the adjusting module is used for adjusting the allocation state of the target memory segment based on the memory adjustment type corresponding to the memory adjustment request;
And the updating module is used for, after the allocation state of the target memory segment is adjusted, updating the allocation status information of the block metadata of the target memory block to which the target memory segment belongs, and updating the quantity information of the unallocated memory segments in the target memory block in the total metadata.
According to a third aspect of embodiments of the present specification, there is provided a computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the steps of the method embodiments of the first aspect are implemented when the computer program is executed by the processor. According to a further aspect of embodiments of the present specification, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method embodiments of the first aspect.
According to a fourth aspect of embodiments of the present specification, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method embodiments of the first aspect described above.
The technical scheme provided by the embodiment of the specification can comprise the following beneficial effects:
In the embodiments of the present disclosure, the memory comprises a plurality of memory blocks, and each memory block is divided into a plurality of memory segments. The memory blocks can therefore be designed with a larger granularity, reducing the space occupied by their block metadata. Two layers of metadata are designed: block metadata for each allocated memory block, and total metadata. The block metadata includes the allocation status information of each memory segment in an allocated memory block, used to determine which memory segments in that block are available for allocation; the total metadata includes the quantity information of unallocated memory segments in each allocated memory block, used to determine which memory blocks in the memory are allocable. After the allocation state of a target memory segment is adjusted, the allocation status information in the block metadata of the target memory block to which that segment belongs is updated, as is the quantity information of the target memory block in the total metadata. Memory is thus allocated at segment granularity, reducing the waste of leftover space in large-granularity memory blocks and enabling finer-grained management of the memory blocks.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the specification and together with the description, serve to explain the principles of the specification.
Fig. 1A and 1B are schematic diagrams of a memory architecture according to an exemplary embodiment of the present disclosure.
Fig. 2A is a schematic diagram of a memory block division of memory segments according to an exemplary embodiment of the present disclosure.
FIG. 2B is a schematic diagram of a singly linked list as illustrated in the present specification according to an exemplary embodiment.
FIG. 2C is a schematic diagram of a doubly linked list as illustrated in the present specification according to an exemplary embodiment.
FIG. 2D is a schematic diagram of two doubly linked lists as illustrated in this specification according to an exemplary embodiment.
FIG. 2E is a diagram of a linked list array as illustrated in the present specification according to an exemplary embodiment.
Fig. 2F is a schematic diagram of overall metadata shown in the present specification according to an exemplary embodiment.
Fig. 2G to 2J are schematic diagrams of memory management according to an exemplary embodiment of the present disclosure.
Fig. 3 is a block diagram of a computer device in which a memory management apparatus is shown according to an exemplary embodiment of the present disclosure.
Fig. 4 is a block diagram of a memory management device according to an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present specification. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present description as detailed in the accompanying claims.
The terminology used in the description presented herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this specification to describe various information, the information should not be limited by these terms, which are only used to distinguish one type of information from another. For example, without departing from the scope of the present description, first information may also be referred to as second information, and similarly second information may be referred to as first information. The term "if" as used herein may be interpreted, depending on the context, as "upon", "when", or "in response to determining".
In the field of memory management, memory metadata refers to data recording state information for each management unit of the memory (which may be called a memory page or a memory block) in order to facilitate memory management. Depending on specific management needs, various types of status information can be recorded in the metadata. It will be appreciated that, during operation of the computer device, the metadata itself is stored in the memory. As described in the background, a small memory-management granularity can cause a large amount of metadata to occupy memory, for example leading to high memory-management overhead when running virtual machines. FIG. 1A is a schematic diagram illustrating virtual machines running on a host machine (Host) according to an exemplary embodiment.
The host of the present embodiment refers to an entity computer for installing virtual machine software, and the host is a concept with respect to a virtual machine.
The virtual machine (VM) of this embodiment refers to a complete computer system, simulated by software with complete hardware system functions, running in a completely isolated environment. Work done on a physical computer can likewise be done in a virtual machine. When a virtual machine is created on a computer, part of the physical machine's hard disk and memory capacity is used as the hard disk and memory capacity of the virtual machine. Each virtual machine has an independent operating system and can operate as if it were a physical machine. Common virtual machine software includes, but is not limited to, VMware (e.g., VMware ACE), VirtualBox, Virtual PC, and KVM (Kernel-based Virtual Machine), all of which can virtualize multiple computers on one physical machine.
As shown in fig. 1A, multiple virtual machines VM1, VM2, ..., VMn may run on a host (HOST); the memory they use comes from the host's memory, and the host's kernel and other applications on the host (e.g., Application 1 to Application 3 in the figure) also use that memory. During operation, the memory used by the kernel and the applications therefore competes with the memory used by the virtual machines, so the amount of memory the host can provide to virtual machines is uncertain. In particular, when memory is severely constrained, virtual machine memory may be swapped out and a virtual machine may even become unusable, affecting the performance and stability of the system.
Based on this, a memory allocation architecture with reserved memory may be adopted. Fig. 1B is a schematic diagram of a reserved-memory scenario according to an exemplary embodiment of the present disclosure, where the memory of the host includes two storage spaces, shown in fig. 1B with different fill patterns: an unreserved storage space A (diagonal fill in the drawing) for the kernel, and a reserved storage space B (vertical-line and gray fill in the drawing) for the virtual machines. That is, the unreserved storage space A serves the kernel, and applications running on the operating system (e.g., Application 1 to Application 3 in the figure) can use it; the reserved storage space B is available to the virtual machines (VM1 to VMn in the figure). The two storage spaces may be managed at different granularities, i.e., the memory may be partitioned differently. For ease of illustration, the two storage spaces are drawn as contiguous in fig. 1B; it will be appreciated that in practice they may be non-contiguous.
The reserved storage space occupies most of the memory and is not available to the host kernel; a reserved-memory module can be inserted into the operating system kernel to manage it specially. To facilitate management of this memory and to avoid a large amount of metadata occupying it, and considering how memory is allocated to virtual machines, the reserved-memory module typically manages the reserved memory with a larger granularity as the management unit, for example dividing the reserved memory into memory blocks (ms) of 2 MB; in some large-memory scenarios, other granularities such as 1 GB (gigabyte) are also optional.
However, with a larger management granularity, a memory block may not be completely filled with data, which wastes storage space inside the block. In addition, in memory-compression scenarios, the compressed data is typically smaller than a memory block. How to avoid memory waste and efficiently manage memory blocks under a large management granularity is therefore a technical problem to be solved.
Based on this, the embodiments of the present disclosure provide a memory management method in which the memory comprises a plurality of memory blocks and each memory block is divided into a plurality of memory segments. The memory blocks can thus keep a larger granularity, reducing the space occupied by block metadata, while the memory segments inside each block are managed through two layers of metadata: the block metadata of each allocated memory block, plus total metadata. The block metadata includes the allocation status information of each memory segment in the allocated memory block, used to determine which segments of that block are available for allocation; the total metadata includes the quantity information of unallocated memory segments in each allocated memory block, used to determine which memory blocks in the memory are allocable. After the allocation state of a target memory segment is adjusted, the allocation status information in the block metadata of the target memory block to which the segment belongs is updated, as is the quantity information of that block in the total metadata, thereby realizing segment-level allocation and reducing the waste of leftover space in large-granularity memory blocks. The embodiment is described in detail below.
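The three steps of the method — adjust the segment's allocation state, update the block metadata, update the quantity information in the total metadata — can be sketched in C. This is only an illustrative toy (all names, the tiny block size, and the flat per-block free-count array are assumptions, not structures prescribed by the patent):

```c
#include <stdint.h>

#define NSEGS 8                 /* tiny illustrative segments-per-block count */
#define NBLOCKS 4               /* tiny illustrative number of memory blocks  */

/* Simplified block metadata: per-segment allocation flags plus a free count. */
struct blk_meta { uint8_t alloc[NSEGS]; uint32_t free; };

/* Simplified total metadata: unallocated-segment count per allocated block. */
struct tot_meta { uint32_t free_per_block[NBLOCKS]; };

/* Allocate segment 'seg' of block 'b': first adjust the segment's allocation
 * state, then update the block metadata, then update the quantity information
 * in the total metadata -- mirroring the three steps of the method. */
static void alloc_segment(struct tot_meta *tm, struct blk_meta *blocks,
                          int b, int seg) {
    struct blk_meta *bm = &blocks[b];
    bm->alloc[seg] = 1;                 /* adjust allocation state            */
    bm->free--;                         /* update block metadata              */
    tm->free_per_block[b] = bm->free;   /* update quantity info in total md   */
}
```

Releasing a segment would be symmetric: clear the flag, increment the free count, and write the new count back into the total metadata.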
The memory of this embodiment includes a plurality of memory blocks, each of which is divided into a plurality of memory segments. The size of the memory block may be flexibly configured according to the needs, for example, 2MB or 1GB, which is not limited in this embodiment. The memory blocks may be continuous or discontinuous.
As shown in fig. 2A, each memory block is divided into a plurality of memory segments, whose sizes can be flexibly configured as needed; this disclosure is not limited in this respect. For example, a 2 MB memory block divided at a granularity of 1 KB yields 2048 memory segments.
Based on the above design of dividing the memory block into a plurality of memory segments, a data structure of each memory segment needs to be designed and managed, and the data structure of this embodiment includes block metadata and total metadata.
In this embodiment, metadata is established at memory-block granularity for each allocated memory block, referred to as block metadata (header). The block metadata includes allocation status information for each memory segment in the allocated memory block. In practice, the data structure of the block metadata can be implemented flexibly as needed, and the allocation status information can likewise be represented in various ways; this embodiment is not limited in this respect. It will be appreciated that block metadata only needs to be established for memory blocks that have been allocated, i.e., that store data; it need not be established for unallocated memory blocks.
For example, the memory segments of a memory block may be numbered in a set order (e.g., from high address to low, or from low to high), and an allocation status flag indicating whether the segment has been allocated may be configured for each number. Alternatively, the allocation status of each memory segment in the memory block may be represented with a bitmap. A bitmap is a data structure comprising an ordered sequence of elements, each of which indicates, with a 0 or a 1, the state of its corresponding item. In this embodiment, the memory segments are ordered in a set order, and a 0 bit and a 1 bit represent the two states of a segment being unallocated or allocated; the allocation status of all memory segments in one memory block can thus be represented by a single bitmap with low storage overhead, which also makes it easy to quickly analyze the occupancy of each segment during subsequent processing. Of course, as is clear to those skilled in the art, the allocation status information may be implemented in various ways in practice, and this embodiment is not limited thereto.
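The bitmap approach can be made concrete with a short C sketch, assuming the 2 MB / 1 KB example above (2048 segments); the type and function names are illustrative, not taken from the patent:

```c
#include <stdint.h>
#include <string.h>

#define SEGS_PER_BLOCK 2048                 /* e.g., 2 MB block / 1 KB segments */
#define BITMAP_WORDS   (SEGS_PER_BLOCK / 64)

/* One bitmap per allocated memory block: bit i == 1 means segment i is allocated. */
typedef struct {
    uint64_t bits[BITMAP_WORDS];
} seg_bitmap;

static void bitmap_init(seg_bitmap *bm)         { memset(bm->bits, 0, sizeof bm->bits); }
static void bitmap_set(seg_bitmap *bm, int i)   { bm->bits[i / 64] |=  (1ULL << (i % 64)); }
static void bitmap_clear(seg_bitmap *bm, int i) { bm->bits[i / 64] &= ~(1ULL << (i % 64)); }
static int  bitmap_test(const seg_bitmap *bm, int i) {
    return (int)((bm->bits[i / 64] >> (i % 64)) & 1);
}
```

At one bit per segment, the whole bitmap for a 2048-segment block fits in 256 bytes, which is the low-overhead property the text describes.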
The block metadata may be stored in various locations. For example, since it occupies little memory, it may be stored in one or more memory segments of its own memory block, the number depending on the sizes of the block metadata and of a memory segment; which segments store it can be configured as needed, for example starting from the first memory segment of the block, or from the last. In other examples, the block metadata may instead be stored uniformly elsewhere in memory; this embodiment is not limited thereto.
In a reserved-memory scenario, the memory may include a first storage space (unreserved memory) used by the operating system of the computer device and a second storage space (reserved memory) used by virtual machines, the second storage space comprising the plurality of memory blocks. The two storage spaces can adopt different management granularities: the first is managed by a first memory management module of the operating system, and the method of this embodiment can be applied to a second memory management module in the operating system that manages the second storage space. If the second memory management module were to use the first storage space, it would have to request allocations from the first memory management module; with frequently changing block metadata, this would mean frequent interaction between the two modules. Storing the block metadata in a memory segment of its own memory block, directly managed by the second memory management module, therefore improves processing efficiency. Moreover, the large-granularity memory blocks of the reserved memory are often not fully used, and the space occupied by the block metadata is very limited, so storing it there does not affect use of the blocks. Nor does a correspondence between block metadata and memory blocks need to be specially maintained: once the address of a memory block is determined, the block metadata of that block can be located directly.
As an example, the size of a memory segment may be determined based on the size of the block metadata, with the segment size greater than or equal to the block metadata size, so that the block metadata fits in a single memory segment (for example, the first one), which facilitates management and improves management efficiency.
For example, the block metadata may further include other information, such as the physical address (paddr) of the memory block ms, the number of free memory segments (free), the maximum number of consecutive free segments, i.e., the length of the longest run of consecutive unallocated memory segments in the allocated block (max_free), and the index of the starting position of that longest free run (max), to facilitate subsequent allocation or release of memory segments.
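A hypothetical C layout of such a header could look like the following (the field names follow the patent's description, but the exact layout, types, and sizes are assumptions):

```c
#include <stdint.h>

#define SEGS_PER_BLOCK 2048   /* e.g., 2 MB block / 1 KB segments */

/* Block metadata ("header"), stored e.g. in the first memory segment of its
 * block, describing the allocation state of all segments in that block. */
struct ms_header {
    uint64_t paddr;       /* physical address of the memory block (ms)          */
    uint32_t free;        /* number of free (unallocated) memory segments       */
    uint32_t max_free;    /* length of the longest run of consecutive free segs */
    uint32_t max_idx;     /* index where that longest free run starts ("max")   */
    uint64_t bitmap[SEGS_PER_BLOCK / 64]; /* per-segment allocation status      */
};
```

Even with the full bitmap embedded, this header stays well under 1 KB, consistent with the idea of fitting the block metadata into a single memory segment.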
The embodiment also establishes total metadata, wherein the total metadata includes the number information of the unallocated memory segments in each allocated memory block, so as to be used for determining the allocable memory blocks in the memory.
The quantity information of unallocated memory segments in each allocated memory block may include, for each allocated block, the number of free memory segments and/or the maximum number of consecutive free segments, i.e., the length of the longest run of consecutive free memory segments in the block. For example, a memory block ms may have 200 free memory segments made up of two runs of consecutive free segments, one of 50 segments and one of 150; its maximum free-run length is then 150. Allocated memory blocks come in two kinds: blocks with free memory segments, as in the example above, and fully occupied blocks, in which there are no free memory segments and the free count is zero. With this information, when a memory allocation request arrives, the total metadata makes it possible to quickly determine whether any allocated memory block satisfies the request. In practice, the data structure of the total metadata can be flexibly configured as needed, and this embodiment is not limited thereto.
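One plausible way to derive both quantities (the free count and the maximum free run) from a block's allocation bitmap is a single linear scan; this is a sketch under the bitmap representation described earlier, not an implementation mandated by the patent:

```c
#include <stdint.h>

/* Scan an allocation bitmap (bit i == 1 -> segment i allocated) over 'nsegs'
 * segments; report the total number of free segments and the length of the
 * longest run of consecutive free segments. */
static void count_free(const uint64_t *bits, int nsegs,
                       uint32_t *free_out, uint32_t *max_free_out) {
    uint32_t nfree = 0, max_run = 0, run = 0;
    for (int i = 0; i < nsegs; i++) {
        int allocated = (int)((bits[i / 64] >> (i % 64)) & 1);
        if (!allocated) {
            nfree++;
            if (++run > max_run) max_run = run;   /* extend the current free run */
        } else {
            run = 0;                              /* an allocated segment ends it */
        }
    }
    *free_out = nfree;
    *max_free_out = max_run;
}
```

In practice these counters would more likely be updated incrementally on each allocation or release rather than recomputed by a full scan, but the scan shows what the two numbers mean.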
For example, the total metadata may be stored in the unreserved storage space of the memory or, in a reserved-memory scenario, in the reserved storage space. Because of the total metadata's size, and because in a reserved-memory scenario the reserved memory should as far as possible be kept for virtual machine use, the total metadata may be stored in the unreserved storage space.
In some examples, the total metadata may include the address of each piece of block metadata. After determining that there is at least one candidate memory block satisfying the memory allocation request, the method further includes reading the block metadata of the at least one candidate memory block according to those addresses, so that the block metadata of a candidate block can be read quickly once the candidate is determined.
For convenience of management and fast allocation, since there are two types of allocated memory blocks, in some examples, information of completely allocated memory blocks and information of incompletely allocated memory blocks in the total metadata may be separately managed and stored.
In practical applications, there may be multiple implementations for recording the total metadata of the number information of the unallocated memory segments in each allocated memory block. In some examples, the number of unallocated memory segments in each of the allocated memory blocks in the total metadata may be stored in a linked list.
In physical storage, a linked list is a non-contiguous, non-sequential structure; the logical order of its data elements is achieved through the pointer links in the list. A linked list consists of a series of nodes (each element of the list is called a node), which can be generated dynamically at runtime. Each node comprises two parts: a data field storing the data element, and a pointer field storing an address.
Linked lists include singly linked and doubly linked lists. Fig. 2B is a schematic diagram of a singly linked list according to an embodiment of the present disclosure, in which the first node of the list holds the head pointer head, whose data field is null and which points to the next node. The next pointer of the last node points to null (end), indicating a non-circular linked list; in other examples, the next pointer of the last node may instead point back to the head pointer, forming a circular linked list.
In a doubly linked list, the pointer field of each node includes a front pointer prev (pointing to the previous node) and a back pointer next, so that, compared with a singly linked list, the previous node of the current node can be found quickly. As with singly linked lists, doubly linked lists may be non-circular or circular, depending on where the last node's next pointer points. Fig. 2C is a schematic diagram of a doubly linked list, taking the circular variant as an example: the pointer field of the first node (the head) includes a front pointer and a back pointer, its data field head may be null or may store data as needed, and the other nodes are similar, with data fields a1, a2, and a3 in sequence. For convenience of illustration, each node in the figure is drawn as front pointer, data field, back pointer in that order; in practice other layouts may be used as required, e.g., front pointer, back pointer, then data field, and this embodiment is not limited thereto. The type of doubly linked list can likewise be chosen as required.
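The circular doubly linked list described above corresponds to the classic node structure with prev/next pointers; the following C sketch (names are illustrative, and the layout is not prescribed by the patent) shows why removal needs no traversal:

```c
#include <stddef.h>

/* Circular doubly linked list node: prev/next pointer fields plus a data field. */
struct dnode {
    struct dnode *prev, *next;
    void *data;
};

/* An empty circular list is a head node linked to itself in both directions. */
static void list_init(struct dnode *head) { head->prev = head->next = head; }

/* Insert n immediately after pos. */
static void list_insert_after(struct dnode *pos, struct dnode *n) {
    n->prev = pos;
    n->next = pos->next;
    pos->next->prev = n;
    pos->next = n;
}

/* Unlink n; because each node knows its prev, no list traversal is needed. */
static void list_remove(struct dnode *n) {
    n->prev->next = n->next;
    n->next->prev = n->prev;
    n->prev = n->next = n;
}
```

This is the same shape as the widely used intrusive kernel list: constant-time insertion and removal given only a pointer to the node.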
In some examples, the total metadata includes one or more first linked lists, where different first linked lists correspond to different quantity information. In practical application, the first linked list may be a singly linked list or a doubly linked list as needed, which is not limited in this embodiment.
The first linked list includes at least one node for storing an address of block metadata of an allocated memory block, so that the block metadata of the memory block can be accessed quickly through the total metadata. The addresses of block metadata of allocated memory blocks with the same quantity information are stored in different nodes of the same first linked list.
Two doubly linked lists, List_a and List_b, are shown in fig. 2D. The linked lists in this embodiment include a head node; in practical application the head node is optional as needed, and this embodiment is not limited thereto.
List_a is a circular doubly linked list. The pointer field (head pointer) of its first node includes a front pointer and a back pointer, and the node further carries the data head_a; the back pointer points to the next node a1, and the front pointer points to the last node, which is also a1. Similarly, the front pointer of node a1 points to head_a and its back pointer also points to head_a.
List_b is likewise a circular doubly linked list. The pointer field (head pointer) of its first node includes a front pointer and a back pointer, and the node head_b also carries data; the back pointer points to the next node b1, and the front pointer points to the last node b2. The pointers of the other two nodes follow the same pattern, as shown in the figure.
In this embodiment, a node may store the address of the block metadata header of an allocated memory block. As shown in the figure, a1, b1 and b2 each store the address of the block metadata header of the corresponding allocated memory block. The block metadata is stored in a memory segment of the memory block ms, such as the first memory segment, so the header of the memory block can be accessed through the node.
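The relationship between a list node and the block metadata header it addresses can be sketched as follows. The struct layouts are purely illustrative (the patent does not publish the real field layout); what matters is that the node's data field holds the header's address, and the header lives in the block's first memory segment, so it is reachable from the block base address.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative block metadata header. The patent only requires that it
 * record per-segment allocation state plus the max_free information; the
 * concrete fields here are assumptions. */
struct block_header {
    uint32_t max_free;   /* length of the largest run of free segments */
    uint32_t max;        /* start index of that largest free run */
    uint64_t bitmap[32]; /* allocation state of 2048 segments, 1 = allocated */
};

/* A first-linked-list node: its data field stores the header address. */
struct lnode {
    struct lnode *prev, *next;
    struct block_header *hdr; /* address of the block metadata header */
};

/* The header is stored in the first memory segment of the block, so it can
 * be located directly from the block's base address. */
static struct block_header *header_of(void *block_base) {
    return (struct block_header *)block_base;
}
```

Following a node's hdr pointer thus gives O(1) access to a block's metadata from the total metadata, as the text describes.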
Node a1 in List_a links to memory block m1. Nodes b1 and b2 in List_b correspond to memory blocks m2 and m3, respectively; that is, the block metadata of m2 and the block metadata of m3 are linked in one list, indicating that these two allocated memory blocks have the same number of unallocated memory segments (e.g., the same max_free).
The allocated memory block corresponding to a1 and the allocated memory block corresponding to b1 are linked in different lists; that is, the number information (e.g., max_free) of unallocated memory segments of the allocated memory block corresponding to a1 is different from the number information (e.g., max_free) of unallocated memory segments of the allocated memory block corresponding to b1.
In practical application, a memory block may contain a large number of memory segments, so the number information of unallocated memory segments of each allocated memory block has many possible values. For example, if a memory block has 2048 memory segments, then when there are many memory blocks, the number of unallocated memory segments in an allocated memory block has 2048 possibilities; since each first linked list corresponds to one value of the number information of unallocated memory segments, there may be a large number of first linked lists. To facilitate querying for allocatable memory blocks through the total metadata during memory allocation, in this embodiment the total metadata includes a linked list array. Each element in the linked list array corresponds to a different number range and is used for linking to one or more first linked lists, where the number corresponding to each linked first linked list falls within the number range of that element. The linked list array in the total metadata may be separate metadata for managing the linked lists under each element of the array.
The number ranges may be divided according to the number of memory segments; there are multiple number ranges, and the ranges may be of the same or different sizes. For example, 2048 memory segments may be divided into 16 shares: 1 to 128 as one number range, 129 to 256 as another, and so on. It will be clear to those skilled in the art that many other divisions are possible in practical applications, and this embodiment is not limited thereto.
Based on this, if n number ranges are partitioned, there are n elements in the linked list array, and each element in the linked list array is a linked list. As an example, the linked list array partial[ ] (an array of n linked-list elements) includes 16 elements, partial[0] through partial[15]. A first linked list whose number of unallocated memory segments is in the range "1 to 128" is linked to the first element of the array, partial[0], and so on.
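The mapping from a max_free value to its element index in the linked list array can be sketched for the example split of 2048 segments into 16 equal ranges. The function name and the equal-width split are assumptions for illustration; the patent allows ranges of different sizes.

```c
/* Example split from the text: 2048 segments, 16 ranges of width 128,
 * so 1-128 -> partial[0], 129-256 -> partial[1], ..., 1921-2048 -> partial[15]. */
#define NR_SEGMENTS 2048
#define NR_RANGES   16
#define RANGE_WIDTH (NR_SEGMENTS / NR_RANGES) /* 128 */

/* Return the linked-list-array index for a given max_free,
 * or -1 when max_free is outside 1..NR_SEGMENTS (no allocatable run). */
static int range_index(int max_free) {
    if (max_free < 1 || max_free > NR_SEGMENTS)
        return -1;
    return (max_free - 1) / RANGE_WIDTH;
}
```

With this split, the max_free values 100 and 120 used in the examples below both map to partial[0].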
FIG. 2E is a schematic diagram of a linked list array according to an exemplary embodiment of the present disclosure; the linked list array shown in FIG. 2E includes 16 elements, and the number range corresponding to each element is shown. For example, assuming that there are three allocated memory blocks in the memory, according to the maximum number of free memory segments of the three allocated memory blocks, the corresponding linked lists may be:
Assuming that the maximum number of free memory segments of memory block ms1 is 100, the block metadata header of this memory block may be linked in the linked List List_a shown in fig. 2D;
Assuming that the maximum number of free memory segments max_free of memory block ms2 and memory block ms3 is 120 for both, the block metadata headers of these two memory blocks may be linked in the linked List List_b shown in fig. 2D, where b1 represents the header of memory block ms2 and b2 represents the header of memory block ms3.
Since the maximum number of free memory segments of these three memory blocks is in the range "1-128", they may be linked to the first element of the array, partial[0].
In practical application, various linking modes may be adopted. For example, each element corresponds to one total linked list and stores the head pointer of that total linked list, and the head pointer of each first linked list is stored in a node of the total linked list of the element to which that first linked list corresponds.
Each element in the linked list array is a linked list, and the information stored in each element may be the information of the first node of that linked list. For example, in the linked list array shown in FIG. 2E, the information stored by the first element partial[0] is the first-node information of the circular doubly linked List List_k. Specifically, the doubly linked List List_k under partial[0] includes a node head_k pointing to node k1; the next pointer of node k1 points to node k2, and k2 may point back to head_k, thereby forming a circular doubly linked list. That is, head_k, k1 and k2 constitute List_k, and the other two linked lists List_a and List_b may each be linked with List_k: node k1 may store the first node of List_a, and node k2 may store the first node of List_b. As shown in fig. 2F, k1 actually stores the information of the first node of List_a framed by the dashed box in the figure, and k2 actually stores the information of the first node of List_b framed by the dashed box, thereby realizing the links among List_k, List_a and List_b. For ease of understanding, the information within the dashed boxes is drawn outside k1 and k2 in the figure.
Illustratively, the above examples involve two max_free values, 100 and 120. The max_free value may also be stored in the linked list as needed, for example in the data field head_a of the first node of List_a and the data field head_b of the first node of List_b, respectively.
The order in which the two linked lists List_a and List_b are linked by List_k may be flexibly configured as needed; for example, they may be ordered in ascending order of max_free, in descending order, or in another custom order, which is not limited in this embodiment.
Since the maximum number of free memory segments max_free can take many values, in this embodiment the corresponding first linked list may be created only when that max_free value actually occurs. For example, in the range "1-128" corresponding to the first element partial[0], since only the values 100 and 120 occur, only the linked List List_a corresponding to 100 and the linked List List_b corresponding to 120 are created, so that resource occupation may be reduced; correspondingly, the linked List List_k includes nodes linking these two linked lists. It can be understood that in practical application a corresponding linked list may instead be created for every max_free value, and when a certain max_free has no corresponding allocated memory block, the linked list corresponding to that max_free stores a null value; this embodiment is not limited in this respect.
In practical applications there may be completely allocated memory blocks, i.e., memory blocks all of whose memory segments are allocated, so that the number of free memory segments is zero. In this case, as described above, a first linked list indicating that max_free is zero may be created. In other examples, the fully allocated memory blocks may be managed separately: another linked list, referred to in this embodiment as a second linked list, may be created, which is not linked with the first linked lists in the linked list array. The data field of a node in the second linked list may store the address of the block metadata of a fully allocated memory block, so that the block metadata headers of all fully allocated memory blocks are mounted in the same linked list. In a memory allocation scenario, fully allocated memory blocks have no free memory segments and cannot be used for allocation; based on this, managing the fully allocated memory blocks independently improves the processing efficiency of memory allocation.
As can be seen from the above embodiments, elements in the linked list array need not link directly to the block metadata headers; there is a layer of structure (i.e., each first linked list, list) in the middle, which may be allocated on demand according to the maximum number of free segments. For example, if within the range 1-128 corresponding to partial[0] there is only one memory block ms containing a run of 5 contiguous free segments, then one list is allocated with its max_free set to 5, linked upward to partial[0] and downward to the header; lists for the other maximum-free-segment values are not created until they occur, thereby avoiding metadata waste.
The linked list array and the second linked list may be organized in a pool total structure, which serves as the total metadata and may be used to manage all memory blocks in this embodiment. Optionally, the total metadata may further include other information, for example the number nr of memory blocks ms it manages, a protection flag lock for protecting linked-list operations, and a cache pool for caching list metadata; these may be flexibly configured as needed in practical application, and this embodiment is not limited thereto.
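A possible layout of the pool total-metadata structure just described can be sketched as follows. The field names mirror the text (nr, lock, partial array, full list); the concrete types, the pthread mutex standing in for the protection flag, and the opaque cache pointer are all assumptions, since the patent does not publish the real structure.

```c
#include <pthread.h>
#include <stdint.h>

#define NR_RANGES 16 /* 16 number ranges, as in the example above */

/* Minimal circular-list head, one per element / per list. */
struct list_head { struct list_head *prev, *next; };

/* Sketch of the pool total metadata. */
struct pool {
    uint32_t nr;                         /* number of memory blocks ms managed */
    pthread_mutex_t lock;                /* protects linked-list operations */
    struct list_head partial[NR_RANGES]; /* linked list array: one element per range */
    struct list_head full;               /* second linked list: fully allocated blocks */
    void *list_cache;                    /* cache pool for list metadata (opaque here) */
};

static void list_init(struct list_head *h) { h->prev = h->next = h; }

static void pool_init(struct pool *p) {
    int i;
    p->nr = 0;
    pthread_mutex_init(&p->lock, NULL);
    for (i = 0; i < NR_RANGES; i++)
        list_init(&p->partial[i]);   /* empty: element points to itself */
    list_init(&p->full);
    p->list_cache = NULL;
}
```

An empty element (no first linked list created yet for that range) is represented here by a list head pointing to itself, matching the on-demand creation described above.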
Based on the above-mentioned metadata design, the present specification also provides an embodiment of a memory management method. As shown in fig. 2G and 2H, which are flowcharts illustrating a memory management method according to an exemplary embodiment of the present disclosure, the method may include the steps of:
In step 202, in response to the memory adjustment request, a target memory segment whose allocation status needs to be adjusted is determined according to the total metadata and the block metadata.
In step 204, based on the memory adjustment type corresponding to the memory adjustment request, adjusting the allocation status of the target memory segment;
in step 206, after adjusting the allocation status of the target memory segment, the allocation status information of the block metadata of the target memory block to which the target memory segment belongs is updated, and the number information of unallocated memory segments in the target memory block in the total metadata is updated.
The memory management method of the present embodiment may be applied to any scenario requiring memory management, including but not limited to the reserved memory scenario described above. In some examples, the method of the embodiment may be used to manage all or part of the memory space of the internal memory, for example, in a reserved memory scenario, the memory space of the internal memory is reserved for use by a virtual machine.
When applied to the reserved memory scenario, the memory may include a first storage space for use by an operating system of the computer device and a second storage space for use by the virtual machine, where the second storage space includes the plurality of memory blocks. The first storage space and the second storage space may adopt different management units, the first storage space may be managed by a first memory management module of the operating system, and the method of the embodiment is applied to a second memory management module of the operating system for managing the second storage space, that is, the scheme of the embodiment may be used for managing the second storage space of the memory.
As shown in fig. 2I and 2J, management of memory typically involves two operations, memory allocation and memory release, which are described in turn below. Taking application of the method of this embodiment to the memory management module as an example (in practical application, memory allocation and memory release may be independently operated functions): the memory allocation request 21 is input to the memory management module, which performs step 211 of determining the target memory segment and, after adjusting the allocation status of the target memory segment, step 212 of updating, which specifically includes updating the block metadata of the target memory block and updating the total metadata. Similarly, the memory release request 22 is input to the memory management module, which performs step 221 of determining the target memory segment and, after adjusting the allocation status of the target memory segment, step 222 of updating, which likewise includes updating the block metadata of the target memory block and updating the total metadata.
In some examples, the memory adjustment request includes a memory allocation request, and the determining, according to the total metadata and the block metadata, a target memory segment of which allocation status needs to be adjusted includes:
determining whether at least one alternative memory block meeting the memory allocation request exists according to the total metadata;
If so, determining, according to the block metadata corresponding to the at least one candidate memory block, a target memory block among the at least one candidate memory block and a target memory segment in the target memory block for allocating memory.
In this embodiment, the memory allocation request may carry the size of the memory to be allocated. In practical applications, the size of the storage space may be larger than the size of one memory block, and may be smaller than the size of one memory block. In the case of a size smaller than one memory block, it can be determined whether there is a suitable free memory segment to be allocated by the above-mentioned total metadata and block metadata.
In some examples, the number of unallocated memory segments information includes a maximum number of free segments, the maximum number of free segments characterizing a maximum number of consecutive unallocated memory segments in the allocated memory blocks, and the determining whether there is at least one candidate memory block satisfying the memory allocation request based on the total metadata includes:
determining the number of memory segments to be allocated required for meeting the memory allocation request;
and determining, according to the total metadata, whether there is at least one candidate memory block whose maximum number of free segments is greater than or equal to the number of memory segments to be allocated.
For example, the size of the memory space to be allocated may be divided by the size of a memory segment and rounded up to obtain the number of memory segments to be allocated, chunk.
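The conversion from a requested byte size to the segment count chunk is plain ceiling division, sketched below. The 4 KiB segment granularity is an assumed example; the patent does not fix the segment size.

```c
#include <stddef.h>

#define SEG_SIZE 4096UL /* hypothetical segment granularity: 4 KiB */

/* chunk = ceil(bytes / SEG_SIZE): number of memory segments to allocate. */
static size_t chunks_needed(size_t bytes) {
    return (bytes + SEG_SIZE - 1) / SEG_SIZE;
}
```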
Because the total metadata includes the number information of the unallocated memory segments in each allocated memory block, it can be determined whether there is an allocable memory block in the memory, and then the allocable memory segments are queried. In some examples, the number of unallocated memory segments recorded in the total metadata may be the number of free memory segments, and the storage space required for one memory allocation request may be discontinuous memory segments.
In other examples, the storage space required for a memory allocation request may be contiguous memory segments. Based on the design of the maximum number of free segments, this embodiment can allocate contiguous target memory segments in response to each memory allocation request, thereby reducing the complexity of memory management. Since the total metadata stores the max_free of each memory block, the number range of the memory segments to be allocated can be determined, and the information stored by each element in the linked list array can then be queried; if the element corresponding to the number range of the memory segments to be allocated is not empty, i.e., a first linked list is linked under it, it can be determined that an allocatable memory block exists.
In some examples, the determining, according to the block metadata corresponding to at least one of the candidate memory blocks, a target memory block in the at least one candidate memory block and a target memory segment in the target memory block for allocating memory includes:
If there is a candidate memory block whose maximum number of free segments is equal to the number of memory segments to be allocated, determining, according to the block metadata of that candidate memory block, the candidate memory block as the target memory block and its maximum free run as the target memory segments used for allocation in the target memory block;
If the maximum number of free segments of the at least one candidate memory block is greater than the number of memory segments to be allocated, determining, according to the block metadata of the at least one candidate memory block, the difference between the number of contiguous unallocated memory segments in the candidate memory block and the number of memory segments to be allocated, and determining the target memory block and the target memory segments for allocation in the target memory block according to the difference.
In this embodiment, if the total metadata stores a max_free exactly equal to the number of memory segments to be allocated, one of the memory blocks linked in the first linked list corresponding to that max_free may be used as the target memory block. If there are multiple such memory blocks, one may be selected flexibly as needed; for example, to facilitate updating the metadata, the memory block whose block metadata is linked at the last node of that first linked list may be used as the target memory block, so that the node can be removed from the first linked list quickly, thereby realizing a quick update of the total metadata.
If no memory block has a max_free exactly equal to the number of memory segments to be allocated, a memory block corresponding to another max_free may be selected as needed. For example, if the number of memory segments to be allocated is 110 and the existing max_free values in ascending order include 120, 150, 200, etc., the allocated memory block corresponding to 120 may be selected, so that after allocation from that block the fragmentation of memory segments is reduced as much as possible; of course, allocated memory blocks corresponding to other max_free values may also be selected.
Take as an example selecting the allocated memory blocks whose max_free is 120 as candidate memory blocks, and suppose there are 2 such memory blocks: candidate memory block ms2 and candidate memory block ms3. Since the maximum number of contiguous free segments of ms2 and ms3 is greater than the number of memory segments to be allocated, there may be smaller contiguous free runs in ms2 and ms3 that match the required number more closely. To reduce fragmentation of memory segments, one of the memory blocks may optionally be selected as needed and its block metadata traversed to determine whether a more suitable contiguous run exists. Of course, in other examples several or all of the memory blocks may be selected and the block metadata of each traversed; this, however, generates overhead when the system is busy, and the configuration can be made flexibly according to needs in practical application, which is not limited in this embodiment. Taking ms2 as an example, the block metadata of ms2 is read, the allocation status information of each memory segment is traversed, and the contiguous free memory segments satisfying chunk are finally determined. For example, suppose the block metadata of ms2 reveals a contiguous run of 115 free memory segments, and the difference between 115 and chunk is smaller than the difference between 118 and chunk; then ms2 is determined to be the target memory block, and the found contiguous run of 115 free memory segments is determined to contain the target memory segments.
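The best-fit traversal just described, which scans the per-segment allocation state for the free run whose length exceeds chunk by the smallest difference, can be sketched as follows. For readability this sketch uses one byte per segment (0 = free, 1 = allocated) rather than a packed bitmap; the function name and return convention are assumptions.

```c
#include <stdint.h>

/* Find the start index of the free run of length >= chunk whose difference
 * from chunk is smallest; exact fits end the scan immediately.
 * Returns -1 if no run of at least chunk segments exists. */
static int best_fit(const uint8_t *state, int nseg, int chunk) {
    int best_idx = -1;
    int min_diff = nseg + 1; /* larger than any possible difference */
    int run = 0;             /* length of the current free run */
    for (int i = 0; i <= nseg; i++) {
        if (i < nseg && state[i] == 0) {
            run++;                       /* extend the current free run */
        } else {                         /* run ended (or scan finished) */
            if (run >= chunk) {
                int diff = run - chunk;
                if (diff == 0)
                    return i - run;      /* exact fit: stop immediately */
                if (diff < min_diff) {   /* remember the tightest fit so far */
                    min_diff = diff;
                    best_idx = i - run;
                }
            }
            run = 0;
        }
    }
    return best_idx;
}
```

In the running example, chunk = 110 applied to a block with runs of 115 and 118 would return the start of the 115-segment run, since its difference from chunk is smaller.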
Based on this, the determined target memory segments in the target memory block are the memory used for the current allocation, and the address of the target memory segments may be returned to the requester. The allocation status of each target memory segment is adjusted from the unallocated state to the allocated state. The block metadata of the target memory block is then updated, i.e., the allocation status information of each memory segment of the target memory block is updated.
The quantity information of the unallocated memory segments of the target memory block in the total metadata is also updated. For example, if after the adjustment the target memory block becomes a fully allocated memory block, the header of the target memory block is removed from the original first linked list and linked into the second linked list representing fully allocated memory blocks; if no other memory block header remains linked under the original first linked list after the removal, the original first linked list is deleted, i.e., its linked-list header metadata is deleted. If the new maximum number of free segments has a corresponding first linked list, the header is added into that first linked list; if not, a first linked list is created and linked to the corresponding element in the linked list array.
Next, an embodiment of memory allocation is described through the following steps:
1. A memory allocation request is received and the size of the memory space to be allocated is determined; the total metadata pool records the information of the allocated memory blocks. The size is converted into the number of memory segments to be allocated, chunk, according to the granularity of the memory segments.
2. Search the linked list array partial[ ] of the pool for a max_free satisfying chunk. If found, obtain the address information of the block metadata header of the target memory block from the corresponding linked list, return the address information of the found header, and jump to step 5; otherwise, execute step 3.
3. Since the free memory segments of all existing memory blocks ms of the pool cannot meet the allocation requirement, a new free memory block ms needs to be allocated. If the memory space is insufficient, the allocation fails and exits directly; otherwise, step 4 is executed.
4. A memory block ms is newly allocated, the block metadata header of the memory block is initialized, and the address information of the header is returned; other related processing, such as establishing virtual address mapping, may also be executed, which is not described in detail in this embodiment.
5. If max_free is equal to the required chunk, set the allocation start position sidx to the start position max of the largest contiguous run of segments recorded in the block metadata header, and jump directly to step 14.
6. Otherwise, i.e., max_free is greater than the required chunk (there may be multiple memory blocks with the same max_free), traverse the bitmap recorded in the header and find the position idx of the first free segment.
7. Add 1 to the count free of contiguous free segments, and judge whether the next segment is free; if so, go to step 11, otherwise go to step 8.
8. Judge whether the count free of contiguous free segments is equal to the required size chunk; if so, the allocation requirement is met, and go directly to step 13.
9. If free is greater than chunk, record the difference diff between them and compare it with the minimum difference min_diff; if diff is smaller than min_diff, record the start position min_idx of the contiguous run and update min_diff.
10. Judge whether the end flag is set; if so, go to step 13; if not, go to step 11.
11. Continue to search for the start position of the next free memory segment.
12. When the next segment is also a free memory segment, judge whether the traversal is finished (when there are multiple memory blocks with the same max_free, the bitmaps of one or several memory blocks may be traversed as needed); if finished, set the end flag and jump to step 8, otherwise jump to step 7.
13. The run corresponding to the minimum difference min_diff found at this point contains the required target memory segments; set the allocation start position sidx to min_idx.
14. Set the chunk segments starting at the allocation position sidx to the allocated state.
15. Return the virtual address handle of the memory segment at sidx within the memory block; this location may be used to save the small block of memory.
16. If the memory block is not newly allocated, remove the address information of the memory block header from the existing linked list; if no other header remains linked under the original first linked list after the removal, delete the original first linked list, i.e., delete its linked-list header metadata.
17. Judge the number of free memory segments in the header at this point; if the block is full, move it into the full linked list and jump to step 23; otherwise execute step 18.
18. Since the memory block is not full, update the position max and the size max_free of the largest contiguous free run recorded in the block metadata header.
19. Find the element partial[i] of the linked list array to which the block belongs according to max_free.
20. Traverse under the head of partial[i] to check whether a linked list node (list) with this max_free exists; if not, execute step 21; otherwise, the list is found, and execute step 22.
21. First allocate a linked list node list, set its max_free value, and link it upward into partial[i].
22. Find the list satisfying max_free and link the header downward under it.
23. The whole allocation process is completed.
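Step 18 above, recomputing the position max and size max_free of the largest contiguous free run after segments have been marked allocated, can be sketched as follows. As before, one byte per segment stands in for the bitmap, and the names are illustrative.

```c
#include <stdint.h>

/* Scan the allocation state (0 = free, 1 = allocated) and report the
 * largest contiguous free run: its start position (*max) and length
 * (*max_free). A fully allocated block yields *max_free == 0, *max == -1. */
static void recompute_max_free(const uint8_t *state, int nseg,
                               int *max, int *max_free) {
    int run = 0, best = 0, best_at = -1;
    for (int i = 0; i <= nseg; i++) {
        if (i < nseg && state[i] == 0) {
            run++;                            /* extend the current free run */
        } else {
            if (run > best) { best = run; best_at = i - run; }
            run = 0;
        }
    }
    *max = best_at;
    *max_free = best;
}
```

The resulting max_free then determines the element partial[i] under which the block's header is relinked (steps 19-22), or, when max_free is 0, that the block belongs in the full linked list (step 17).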
An embodiment of memory release is provided next. The memory adjustment request includes a memory release request carrying the size of the memory to be released and the address of the memory to be released. Determining, according to the total metadata and the block metadata, the target memory segment whose allocation status needs to be adjusted includes: determining the target memory block according to the memory block size and the address of the memory to be released, and determining the target memory segments to be released in the target memory block according to the memory segment size and the size of the memory to be released.
1. In response to a memory release request, a small block of memory is to be removed from the pool; determine the position handle to be released, of size size, according to the memory release request;
2. determine the memory block in which the handle is located according to the address of the handle, and access the block metadata header of the memory block;
3. determine the position idx of the handle within the block according to the block metadata header;
4. set the chunk memory segments starting from idx to the free state;
5. remove the header from the existing linked list; if no other header remains linked under the original linked list after the removal, delete the original linked list, i.e., delete its linked-list header metadata;
6. judge the number of free memory segments in the header at this point; if the block is entirely free, directly release the memory block ms to the upper-level reserved memory management system and jump to step 12; otherwise execute step 7;
7. since the block is not entirely free, update the position max and the size max_free of the largest contiguous free run in the header of the memory block ms;
8. find the position partial[i] of the corresponding element in the linked list array according to max_free;
9. traverse under the head of partial[i] to check whether a linked list node (list) with this max_free exists; if not, execute step 10; if it exists, execute step 11;
10. first allocate a linked list node list, set its max_free value, and link it upward into partial[i];
11. find the list satisfying max_free and link the header downward under it.
12. The whole release process is completed.
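Release steps 3 and 4 above, locating the handle's segment index within its block and clearing the allocated state of the chunk segments, can be sketched as follows. The segment granularity and function names are assumptions, and error handling (e.g., an address outside the block) is omitted.

```c
#include <stdint.h>
#include <stddef.h>

#define SEG_SIZE 4096UL /* assumed segment granularity: 4 KiB */

/* Step 3: index idx of the handle's segment within its memory block,
 * computed from the offset of the handle from the block base address. */
static size_t segment_index(uintptr_t block_base, uintptr_t handle) {
    return (handle - block_base) / SEG_SIZE;
}

/* Step 4: set the chunk segments starting at idx back to the free state
 * (one byte per segment here, 0 = free, 1 = allocated). */
static void release_segments(uint8_t *state, size_t idx, size_t chunk) {
    for (size_t i = 0; i < chunk; i++)
        state[idx + i] = 0;
}
```

After this, the header is relinked according to the recomputed max_free (steps 7-11), mirroring the allocation path.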
Corresponding to the foregoing embodiments of the memory management method, the present disclosure further provides embodiments of the memory management device and a computer apparatus to which the memory management device is applied.
The embodiments of the memory management device of the present specification may be applied to a computer device, for example a server or a terminal device. The apparatus embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking software implementation as an example, the device in a logical sense is formed by the processor of the computer device where it is located reading corresponding computer program instructions from a nonvolatile memory into memory. In terms of hardware, fig. 3 is a hardware structure diagram of the computer device where the memory management device of the present disclosure is located; besides the processor 310, the memory 330, the network interface 320, and the nonvolatile memory 340 shown in fig. 3, the computer device where the memory management device 331 is located may further include other hardware according to the actual function of the computer device, which is not described herein again.
As shown in fig. 4, which is a block diagram of a memory management device according to an exemplary embodiment of the present disclosure, the memory includes a plurality of memory blocks, and each memory block is divided into a plurality of memory segments;
the memory is used for storing total metadata and block metadata corresponding to each allocated memory block;
the block metadata comprises allocation status information of each memory segment in the allocated memory block;
the total metadata comprises the quantity information of unallocated memory segments in each allocated memory block;
The device comprises:
A determining module 41, configured to determine, in response to a memory adjustment request, a target memory segment for which an allocation status needs to be adjusted according to the total metadata and the block metadata;
an adjustment module 42, configured to adjust the allocation status of the target memory segment based on the memory adjustment type corresponding to the memory adjustment request;
And the updating module 43 is configured to update allocation status information of block metadata of a target memory block to which the target memory segment belongs and update quantity information of unallocated memory segments in the target memory block in the total metadata after adjusting allocation status of the target memory segment.
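The two metadata levels these modules operate on can be illustrated with a minimal sketch. All names (`BlockMetadata`, `TotalMetadata`, `SEGMENTS_PER_BLOCK`) and the bitmap-as-list representation are assumptions for illustration, not identifiers from the disclosure.

```python
SEGMENTS_PER_BLOCK = 8  # assumed block geometry for illustration

class BlockMetadata:
    """Per-block metadata: allocation status of every memory segment."""
    def __init__(self):
        self.allocated = [False] * SEGMENTS_PER_BLOCK  # False = unallocated

    def max_free_run(self):
        """Length of the longest run of consecutive unallocated segments."""
        best = run = 0
        for used in self.allocated:
            run = 0 if used else run + 1
            best = max(best, run)
        return best

class TotalMetadata:
    """Total metadata: per-block counts of unallocated segments."""
    def __init__(self):
        self.blocks = {}  # block_id -> (free_count, max_free)

    def update(self, block_id, block_meta):
        """Refresh the quantity information after a block's state changed."""
        free = block_meta.allocated.count(False)
        self.blocks[block_id] = (free, block_meta.max_free_run())
```

The update module's job then amounts to flipping bits in `BlockMetadata.allocated` and calling `TotalMetadata.update` for the affected block.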
In some examples, the memory adjustment request includes a memory allocation request;
the determining module is further configured to:
determining whether at least one alternative memory block meeting the memory allocation request exists according to the total metadata;
If so, determining a target memory block and a target memory segment for allocating memory in the target memory block in the at least one candidate memory block according to the block metadata corresponding to the at least one candidate memory block.
In some examples, the information on the number of unallocated memory segments includes a maximum free segment count, which characterizes the maximum number of consecutive unallocated memory segments in the allocated memory block;
the determining module is further configured to:
determining the number of memory segments to be allocated required for meeting the memory allocation request;
and determining, according to the total metadata, whether there is at least one candidate memory block whose maximum free segment count is greater than or equal to the number of memory segments to be allocated.
In some examples, the determining module is further to:
If there is a candidate memory block whose maximum free segment count equals the number of memory segments to be allocated, determining, according to the block metadata of that candidate memory block, the candidate memory block and its maximum free segments as the target memory block and the target memory segments used for allocation in the target memory block;
If the maximum free segment count of the at least one candidate memory block is greater than the number of memory segments to be allocated, determining, according to the block metadata of the at least one candidate memory block, the difference between the number of consecutive unallocated memory segments in the candidate memory block and the number of memory segments to be allocated, and determining the target memory block and the target memory segment used for allocation in the target memory block according to the difference.
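The selection rule above, preferring an exact max_free match and otherwise minimizing the difference to limit fragmentation, can be sketched as follows; `choose_block` and the dict representation of candidates are illustrative assumptions.

```python
def choose_block(candidates, need):
    """Pick a target block id: exact max_free match first, else smallest surplus.

    candidates: dict mapping block_id -> maximum free segment count.
    Returns None when no candidate can satisfy the request.
    """
    fitting = {b: mf for b, mf in candidates.items() if mf >= need}
    if not fitting:
        return None
    # an exact fit has surplus 0 and therefore always wins the min()
    return min(fitting, key=lambda b: fitting[b] - need)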
In some examples, the total metadata further includes an address of each of the block metadata, and the determining module is further configured to, after determining that there is at least one candidate memory block satisfying the memory allocation request, read the block metadata of the at least one candidate memory block according to the address of the block metadata of the at least one candidate memory block.
In some examples, the total metadata includes one or more first linked lists, different ones of the first linked lists corresponding to different ones of the quantity information;
The first linked list comprises at least one node, and each node is used for storing the address of the block metadata of one allocated memory block, so that the block metadata of a candidate memory block can be accessed after the candidate memory block is determined; the addresses of the block metadata of allocated memory blocks with the same quantity information are stored in different nodes of the same first linked list.
In some examples, the total metadata includes a linked list array, each element in the linked list array corresponding to a different number range;
each element is used for being linked to one or more first linked lists, and the quantity information corresponding to the linked first linked list is in a quantity range corresponding to the element.
In some examples, each of the elements corresponds to a total linked list and is configured to store a head pointer of the corresponding total linked list;
And the head pointer of each first linked list is stored in the node of the total linked list corresponding to the element corresponding to the first linked list.
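One plausible way to map quantity information to linked-list-array elements covering number ranges is size-class bucketing, for example by powers of two. The specific range scheme below is an assumption for illustration; the text does not fix one.

```python
def bucket_index(max_free):
    """Map a max_free count (>= 1) to a linked-list-array element index.

    With this assumed scheme, bucket i covers the range [2**i, 2**(i+1)),
    so all first linked lists whose quantity information falls in that
    range hang under the same array element.
    """
    i = 0
    while (2 << i) <= max_free:
        i += 1
    return i
```

A lookup for an allocation of n segments can then start at `bucket_index(n)` and scan upward, since every higher bucket only holds blocks with larger free runs.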
In some examples, the memory adjustment request includes a memory release request, where the memory release request carries a size of a memory to be released and an address of the memory to be released;
the determining module is further configured to:
determining a target memory block according to the size of the memory block and the address of the memory to be released;
and determining the target memory segment to be released in the target memory block according to the size of the memory segment and the size of the memory to be released.
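The two determinations above can be sketched arithmetically; `BLOCK_SIZE`, `SEGMENT_SIZE`, and `locate_release` are assumed names and sizes for illustration.

```python
BLOCK_SIZE = 4096    # assumed memory block size
SEGMENT_SIZE = 512   # assumed memory segment size

def locate_release(base, addr, size):
    """Resolve a release request to (block index, first segment, segment count)."""
    offset = addr - base
    block = offset // BLOCK_SIZE                       # from the released address
    first_seg = (offset % BLOCK_SIZE) // SEGMENT_SIZE  # segment within the block
    count = -(-size // SEGMENT_SIZE)                   # round size up to segments
    return block, first_seg, count
```

The located segments are then marked unallocated in the block metadata, after which the block's quantity information in the total metadata is refreshed.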
In some examples, the memory includes a first storage space for use by an operating system of the computer device and a second storage space for use by the virtual machine, the second storage space including the plurality of memory blocks;
The first storage space is managed by a first memory management module of the operating system, and the device is applied to a second memory management module of the operating system for managing the second storage space;
The block metadata of an allocated memory block is stored in a memory segment of that memory block, and the total metadata is stored in the first storage space by calling the first memory management module.
For the implementation of the functions and roles of each module in the memory management device, refer to the implementation of the corresponding steps in the memory management method; details are not repeated here.
Accordingly, embodiments of the present disclosure also provide a computer program product comprising a computer program which, when executed by a processor, implements the steps of the foregoing memory management method embodiments.
Accordingly, the embodiments of the present disclosure further provide a computer device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the steps of the embodiments of the memory management method are implemented when the processor executes the program.
Accordingly, the present embodiments also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the memory management method embodiments.
Since the device embodiments essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant points. The device embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules, i.e., they may be located in one place or distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present description. Those of ordinary skill in the art can understand and implement this without undue burden.
The above-described embodiments may be applied to one or more electronic devices, which are devices capable of automatically performing numerical calculation and/or information processing according to instructions set or stored in advance; hardware of the electronic devices includes, but is not limited to, microprocessors, Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), Digital Signal Processors (DSPs), embedded devices, and the like.
The electronic device may be any electronic product that can interact with a user in a human-computer manner, such as a personal computer, a tablet computer, a smart phone, a Personal Digital Assistant (PDA), a game console, an interactive Internet Protocol Television (IPTV), a smart wearable device, etc.
The electronic device may also include a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group composed of a plurality of network servers, or a cloud, based on cloud computing, composed of a large number of hosts or network servers.
The network in which the electronic device is located includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (Virtual Private Network, VPN), and the like.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The above division of method steps is made for clarity of description; when implemented, the steps may be combined into one step or split into multiple steps, so long as the same logical relationship is preserved, and all such variations fall within the protection scope of this patent. Adding insignificant modifications to the algorithm or process, or introducing insignificant designs, without changing the core design of the algorithm and process, also falls within the protection scope of this application.
A description referring to "a specific example", "some examples", or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present specification. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It is to be understood that the present description is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The foregoing description of the preferred embodiments is provided for the purpose of illustration only, and is not intended to limit the scope of the disclosure, since any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the disclosure are intended to be included within the scope of the disclosure.
Claims (13)
1. A memory management method, wherein the memory comprises a plurality of memory blocks, each memory block is divided into a plurality of memory segments, and the memory is used for storing total metadata and block metadata corresponding to each allocated memory block;
the block metadata comprises allocation status information of each memory segment in the allocated memory block, wherein the allocation status information represents whether the memory segment is allocated or not;
the total metadata comprises information on the number of unallocated memory segments in each allocated memory block, wherein the information comprises the number of unallocated memory segments and a maximum free segment count, and the maximum free segment count represents the maximum number of consecutive unallocated memory segments in the allocated memory block;
The method comprises the following steps:
responding to a memory adjustment request, and determining, according to the total metadata and the block metadata, a target memory segment whose allocation state needs to be adjusted;
Adjusting the allocation state of the target memory segment based on the memory adjustment type corresponding to the memory adjustment request;
After the allocation state of the target memory segment is adjusted, updating allocation state information of block metadata of a target memory block to which the target memory segment belongs, and updating quantity information of unallocated memory segments in the target memory block in the total metadata.
2. The method of claim 1, wherein the memory adjustment request comprises a memory allocation request;
the determining the target memory segment of the state to be adjusted according to the total metadata and the block metadata comprises the following steps:
determining whether at least one alternative memory block meeting the memory allocation request exists according to the total metadata;
If so, determining a target memory block and a target memory segment for allocating memory in the target memory block in the at least one candidate memory block according to the block metadata corresponding to the at least one candidate memory block.
3. The method of claim 2, the determining whether there is at least one candidate memory block that satisfies the memory allocation request according to the total metadata, comprising:
determining the number of memory segments to be allocated required for meeting the memory allocation request;
and determining, according to the total metadata, whether there is at least one candidate memory block whose maximum free segment count is greater than or equal to the number of memory segments to be allocated.
4. The method according to claim 3, wherein the determining, in the at least one candidate memory block, a target memory block and a target memory segment in the target memory block for allocating memory according to the block metadata corresponding to the at least one candidate memory block includes:
If there is a candidate memory block whose maximum free segment count equals the number of memory segments to be allocated, determining, according to the block metadata of that candidate memory block, the candidate memory block and its maximum free segments as the target memory block and the target memory segments used for allocation in the target memory block;
If the maximum free segment count of the at least one candidate memory block is greater than the number of memory segments to be allocated, determining, according to the block metadata of the at least one candidate memory block, the difference between the number of consecutive unallocated memory segments in the candidate memory block and the number of memory segments to be allocated, and determining the target memory block and the target memory segment used for allocation in the target memory block according to the difference.
5. The method of claim 2, wherein the total metadata further includes an address for each of the block metadata, and wherein upon determining that there is at least one candidate memory block satisfying the memory allocation request, the method further comprises reading the block metadata of the at least one candidate memory block based on the address of the block metadata of the at least one candidate memory block.
6. The method of claim 5, the total metadata comprising one or more first linked lists, different ones of the first linked lists corresponding to different ones of the quantity information;
The first linked list comprises at least one node, and each node is used for storing the address of the block metadata of one allocated memory block, so that the block metadata of a candidate memory block can be accessed after the candidate memory block is determined; the addresses of the block metadata of allocated memory blocks with the same quantity information are stored in different nodes of the same first linked list.
7. The method of claim 6, the total metadata comprising a linked list array, each element in the linked list array corresponding to a different number range;
each element is used for being linked to one or more first linked lists, and the quantity information corresponding to the linked first linked list is in a quantity range corresponding to the element.
8. The method of claim 7, wherein each of the elements corresponds to a total linked list and is configured to store a head pointer of the corresponding total linked list;
And the head pointer of each first linked list is stored in the node of the total linked list corresponding to the element corresponding to the first linked list.
9. The method of claim 1, wherein the memory adjustment request comprises a memory release request, the memory release request carrying a size of a memory to be released and an address of the memory to be released;
the determining the target memory segment of the state to be adjusted according to the total metadata and the block metadata comprises the following steps:
determining a target memory block according to the size of the memory block and the address of the memory to be released;
and determining the target memory segment to be released in the target memory block according to the size of the memory segment and the size of the memory to be released.
10. The method of any of claims 1 to 9, the memory comprising a first storage space for use by an operating system of a computer device and a second storage space for use by a virtual machine, the second storage space comprising the plurality of memory blocks;
the first storage space is managed by a first memory management module of the operating system, and the method is applied to a second memory management module of the operating system for managing the second storage space;
The block metadata of an allocated memory block is stored in a memory segment of that memory block, and the total metadata is stored in the first storage space by calling the first memory management module.
11. A memory management device, wherein the memory comprises a plurality of memory blocks, and each memory block is divided into a plurality of memory segments;
the memory is used for storing total metadata and block metadata corresponding to each allocated memory block;
the block metadata comprises allocation status information of each memory segment in the allocated memory block, wherein the allocation status information represents whether the memory segment is allocated or not;
the total metadata comprises information on the number of unallocated memory segments in each allocated memory block, wherein the information comprises the number of unallocated memory segments and a maximum free segment count, and the maximum free segment count represents the maximum number of consecutive unallocated memory segments in the allocated memory block;
The device comprises:
the determining module is used for responding to the memory adjusting request and determining a target memory segment of which the allocation state needs to be adjusted according to the total metadata and the block metadata;
the adjusting module is used for adjusting the allocation state of the target memory segment based on the memory adjustment type corresponding to the memory adjustment request;
And the updating module is used for updating the distribution state information of the block metadata of the target memory block to which the target memory segment belongs and updating the quantity information of the unallocated memory segments in the target memory block in the total metadata after the distribution state of the target memory segment is adjusted.
12. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any of claims 1 to 10 when the computer program is executed.
13. A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of any of claims 1 to 10.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211248341.1A CN115599544B (en) | 2022-10-12 | 2022-10-12 | Memory management method, device, computer equipment and storage medium |
| PCT/CN2023/123475 WO2024078429A1 (en) | 2022-10-12 | 2023-10-09 | Memory management method and apparatus, computer device, and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211248341.1A CN115599544B (en) | 2022-10-12 | 2022-10-12 | Memory management method, device, computer equipment and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN115599544A CN115599544A (en) | 2023-01-13 |
| CN115599544B true CN115599544B (en) | 2026-01-13 |
Family
ID=84847498
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211248341.1A Active CN115599544B (en) | 2022-10-12 | 2022-10-12 | Memory management method, device, computer equipment and storage medium |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN115599544B (en) |
| WO (1) | WO2024078429A1 (en) |
Families Citing this family (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116991595B (en) * | 2023-09-27 | 2024-02-23 | 太初(无锡)电子科技有限公司 | Memory allocation method, device, equipment and medium based on Bitmap |
| CN117130565B (en) * | 2023-10-25 | 2024-02-06 | 苏州元脑智能科技有限公司 | Data processing method, device, disk array card and medium |
| CN117555674B (en) * | 2023-10-26 | 2024-05-14 | 南京集成电路设计服务产业创新中心有限公司 | Efficient multithreading batch processing block memory pool management method |
| CN120540576A (en) * | 2024-02-26 | 2025-08-26 | 杭州阿里云飞天信息技术有限公司 | Memory operation method, device, equipment, storage medium and program product |
| CN118689393B (en) * | 2024-05-31 | 2025-11-28 | 福建天晴在线互动科技有限公司 | A data storage method and terminal |
| CN121070811A (en) * | 2024-06-05 | 2025-12-05 | 阿里云计算有限公司 | Memory swapping methods, storage media and electronic devices |
| CN118409979B (en) * | 2024-06-28 | 2024-10-01 | 中国兵器装备集团兵器装备研究所 | Low-fragmentation memory management method based on multi-size data block management |
| CN118626272B (en) * | 2024-07-18 | 2024-11-12 | 西交利物浦大学 | Memory allocation method, device, equipment, medium and product |
| CN119271403B (en) * | 2024-09-24 | 2025-06-17 | 无锡众星微系统技术有限公司 | A memory management method, device, computer equipment and storage medium |
| CN119668848B (en) * | 2024-11-28 | 2025-10-03 | 四川长虹电子控股集团有限公司 | Memory management method for reducing memory fragmentation |
| CN119473635B (en) * | 2025-01-15 | 2025-06-03 | 天津南大通用数据技术股份有限公司 | Database memory allocation method, device and equipment |
| CN120371720B (en) * | 2025-04-09 | 2026-01-27 | 摩尔线程智能科技(北京)股份有限公司 | Storage space management methods, devices, electronic devices, storage media and software products |
| CN120821439B (en) * | 2025-09-12 | 2026-01-27 | 山东云海国创云计算装备产业创新中心有限公司 | A dynamic memory allocation method, device, medium, and product |
| CN121166607B (en) * | 2025-11-19 | 2026-02-03 | 沈阳邦粹科技有限公司 | Bitmap-based memory management methods, memory management units, systems, and media |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108121813A (en) * | 2017-12-27 | 2018-06-05 | 东软集团股份有限公司 | Data managing method, device, system, storage medium and electronic equipment |
| CN113010453A (en) * | 2021-04-12 | 2021-06-22 | 杭州和利时自动化有限公司 | Memory management method, system, equipment and readable storage medium |
Family Cites Families (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5555399A (en) * | 1994-07-07 | 1996-09-10 | International Business Machines Corporation | Dynamic idle list size processing in a virtual memory management operating system |
| US7533228B1 (en) * | 2005-05-27 | 2009-05-12 | Sun Microsystems, Inc. | Two-pass sliding compaction |
| CN102915276B (en) * | 2012-09-25 | 2015-06-03 | 武汉邮电科学研究院 | Memory control method for embedded systems |
| CN108304259B (en) * | 2017-01-11 | 2023-04-14 | 中兴通讯股份有限公司 | Memory management method and system |
| CN110008020B (en) * | 2019-03-05 | 2024-04-09 | 平安科技(深圳)有限公司 | Memory management method, memory management device, electronic equipment and computer readable storage medium |
| CN110287127A (en) * | 2019-05-14 | 2019-09-27 | 江苏大学 | A multi-granularity multi-core scalable non-volatile memory management method and system |
| CN111143058A (en) * | 2019-12-17 | 2020-05-12 | 长沙新弘软件有限公司 | Memory management method based on backup list |
| CN114780014B (en) * | 2021-01-22 | 2025-07-18 | 伊姆西Ip控股有限责任公司 | Method, electronic device and computer program product for managing metadata storage units |
| CN113971091B (en) * | 2021-10-25 | 2024-05-14 | 重庆大学 | Method for distributing persistent memory in consideration of process difference |
| CN114385370B (en) * | 2022-01-18 | 2022-10-25 | 重庆紫光华山智安科技有限公司 | Memory allocation method, system, device and medium |
| CN114490060B (en) * | 2022-01-24 | 2025-06-13 | 网易(杭州)网络有限公司 | Memory allocation method, device, computer equipment and computer readable storage medium |
| CN114510439B (en) * | 2022-01-27 | 2024-11-08 | 浙江大学 | Memory allocator metadata alternating mapping method and system based on persistent memory |
| CN114490076A (en) * | 2022-02-07 | 2022-05-13 | 北京高途云集教育科技有限公司 | Memory allocation method and device, computer equipment and storage medium |
| CN114546661A (en) * | 2022-03-01 | 2022-05-27 | 浙江大学 | Dynamic memory allocation method and device based on memory transformation |
| CN114327917A (en) * | 2022-03-11 | 2022-04-12 | 武汉深之度科技有限公司 | Memory management method, computing device and readable storage medium |
- 2022-10-12: CN application CN202211248341.1A granted as patent CN115599544B/en (active)
- 2023-10-09: WO application PCT/CN2023/123475 published as WO2024078429A1/en (not active, ceased)
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108121813A (en) * | 2017-12-27 | 2018-06-05 | 东软集团股份有限公司 | Data managing method, device, system, storage medium and electronic equipment |
| CN113010453A (en) * | 2021-04-12 | 2021-06-22 | 杭州和利时自动化有限公司 | Memory management method, system, equipment and readable storage medium |
Non-Patent Citations (1)
| Title |
|---|
| Memory management for embedded systems based on a multi-linked-list structure; He Xulan; He Xiaolan; Computer Applications and Software; 2008-04-15 (Issue 04); page 2 of the text, section 3.1 * |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2024078429A1 (en) | 2024-04-18 |
| CN115599544A (en) | 2023-01-13 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN115599544B (en) | Memory management method, device, computer equipment and storage medium | |
| US12216929B2 (en) | Storage system, memory management method, and management node | |
| RU2658886C1 (en) | Files management method, distributed storage system and control unit | |
| JP2858795B2 (en) | Real memory allocation method | |
| CN103544269B (en) | Methods and node controllers for storing and enquiring directories | |
| EP2645259A1 (en) | Method, device and system for caching data in multi-node system | |
| US10824555B2 (en) | Method and system for flash-aware heap memory management wherein responsive to a page fault, mapping a physical page (of a logical segment) that was previously reserved in response to another page fault for another page in the first logical segment | |
| WO2024099448A1 (en) | Memory release method and apparatus, memory recovery method and apparatus, and computer device and storage medium | |
| CN114327917A (en) | Memory management method, computing device and readable storage medium | |
| CN111984425A (en) | Memory management method, device and equipment for operating system | |
| WO2017050064A1 (en) | Memory management method and device for shared memory database | |
| US12417169B2 (en) | Dynamically allocating memory pool subinstances | |
| US20170364442A1 (en) | Method for accessing data visitor directory in multi-core system and device | |
| US10152258B1 (en) | Big block allocation of persistent main memory | |
| CN114556309A (en) | Memory space allocation method and device and storage medium | |
| CN116661690A (en) | Method, device, computer equipment and storage medium for recording memory state | |
| EP3690660B1 (en) | Cache address mapping method and related device | |
| CN116225693A (en) | Metadata management method, device, computer equipment and storage medium | |
| CN119782270B (en) | A memory data processing method, device, equipment and readable storage medium | |
| CN115756838A (en) | Memory release method, memory recovery method, memory release device, memory recovery device, computer equipment and storage medium | |
| CN119645651A (en) | Memory allocation method, memory allocation device, computer equipment, storage medium and program product | |
| US10168911B1 (en) | Defragmentation of persistent main memory | |
| CN116048377A (en) | Solid state disk data processing method and related equipment | |
| CN113741787A (en) | Data storage method, device, equipment and medium | |
| CN116382579A (en) | Memory normalization method, memory normalization device, computer equipment and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||