CN110287044B - Lock-free shared memory processing method and device, electronic equipment and readable storage medium - Google Patents
- Publication number
- CN110287044B (application CN201910591481A)
- Authority
- CN
- China
- Prior art keywords
- data
- shared memory
- area
- block
- mapping
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/544—Buffers; Shared memory; Pipes
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The embodiments of the present application provide a lock-free shared memory processing method and apparatus, an electronic device, and a readable storage medium. On this basis, an atomically updated data structure is applied, and the data is divided into an index area, a hash array data area, an extraction pool, and a storage data area, each managed through atomic integers, so that no private retrieval data needs to be maintained for each process or thread. The memory footprint during highly concurrent data processing is therefore smaller, highly concurrent data read-write service can be provided for the fast-growing parts of the live broadcast business, and a high-concurrency, high-performance read-write and storage scheme is provided for an in-memory database.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a lock-free shared memory processing method and apparatus, an electronic device, and a readable storage medium.
Background
The live broadcast service involves a large volume of concurrency with frequent data updates and reads, and traditional concurrent reading and writing under lock contention hits a bottleneck. In addition, the traditional single data read-write mode no longer meets the demands of a rapidly growing live broadcast service and its huge volume of concurrent read-write requests.
Disclosure of Invention
In view of the above, embodiments of the present application provide a lock-free shared memory processing method and apparatus, an electronic device, and a readable storage medium to solve or at least mitigate the above problems.
According to an aspect of embodiments of the present application, there is provided an electronic device that may include one or more storage media and one or more processors in communication with the storage media. One or more storage media store machine-executable instructions that are executable by a processor. When the electronic device is running, the processor executes the machine executable instructions to perform the lock-free shared memory processing method described below.
According to another aspect of the embodiments of the present application, there is provided a lock-free shared memory processing method applied to an electronic device, the method including:
configuring memory configuration data of each data service, wherein the memory configuration data comprises the size of a required data block, the number of the data blocks and a shared memory file path;
calculating the shared memory space required by the data service according to the size of the data blocks and the number of the data blocks;
opening a shared memory file according to the shared memory file path, mapping the shared memory file into a shared memory region in a preset memory mapping mode according to the shared memory space, and acquiring a mapping return value after the mapping is completed;
obtaining a start pointer address and an offset recording pointer address of a corresponding mapping area in the shared memory area according to the mapping return value, and recording the start pointer address and the offset recording pointer address into a memory of an application program corresponding to the data service;
and after performing atomic mapping on auxiliary data on the shared memory area, allocating an index area, a hash array data area, an extraction pool and an atomic data structure of a storage data area to the shared memory area so as to complete lock-free shared memory processing of the data service.
According to another aspect of the embodiments of the present application, there is provided a lock-free shared memory processing apparatus, applied to an electronic device, the apparatus including:
the data configuration module is used for configuring the memory configuration data of each data service, and the memory configuration data comprises the required data block size, the data block number and a shared memory file path;
the calculation module is used for calculating the shared memory space required by the data service according to the size of the data blocks and the number of the data blocks;
the memory mapping module is used for opening a shared memory file according to the shared memory file path, mapping the shared memory file into a shared memory area in a preset memory mapping mode according to the shared memory space, and acquiring a mapping return value after the mapping is finished;
an address recording module, configured to obtain a start pointer address and an offset recording pointer address of a mapping region corresponding to the shared memory region according to the mapping return value, and record the start pointer address and the offset recording pointer address in a memory of an application program corresponding to the data service;
and the distribution module is used for distributing an index area, a hash array data area, an extraction pool and an atomic data structure of a storage data area for the shared memory area after carrying out atomic mapping of auxiliary data on the shared memory area so as to complete lock-free shared memory processing of the data service.
According to another aspect of the embodiments of the present application, there is provided a readable storage medium storing machine-executable instructions which, when executed by a processor, perform the steps of the lock-free shared memory processing method.
Based on any of the above aspects, in the embodiments of the present application the shared memory file is mapped into the shared memory area by memory mapping, and the start pointer address and offset recording pointer address of the mapping area corresponding to the shared memory area, obtained from the mapping return value after mapping completes, are recorded in the memory of the application program corresponding to the data service. On this basis, an atomically updated data structure is applied, and the data is divided into an index area, a hash array data area, an extraction pool, and a storage data area, each managed through atomic integers, so that no private retrieval data needs to be maintained for each process or thread. The memory footprint during highly concurrent data processing is therefore smaller, highly concurrent data read-write service can be provided for the fast-growing parts of the live broadcast business, and a high-concurrency, high-performance read-write and storage scheme is provided for an in-memory database.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; for those skilled in the art, other related drawings can be derived from these drawings without inventive effort.
Fig. 1 is a flowchart illustrating a lock-free shared memory processing method according to an embodiment of the present application;
FIG. 2 shows a flow diagram of various sub-steps included in step S150 shown in FIG. 1;
fig. 3 is a second flowchart illustrating a lock-free shared memory processing method according to an embodiment of the present application;
fig. 4 is a schematic block diagram illustrating a structure of an electronic device for executing the lock-free shared memory processing method according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some of the embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and steps without logical context may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
Servers are mostly multi-core processing devices, and the main way to optimize server performance is parallel programming, which allows the server to process shared data in parallel. In doing so, the server may encounter lock conflicts, i.e., multiple parallel objects (threads or processes) accessing (reading and writing) the same piece of data. While one object is operating on a certain data block, another object must block and wait in order to protect that data area of the server.
A lock-free data structure requires no waiting and can be operated on concurrently, improving the concurrency and scalability of the server by reducing blocking and waiting. A locked data structure may deadlock the server under abnormal conditions; moreover, if the lock granularity is too coarse, all threads or processes may be blocked, incurring resource consumption and context-switch overhead on subsequent locking and unlocking, which gives rise to priority inversion and lock convoy phenomena. Compared with the locked data structures above, a lock-free data structure reduces resource consumption (and, in a sense, time consumption) and can eliminate potential problems caused by race conditions, blocking, deadlock, and poor composability. However, the inventors found in research that current lock-free data structures require private retrieval data to be set for each process or thread, occupying a large amount of memory during highly concurrent data processing.
For this reason, based on the above findings, the inventors propose the following technical solutions to solve or improve the above problems. It should be noted that the shortcomings of the above prior art solutions are the result of the inventor's practical and careful study; therefore, the discovery process of the above problems and the solutions proposed in the following description should be regarded as the inventor's contribution in the course of the invention, and not as technical content already known to those skilled in the art.
Fig. 1 is a flowchart illustrating a lock-free shared memory processing method according to an embodiment of the present application, and it should be understood that, in other embodiments, the order of some steps in the lock-free shared memory processing method according to the embodiment may not be limited by the order in fig. 1 and the following specific embodiments, for example, the steps may be interchanged with each other according to actual needs, or some steps may also be omitted or deleted. The detailed steps of the lock-free shared memory processing method are described as follows.
Step S110, configuring the memory configuration data of the data service for each data service.
In this embodiment, taking a live broadcast scenario as an example, the data service may include multiple services such as a live broadcast video service, a live broadcast voice service, and a live broadcast order service, and corresponding memory configuration data may be configured for different data services. The memory configuration data may include a data block size, a data block number, and a shared memory file path required by the data service.
Step S120, calculating the shared memory space required by the data service according to the size of the data block and the number of the data blocks.
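As an illustrative sketch of steps S110–S120 (the struct and function names are hypothetical; the patent specifies only which quantities are configured and how the space is derived), the per-service configuration and the space calculation might look like:

```cpp
#include <cstdint>
#include <string>

// Hypothetical per-service memory configuration; field names are
// illustrative stand-ins for the quantities named in step S110.
struct MemConfig {
    uint64_t    block_size;   // size of each data block, in bytes
    uint64_t    block_count;  // number of data blocks
    std::string shm_path;     // shared memory file path
};

// Step S120: the required shared memory space is derived from block
// size and block count; a header area for auxiliary data may be added.
uint64_t required_space(const MemConfig& cfg, uint64_t header_bytes = 0) {
    return header_bytes + cfg.block_size * cfg.block_count;
}
```

A service configured with 1024 blocks of 4 KiB would thus request a 4 MiB region (plus any header area).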
Step S130, opening a shared memory file according to the shared memory file path, mapping the shared memory file into a shared memory region in a preset memory mapping manner according to the shared memory space, and obtaining a mapping return value after the mapping is completed.
For example, the preset memory mapping mode may be the mmap mode. mmap maps the shared memory file into the address space of the corresponding process within the shared memory region, establishing a one-to-one mapping between the file's disk address and a segment of the process's virtual address space; once this mapping is established, the process can read and write this memory segment through pointers. After the shared memory file mapping completes, the mmap return value is returned as the mapping return value.
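The patent gives no concrete code for this step; a minimal POSIX sketch of the mmap-based mapping (function name and error handling are assumptions) might look like:

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>

// Open (or create) the shared memory file at the configured path and
// map it into the process address space. The pointer returned by mmap()
// is the "mapping return value" from which the start pointer address
// into the region is derived.
void* map_shared_file(const char* path, size_t size) {
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) return nullptr;
    if (ftruncate(fd, static_cast<off_t>(size)) != 0) {  // size the file
        close(fd);
        return nullptr;
    }
    void* base = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    close(fd);  // the mapping remains valid after the descriptor closes
    return base == MAP_FAILED ? nullptr : base;
}
```

With MAP_SHARED, writes through the returned pointer are visible to every process that maps the same file, which is what makes the region usable as cross-process shared memory.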
Step S140, obtaining a start pointer address and an offset record pointer address of a mapping region corresponding to the shared memory region according to the mapping return value, and recording the start pointer address and the offset record pointer address in a memory of an application program corresponding to the data service.
Step S150, after performing atomic mapping of auxiliary data on the shared memory region, allocating an index region, a hash array data region, an extraction pool, and an atomic data structure of a storage data region to the shared memory region, so as to complete lock-free shared memory processing of the data service.
Based on this design, the shared memory file is mapped into the shared memory area by memory mapping, and according to the mapping return value after mapping completes, the start pointer address and offset recording pointer address of the mapping area corresponding to the shared memory area are recorded in the memory of the application program corresponding to the data service. On this basis, an atomically updated data structure is applied, and the data is divided into an index area, a hash array data area, an extraction pool, and a storage data area, each managed through atomic integers, so that no private retrieval data needs to be set for each process or thread. The memory footprint during highly concurrent data processing is therefore smaller, highly concurrent data read-write service can be provided for the fast-growing parts of the live broadcast business, and a high-concurrency, high-performance read-write and storage scheme is provided for an in-memory database.
In a possible implementation manner, for step S150, the manner of performing atomic mapping on the auxiliary data on the shared memory area may be:
generating a remark information area of the shared memory area;
configuring a mark value at the position of the first byte in the remark information area;
configuring the current data capacity at the position of the second byte in the remark information area;
the data block size is configured at the position of the third byte in the remark information area;
the number of data blocks is configured at the position of the fourth byte in the remark information area;
and pointing, through four atomic int-type pointers in the memory of the application program, to the data byte blocks of the first byte, the second byte, the third byte and the fourth byte respectively, so that lock-free reading and writing of the first byte, the second byte, the third byte and the fourth byte are respectively controlled through these four atomic int-type pointers.
Specifically, the generated remark information area of the shared memory area is a blank area, and the first byte, the second byte, the third byte and the fourth byte in the blank area are configured as follows:
the number of data blocks: s_blockCount = (au64*)m_mem.data() + 2;
data block size: s_blockSize = (au64*)m_mem.data() + 1;
the mark value: s_flags = (au32*)m_mem.data();
current data capacity: s_cnt = (au32*)m_mem.data() + 1;
Then, the four atomic int-type pointers through which lock-free reading and writing of the first byte, the second byte, the third byte and the fourth byte are respectively controlled are declared as follows:
au32* s_flags;
au32* s_cnt;
au64* s_blockSize;
au64* s_blockCount;
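Assuming au32/au64 denote atomic 32-bit/64-bit integers (e.g. std::atomic<uint32_t> / std::atomic<uint64_t>, an assumption consistent with the pointer arithmetic above), the remark-area layout can be sketched as follows; note that the four slots do not overlap: the two au32 slots occupy bytes 0–7, and the au64 slots at offsets +1 and +2 occupy bytes 8–23.

```cpp
#include <atomic>
#include <cstdint>

using au32 = std::atomic<uint32_t>;  // assumed meaning of "au32"
using au64 = std::atomic<uint64_t>;  // assumed meaning of "au64"

// Hypothetical view over the remark information area: four atomic
// pointers into the mapped region control lock-free reads and writes
// of the mark value, current capacity, block size and block count.
struct RemarkView {
    au32* s_flags;       // mark value (first au32 slot)
    au32* s_cnt;         // current data capacity (second au32 slot)
    au64* s_blockSize;   // data block size
    au64* s_blockCount;  // number of data blocks
};

RemarkView make_view(void* base) {
    RemarkView v;
    v.s_flags      = reinterpret_cast<au32*>(base);
    v.s_cnt        = reinterpret_cast<au32*>(base) + 1;
    v.s_blockSize  = reinterpret_cast<au64*>(base) + 1;
    v.s_blockCount = reinterpret_cast<au64*>(base) + 2;
    return v;
}
```

This sketch assumes std::atomic<uint32_t> and std::atomic<uint64_t> are lock-free and exactly 4 and 8 bytes, which holds on mainstream 64-bit platforms but is not guaranteed by the C++ standard.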
on the basis, referring to fig. 2, in a possible implementation manner, the step S150 may be implemented by the following sub-steps:
Sub-step S151: allocating an index area to the shared memory area, and allocating in the index area a plurality of VerIdx arrays for data indexing and a check information list, where the VerIdx arrays store the hash value corresponding to the number of each data block, and the check information list includes storage bytes obtained according to the number of hash buckets and an extension field obtained by adding 16 bytes to the storage bytes.
Sub-step S152: allocating a hash array data area to the shared memory area, performing atomic mapping on the hash array data area, and recording the metadata of each data block through a data link table, wherein the metadata is stored as atomic integers and includes one, or a combination of two or more, of: data block length, data block version number, data block identifier, data block hash value, data block deletion mark, and data block reference count.
Sub-step S153: allocating an extraction pool to the shared memory region, where the extraction pool points to the next writable memory region through a 64-bit atomic pointer and records the position of the next data block, and the global data block information and the identification information of the next data block of each data block are recorded in an atomic integer pointer queue corresponding to the extraction pool.
Sub-step S154: allocating a storage data area to the shared memory area, wherein the size of the storage data area is equal to the product of the hash bucket count and the hash bucket size.
Thus, through this design, by using an atomically updated data structure and dividing the data into an index area, a hash array data area, an extraction pool and a storage data area managed through a plurality of atomic integers, no private retrieval data needs to be set for each process or thread.
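The extraction pool of sub-step S153 can be illustrated with a toy sketch (names and the exhaustion-free assumption are mine, not the patent's): modeling the "position of the next data block" as a single 64-bit atomic cursor lets concurrent writers claim distinct free blocks without any lock.

```cpp
#include <atomic>
#include <cstdint>

// Greatly simplified stand-in for the extraction pool: a 64-bit atomic
// records the position of the next writable data block, and concurrent
// writers claim distinct block numbers via fetch_add. (Assumption: in
// this toy model the pool is never exhausted and blocks are not
// returned; the patent's pool also tracks per-block identification
// information in an atomic integer pointer queue.)
struct ExtractPool {
    std::atomic<uint64_t> next{0};   // position of the next data block

    uint64_t claim() {
        return next.fetch_add(1, std::memory_order_relaxed);
    }
};
```

Because fetch_add is a single atomic read-modify-write, two threads calling claim() simultaneously can never receive the same block number.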
Based on the foregoing description, the application-layer use of the data structure after lock-free shared memory processing of the data service is exemplarily described below with reference to fig. 3. After step S150, the lock-free shared memory processing method provided in this embodiment may further include the following steps:
step S160, when receiving a concurrent data processing request for a data service, obtaining an identification number of data to be processed according to the concurrent data processing request.
In detail, when a concurrent data processing request for a data service is received, each data service corresponds to an identification number FLAG associated with its data to be processed.
Step S170, calculating a hash value corresponding to the identification number, and performing a corresponding operation on the data to be processed according to the hash value.
In this embodiment, the manner of calculating the hash value corresponding to the identification number may be executed by the following code:
u32 hash = CncrHsh::HashBytes(key, klen);
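The patent calls CncrHsh::HashBytes without defining it; as a hedged stand-in, a byte hash over the identification number could be FNV-1a (this choice of algorithm is an assumption, not the patent's):

```cpp
#include <cstdint>
#include <cstddef>

// FNV-1a, a common 32-bit byte hash, shown only as an illustrative
// replacement for the undefined CncrHsh::HashBytes(key, klen).
uint32_t hash_bytes(const void* key, size_t klen) {
    const unsigned char* p = static_cast<const unsigned char*>(key);
    uint32_t h = 2166136261u;          // FNV offset basis
    for (size_t i = 0; i < klen; ++i) {
        h ^= p[i];                     // mix in one byte
        h *= 16777619u;                // FNV prime
    }
    return h;
}
```

Any well-distributed byte hash works here; the hash only needs to map identification numbers onto hash bucket slots evenly.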
In a possible example, if the concurrent data processing request is a data write request, this step first calculates the number of data blocks needed by the data to be processed according to the hash value, then extracts free data blocks of that number from the configured extraction pool, and writes the information of these free data blocks into the hash array data area. The data to be processed is then written into the data areas corresponding to the free data blocks in the storage data area to complete the data write, and finally the information of the free data blocks is written into the VerIdx array (the data query entry) in the index area so that users can query the data. Concurrent writing of the data to be processed is thus completed.
For example, if the free data blocks extracted from the extraction pool configured as described above are data block 0, data block 1 and data block 2, then the information of data blocks 0, 1 and 2 is written into the hash array data area, the data to be processed is written into the data areas corresponding to data blocks 0, 1 and 2 in the storage data area, and finally the information of data blocks 0, 1 and 2 is written into the VerIdx array in the index area.
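The key property of this write order is that the index entry — the only thing readers consult first — is published last. A toy sketch (all structures are simplified stand-ins I invented for illustration, not the patent's layout) of that ordering:

```cpp
#include <atomic>
#include <cstring>
#include <cstdint>
#include <cstddef>

// Toy model of the write path: the payload is stored first, and the
// VerIdx entry (packed block number + version) is published last with
// a release store, so a concurrent reader that loads the entry with
// acquire semantics never observes a half-written block.
struct ToyStore {
    std::atomic<uint64_t> ver_idx{0};   // packed (block_no << 32) | version
    char payload[64];                   // stands in for the storage data area

    void write(uint32_t block_no, uint32_t version,
               const char* data, size_t len) {
        std::memcpy(payload, data, len);                   // 1) fill data area
        uint64_t packed = (uint64_t(block_no) << 32) | version;
        ver_idx.store(packed, std::memory_order_release);  // 2) publish entry
    }
};
```

In the real scheme the hash array metadata is also written before the VerIdx entry; the release/acquire pairing is what makes the last step a safe publication point.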
In another possible example, if the concurrent data processing request is a data read request, this step first obtains the number and version number of the corresponding data block from the VerIdx array of the index area according to the hash value, and then obtains, from the hash array data area according to the number of the data block, a Brief information block corresponding to the data block, where the Brief information block may be an information block recording the basic digest information of the data block. Next, it is determined whether the hash value is the same as the hash value of the Brief information block, whether the length of the identification number of the data to be processed is the same as the length of the identification number of the Brief information block, and whether the version number of the data block is the same as the version number of the Brief information block. If all three are the same, it is then judged whether the identification number of the data block is the same as the identification number of the Brief information block. If so, the data to be processed is obtained from the Brief information block and read.
Furthermore, in the above judging process, if the hash value differs from the hash value of the Brief information block, or the length of the identification number of the data to be processed differs from that of the Brief information block, or the version number of the data block differs from that of the Brief information block, the number and version number of the next data block are obtained from the VerIdx array of the index area, and execution returns to the step of judging whether the hash value, the identification number length, and the version number match those of the Brief information block.
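The read-side verification above can be sketched as a single predicate (field names are illustrative assumptions; the patent names the checks but not the structures):

```cpp
#include <cstdint>

// Simplified stand-in for a Brief information block's digest fields.
struct Brief {
    uint32_t hash;     // hash value of the stored data block
    uint32_t klen;     // length of the stored identification number
    uint32_t version;  // data block version number
};

// The three cheap checks a reader performs before trusting an entry;
// only if all match is the (more expensive) full identification-number
// comparison performed.
bool matches(const Brief& b, uint32_t hash, uint32_t klen, uint32_t version) {
    return b.hash == hash && b.klen == klen && b.version == version;
}
```

If matches() fails, the reader advances to the next VerIdx entry and re-checks, which is the retry loop described above.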
In another possible example, if the concurrent data processing request is a data deletion request, this step may first mark the corresponding data block in the VerIdx array of the index area for deletion according to the hash value. If a data read request for this marked data block is detected and the deletion mark on the block is detected, a prompt that the data has been deleted is returned. If no data read request for the marked data block currently exists, the Brief information block corresponding to the marked data block in the hash array data area is marked for deletion, its version number is set to 0, and the data in the Brief information block is recycled. In this way, during actual deletion the corresponding data block in the VerIdx array of the index area is first marked for deletion, so that when a data read request needs to read the marked data block and the deletion mark is detected, the information that the block has been deleted is returned and the next data reading step is not performed.
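This two-phase deletion — tombstone first, reclaim only when no reader holds the block — can be sketched as follows (a minimal model under my own assumptions: names are invented, and reader tracking is reduced to a single reference count):

```cpp
#include <atomic>
#include <cstdint>

// Toy per-block metadata for lazy deletion: readers that see the
// tombstone report "deleted"; the block is only reclaimed (version
// set to 0, making it reusable) once no reader references it.
struct BlockMeta {
    std::atomic<uint32_t> version{1};     // 0 means reclaimable/reusable
    std::atomic<bool>     tombstone{false};  // deletion mark in the index
    std::atomic<uint32_t> refs{0};        // data block reference count
};

// Attempt the second deletion phase; returns true if the block was
// reclaimed now.
bool try_reclaim(BlockMeta& m) {
    if (!m.tombstone.load()) return false;  // not marked for deletion
    if (m.refs.load() != 0)  return false;  // a reader still holds it
    m.version.store(0);                     // recycle: version becomes 0
    return true;
}
```

Setting the version to 0 is what invalidates stale VerIdx entries: a reader's version check (see the read path above) fails against a reclaimed block, so no lock is ever needed to coordinate deletion with reads.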
Fig. 4 is a schematic diagram of an electronic device 100 for implementing the lock-free shared memory processing method according to the embodiment of the present application, where the electronic device 100 may be a server for providing an anchor service. In this embodiment, the electronic device 100 may include a storage medium 110, a processor 120, and a lock-free shared memory processing apparatus 130.
The processor 120 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling execution of the programs of the lock-free shared memory processing method provided in the above method embodiments.
The present application may divide the functional modules of the lock-free shared memory processing apparatus 130 according to the method embodiments; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module can be implemented in hardware or as a software functional module. It should be noted that the division of modules in the present application is schematic and is merely a logical function division; other division manners are possible in actual implementation. For example, in the case of dividing functional modules by function, the lock-free shared memory processing apparatus 130 shown in fig. 4 is only a schematic apparatus. In detail, the lock-free shared memory processing apparatus 130 may include a data configuration module 131, a calculation module 132, a memory mapping module 133, an address recording module 134 and an allocation module 135; the functions of these modules are described in detail below.
The data configuration module 131 is configured to configure, for each data service, memory configuration data of the data service, where the memory configuration data includes a required data block size, a data block number, and a shared memory file path. It is understood that the data configuration module 131 can be used to execute the step S110, and for the detailed implementation of the data configuration module 131, reference can be made to the contents related to the step S110.
A calculating module 132, configured to calculate, according to the size of the data block and the number of the data blocks, a shared memory space required by the data service. It is understood that the calculating module 132 can be used to execute the step S120, and for the detailed implementation of the calculating module 132, reference can be made to the above-mentioned contents related to the step S120.
The memory mapping module 133 is configured to open a shared memory file according to the shared memory file path, map the shared memory file into a shared memory area according to the shared memory space in a preset memory mapping manner, and obtain a mapping return value after mapping is completed. It is to be understood that the memory mapping module 133 can be configured to perform the step S130, and for the detailed implementation of the memory mapping module 133, reference can be made to the content related to the step S130.
An address recording module 134, configured to obtain a start pointer address and an offset recording pointer address of a corresponding mapping area in the shared memory area according to the mapping return value, and record the start pointer address and the offset recording pointer address in a memory of an application program corresponding to the data service. It is understood that the address recording module 134 can be used to execute the step S140, and for the detailed implementation of the address recording module 134, reference can be made to the above description of the step S140.
The allocating module 135 is configured to allocate an index area, a hash array data area, an extraction pool, and an atomic data structure of a storage data area to the shared memory area after performing atomic mapping on the auxiliary data of the shared memory area, so as to complete lock-free shared memory processing of the data service. It is understood that the distribution module 135 can be used to execute the above step S150, and for the detailed implementation of the distribution module 135, reference can be made to the above description of the step S150.
Since the lock-free shared memory processing apparatus 130 provided in this embodiment is another implementation form of the lock-free shared memory processing method and may be configured to execute the method provided in the above embodiment, reference may be made to the above method embodiment for the technical effects it obtains; details are not repeated here.
Further, based on the same inventive concept, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program executes the steps of the lock-free shared memory processing method.
Specifically, the storage medium can be a general-purpose storage medium, such as a removable disk, a hard disk, or the like, and when a computer program on the storage medium is executed, the lock-free shared memory processing method can be executed.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (e.g., electronic device 100 of fig. 4), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the present application has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
The above description is only for various embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and all such changes or substitutions are included in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (11)
1. A lock-free shared memory processing method, applied to an electronic device, the method comprising the following steps:
configuring memory configuration data of each data service, wherein the memory configuration data comprises the size of a required data block, the number of the data blocks and a shared memory file path;
calculating the shared memory space required by the data service according to the size of the data blocks and the number of the data blocks;
opening a shared memory file according to the shared memory file path, mapping the shared memory file into a shared memory region in a preset memory mapping mode according to the shared memory space, and acquiring a mapping return value after the mapping is completed;
obtaining a start pointer address and an offset recording pointer address of a corresponding mapping area in the shared memory area according to the mapping return value, and recording the start pointer address and the offset recording pointer address into a memory of an application program corresponding to the data service;
after performing atomic mapping of auxiliary data on the shared memory area, allocating an index area, a hash array data area, an extraction pool and an atomic data structure of a storage data area to the shared memory area to complete lock-free shared memory processing of the data service;
the step of allocating the index area, the hash array data area, the extraction pool and the atomic data structure of the storage data area to the shared memory area includes:
allocating an index area for the shared memory area, and allocating, in the index area, a plurality of VerIdx arrays and a check information list for data indexing, wherein the VerIdx arrays store the hash value corresponding to the number of each data block;
allocating a hash array data area for the shared memory area, performing atomic mapping on the hash array data area, and recording the metadata of each data block through a linked list;
allocating an extraction pool for the shared memory area, wherein global data block information and the identification information of the next data block after each data block are recorded in an atomic integer pointer queue corresponding to the extraction pool;
and allocating a storage data area for the shared memory area, wherein the size of the storage data area is equal to the product of the number of hash buckets and the size of each hash bucket.
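Purely as an illustration of the arithmetic in claim 1 (a shared memory space derived from block size and block count, carved into four contiguous regions), the following sketch computes region offsets; the per-entry sizes and names are assumptions of this sketch, not values from the patent:

```cpp
#include <cassert>
#include <cstddef>

// Illustrative layout arithmetic for the four regions described above.
struct Layout {
    size_t index_off, hash_off, pool_off, data_off, total;
};

Layout compute_layout(size_t block_size, size_t block_count) {
    const size_t idx_entry  = 8;   // assumed per-bucket VerIdx entry (number + version)
    const size_t meta_entry = 32;  // assumed per-block metadata in the hash array area
    const size_t pool_entry = 8;   // assumed per-block "next" slot in the extraction pool
    Layout l{};
    l.index_off = 0;
    l.hash_off  = l.index_off + block_count * idx_entry;
    l.pool_off  = l.hash_off  + block_count * meta_entry;
    l.data_off  = l.pool_off  + block_count * pool_entry;
    // Storage data area: number of buckets times bucket size, as in the claim.
    l.total     = l.data_off  + block_count * block_size;
    return l;
}
```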
2. The lock-free shared memory processing method of claim 1, wherein the step of performing atomic mapping of auxiliary data to the shared memory region comprises:
generating a remark information area of the shared memory area;
configuring a mark value at the position of the first byte in the remark information area;
configuring the current data capacity at the position of the second byte in the remark information area;
configuring the data block size at the position of the third byte in the remark information area;
configuring the number of data blocks at the position of the fourth byte in the remark information area;
and pointing, through four atomic int pointers in the memory of the application program, to the data byte blocks of the first byte, the second byte, the third byte and the fourth byte respectively, so as to control lock-free reading and writing of the first byte, the second byte, the third byte and the fourth byte through the four atomic int pointers.
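The four-atomic-pointer scheme of claim 2 might be sketched as follows; mapping std::atomic objects onto a raw shared buffer via placement new is an assumption of this sketch, as are the struct and slot names:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>
#include <new>

// Remark-information area: four 32-bit slots (mark value, current data
// capacity, data block size, data block count), each read and written
// through an atomic int so no lock is needed. Slot meanings follow the
// text; the layout itself is illustrative.
struct RemarkArea {
    std::atomic<int32_t> slot[4];
};

RemarkArea* init_remark(void* shm_base, int32_t mark, int32_t capacity,
                        int32_t block_size, int32_t block_count) {
    auto* r = new (shm_base) RemarkArea{};       // place atomics over shared bytes
    r->slot[0].store(mark,        std::memory_order_release);
    r->slot[1].store(capacity,    std::memory_order_release);
    r->slot[2].store(block_size,  std::memory_order_release);
    r->slot[3].store(block_count, std::memory_order_release);
    return r;
}
```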
3. The lock-free shared memory processing method according to claim 1, wherein the check information list includes storage bytes obtained according to the number of hash buckets, and an extension field obtained by adding 16 bytes to the storage bytes;
the metadata is stored as atomic integers, and the metadata comprises one or a combination of more of: data block length, data block version number, data block identifier, data block hash value, data block deletion mark, and data block reference count;
the extraction pool points to the next writable memory region via a 64-bit atomic pointer and records the location of the next data block.
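A 64-bit atomic pointer that always designates the next writable block, as in claim 3, resembles a lock-free free-list pop. The following sketch shows the compare-and-swap idiom; the names are illustrative, and because this list is pop-only with an immutable `next` table there is no ABA hazard in the sketch:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative extraction pool: next[i] records the block after block i,
// and `head` is the 64-bit atomic designating the next writable block.
// pop() claims a free block with a CAS loop, so no lock is taken.
// A return value of -1 means the pool is exhausted.
struct FetchPool {
    std::atomic<int64_t> head{0};
    std::vector<int64_t> next;     // filled once at setup, then read-only

    explicit FetchPool(int64_t n) : next(n) {
        for (int64_t i = 0; i < n; ++i) next[i] = (i + 1 < n) ? i + 1 : -1;
    }
    int64_t pop() {
        int64_t h = head.load(std::memory_order_acquire);
        while (h != -1 &&
               !head.compare_exchange_weak(h, next[h],
                                           std::memory_order_acq_rel,
                                           std::memory_order_acquire)) {
            // CAS failure reloads h; retry until we win or the pool empties.
        }
        return h;
    }
};
```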
4. The lock-free shared memory processing method according to any one of claims 1 to 3, wherein the method further comprises:
when a concurrent data processing request aiming at a data service is received, acquiring an identification number of data to be processed according to the concurrent data processing request;
and calculating a hash value corresponding to the identification number, and executing corresponding operation on the data to be processed according to the hash value.
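Claim 4 computes a hash value from the identification number of the data to be processed. The patent does not name a hash function, so the sketch below uses FNV-1a purely as one common choice:

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Hash an identification number with 64-bit FNV-1a (an example choice;
// the patent does not specify the hash function).
uint64_t hash_id(const std::string& id) {
    uint64_t h = 14695981039346656037ULL;  // FNV-1a 64-bit offset basis
    for (unsigned char c : id) {
        h ^= c;
        h *= 1099511628211ULL;             // FNV-1a 64-bit prime
    }
    return h;
}
```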
5. The lock-free shared memory processing method according to claim 4, wherein if the concurrent data processing request is a data write request, the step of performing corresponding operation on the to-be-processed data according to the hash value includes:
calculating the number of data blocks required by the data to be processed according to the hash value;
extracting, from the configured extraction pool, free data blocks corresponding to the calculated number of data blocks, and writing the information of the free data blocks into the hash array data area;
writing the data to be processed into the data area corresponding to the free data blocks in the storage data area;
and writing the information of the free data blocks into a VerIdx array in the index area.
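On a plain reading of the write path in claim 5, the number of data blocks a payload occupies is a ceiling division of the payload size by the block size; the helper below is illustrative and not text from the patent:

```cpp
#include <cassert>
#include <cstddef>

// Number of fixed-size data blocks needed to hold a payload:
// payload size divided by block size, rounded up.
size_t blocks_needed(size_t payload_bytes, size_t block_size) {
    return (payload_bytes + block_size - 1) / block_size;  // ceiling division
}
```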
6. The lock-free shared memory processing method according to claim 4, wherein if the concurrent data processing request is a data read request, the step of performing corresponding operation on the to-be-processed data according to the hash value includes:
acquiring the number and the version number of the corresponding data block from the VerIdx array of the index area according to the hash value;
acquiring a Brief information block corresponding to the data block from the hash array data area according to the number of the data block;
judging whether the hash value is the same as that of the Brief information block, whether the length of the identification number of the data to be processed is the same as that of the Brief information block, and whether the version number of the data block is the same as that of the Brief information block;
if the hash value is the same as the hash value of the Brief information block, the length of the identification number of the data to be processed is the same as the length of the identification number of the Brief information block, and the version number of the data block is the same as the version number of the Brief information block, judging whether the identification number of the data block is the same as the identification number of the Brief information block;
and if the identification number of the data block is the same as that of the Brief information block, acquiring the data to be processed from the Brief information block and reading the data.
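The read-path validation of claim 6, comparing the hash value, identifier length and version number against the Brief information block before trusting the data, can be sketched as below; the field names and struct shape are illustrative:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Illustrative "Brief information block" metadata for one data block.
struct Brief {
    std::atomic<uint64_t> hash{0};
    std::atomic<uint32_t> version{0};
    std::atomic<uint32_t> id_len{0};
};

// Optimistic validation: the read is trusted only if hash, version and
// identifier length all match what the index handed out; otherwise the
// reader moves on to the next candidate block (claim 7).
bool validate_read(const Brief& b, uint64_t want_hash,
                   uint32_t want_ver, uint32_t want_len) {
    return b.hash.load(std::memory_order_acquire)    == want_hash &&
           b.version.load(std::memory_order_acquire) == want_ver &&
           b.id_len.load(std::memory_order_acquire)  == want_len;
}
```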
7. The lock-free shared memory processing method of claim 6, further comprising:
if the hash value is different from the hash value of the Brief information block, or the length of the identification number of the data to be processed is different from the length of the identification number of the Brief information block, or the version number of the data block is different from the version number of the Brief information block, recording the number and the version number of the next data block from the VerIdx array of the index area, and returning to execute the step of judging whether the hash value is the same as the hash value of the Brief information block, whether the length of the identification number of the data to be processed is the same as the length of the identification number of the Brief information block, and whether the version number of the data block is the same as the version number of the Brief information block.
8. The lock-free shared memory processing method according to claim 4, wherein if the concurrent data processing request is a data deletion request, the step of performing corresponding operation on the to-be-processed data according to the hash value includes:
applying a pseudo-deletion mark to the corresponding data block in the VerIdx array of the index area according to the hash value;
when a data read request for the pseudo-deleted data block is currently detected, if the deletion mark on the pseudo-deleted data block is detected, returning a prompt that the data has been deleted;
when no data read request for the pseudo-deleted data block is currently detected, applying the deletion mark to the Brief information block corresponding to the pseudo-deleted data block in the hash array data area;
and marking the version number of the Brief information block as 0 and recycling the data in the Brief information block.
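The pseudo-deletion flow of claim 8 (set the mark first, reclaim only when no reader holds the block, then zero the version number) might be sketched as below; the field names and the reference-count check are assumptions of this sketch:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Illustrative per-block metadata for pseudo-deletion.
struct BlockMeta {
    std::atomic<uint32_t> version{1};
    std::atomic<uint32_t> refcount{0};   // active readers of this block
    std::atomic<bool>     deleted{false};
};

// Set the pseudo-deletion mark; reclaim (version := 0) only if no reader
// currently holds the block. Returns true if the block was reclaimed,
// false if reclamation must wait for readers to drain.
bool mark_and_try_reclaim(BlockMeta& m) {
    m.deleted.store(true, std::memory_order_release);    // pseudo-delete mark
    if (m.refcount.load(std::memory_order_acquire) == 0) {
        m.version.store(0, std::memory_order_release);   // block recycled
        return true;
    }
    return false;                                        // defer reclamation
}
```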
9. A lock-free shared memory processing device applied to an electronic device, the device comprising:
the data configuration module is used for configuring the memory configuration data of each data service, and the memory configuration data comprises the required data block size, the data block number and a shared memory file path;
the calculation module is used for calculating the shared memory space required by the data service according to the size of the data blocks and the number of the data blocks;
the memory mapping module is used for opening a shared memory file according to the shared memory file path, mapping the shared memory file into a shared memory area in a preset memory mapping mode according to the shared memory space, and acquiring a mapping return value after the mapping is finished;
an address recording module, configured to obtain a start pointer address and an offset recording pointer address of a mapping region corresponding to the shared memory region according to the mapping return value, and record the start pointer address and the offset recording pointer address in a memory of an application program corresponding to the data service;
the distribution module is used for distributing an index area, a hash array data area, an extraction pool and an atomic data structure of a storage data area for the shared memory area after carrying out atomic mapping of auxiliary data on the shared memory area so as to complete lock-free shared memory processing of the data service;
the assignment module is to assign the atomic data structure by:
allocating an index area for the shared memory area, and allocating a plurality of VerIdx arrays and a check information list for data index in the index area, wherein the VerIdx arrays store hash values corresponding to the serial numbers of each data block;
distributing a hash array data area for the shared memory area, performing atom mapping on the hash array data area, and recording metadata of each data block through a data link table;
distributing an extraction pool for the shared memory area, wherein global data block information and identification information of a next data block of each data block are recorded in an atom shaping pointer queue corresponding to the extraction pool;
and allocating a storage data area for the shared memory area, wherein the size of the storage data area is equal to the product of the number and the size of the hash buckets.
10. An electronic device, comprising a machine-readable storage medium storing machine-executable instructions and a processor, wherein the processor, when executing the machine-executable instructions, causes the electronic device to implement the lock-free shared memory processing method of any one of claims 1-8.
11. A readable storage medium having stored therein machine executable instructions which when executed perform the lock-free shared memory processing method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910591481.0A CN110287044B (en) | 2019-07-02 | 2019-07-02 | Lock-free shared memory processing method and device, electronic equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110287044A (en) | 2019-09-27 |
CN110287044B (en) | 2021-08-03 |
Family
ID=68020275
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910591481.0A Active CN110287044B (en) | 2019-07-02 | 2019-07-02 | Lock-free shared memory processing method and device, electronic equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110287044B (en) |
Families Citing this family (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110674052B (en) * | 2019-09-30 | 2022-03-22 | 广州虎牙科技有限公司 | Memory management method, server and readable storage medium |
CN110933448B (en) * | 2019-11-29 | 2022-07-12 | 广州市百果园信息技术有限公司 | Live list service system and method |
CN111092865B (en) * | 2019-12-04 | 2022-08-19 | 全球能源互联网研究院有限公司 | Security event analysis method and system |
CN113127415B (en) * | 2019-12-31 | 2024-02-27 | 浙江宇视科技有限公司 | Real-time stream file processing method, device, medium and electronic equipment |
CN113157199A (en) * | 2020-01-22 | 2021-07-23 | 阿里巴巴集团控股有限公司 | Snapshot occupation space calculation method and device, electronic equipment and storage medium |
CN113918312A (en) * | 2020-07-07 | 2022-01-11 | 大唐移动通信设备有限公司 | Memory configuration method, device, device and storage medium |
CN112306695A (en) * | 2020-11-19 | 2021-02-02 | 中国民航信息网络股份有限公司 | Data processing method and device, electronic equipment and computer storage medium |
CN112463333B (en) * | 2020-12-03 | 2024-10-22 | 北京浪潮数据技术有限公司 | Data access method, device and medium based on multithread concurrency |
CN112463306B (en) * | 2020-12-03 | 2024-07-23 | 南京机敏软件科技有限公司 | Method for sharing disk data consistency in virtual machine |
CN112328435B (en) * | 2020-12-07 | 2023-09-12 | 武汉绿色网络信息服务有限责任公司 | Methods, devices, equipment and storage media for target data backup and recovery |
CN112732194B (en) * | 2021-01-13 | 2022-08-19 | 同盾科技有限公司 | Irregular data storage method, device and storage medium |
CN112947856B (en) * | 2021-02-05 | 2024-05-03 | 彩讯科技股份有限公司 | Memory data management method and device, computer equipment and storage medium |
CN113194266A (en) * | 2021-04-28 | 2021-07-30 | 深圳迪乐普数码科技有限公司 | Image sequence frame real-time rendering method and device, computer equipment and storage medium |
CN115437798A (en) * | 2021-06-23 | 2022-12-06 | 北京车和家信息技术有限公司 | Data processing method, device, equipment and medium for shared memory |
CN113535437B (en) * | 2021-08-03 | 2023-04-07 | 节卡机器人股份有限公司 | Module data interaction method of robot, electronic equipment and storage medium |
CN113778674A (en) * | 2021-08-31 | 2021-12-10 | 上海弘积信息科技有限公司 | Lock-free implementation method of load balancing equipment configuration management under multi-core |
CN113688068B (en) * | 2021-10-25 | 2022-02-15 | 支付宝(杭州)信息技术有限公司 | Graph data loading method and device |
CN114168316B (en) * | 2021-11-05 | 2024-12-13 | 支付宝(杭州)信息技术有限公司 | Video memory allocation processing method, device, equipment and system |
CN114090295A (en) * | 2021-11-19 | 2022-02-25 | 中国电力科学研究院有限公司 | Hash-supported chain table type shared memory database generation method and system |
CN114217987A (en) * | 2021-12-07 | 2022-03-22 | 网易(杭州)网络有限公司 | Data sharing method, device, electronic device and storage medium |
CN114356589B (en) * | 2021-12-09 | 2024-04-12 | 北京华云安信息技术有限公司 | Multi-writer and multi-reader data storage and reading method, device and equipment |
CN114398187A (en) * | 2021-12-24 | 2022-04-26 | 新浪网技术(中国)有限公司 | Data storage method and device |
CN114490443A (en) * | 2022-02-14 | 2022-05-13 | 浪潮云信息技术股份公司 | An in-process cache method in golang based on shared memory |
CN115757039A (en) * | 2022-11-25 | 2023-03-07 | 惠州市德赛西威智能交通技术研究院有限公司 | A program monitoring method, device, electronic equipment and storage medium |
CN115934377A (en) * | 2022-11-28 | 2023-04-07 | 武汉光庭信息技术股份有限公司 | A shared memory communication method and system based on atomic operation |
CN118034610B (en) * | 2024-04-07 | 2024-07-02 | 深圳市纽创信安科技开发有限公司 | Key data processing method applied to memory, device and equipment |
CN118550735B (en) * | 2024-07-30 | 2024-11-26 | 天翼云科技有限公司 | A method and device for improving high performance computing |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101582092B (en) * | 2009-06-12 | 2011-04-20 | 中兴通讯股份有限公司 | Method and device for realizing the store of date in memory |
CN102156700A (en) * | 2010-02-12 | 2011-08-17 | 华为技术有限公司 | Database accessing method and device and system |
CN103514053B (en) * | 2013-09-22 | 2017-01-25 | 中国科学院信息工程研究所 | Shared-memory-based method for conducting communication among multiple processes |
CN103593485B (en) * | 2013-12-04 | 2017-06-16 | 网易传媒科技(北京)有限公司 | The method and apparatus for realizing database real-time operation |
KR101944876B1 (en) * | 2014-11-28 | 2019-02-01 | 후아웨이 테크놀러지 컴퍼니 리미티드 | File access method and apparatus and storage device |
CN105975407B (en) * | 2016-03-22 | 2020-10-09 | 华为技术有限公司 | Memory address mapping method and device |
CN109298935B (en) * | 2018-09-06 | 2023-02-03 | 华泰证券股份有限公司 | Method and application for multi-process write-once read-many lock-free shared memory |
- 2019-07-02: application CN201910591481.0A filed in CN; granted as CN110287044B (legal status: Active)
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||