CN108363624B - Method, device and server for orderly controlling storage information by lockless threads - Google Patents
- Publication number: CN108363624B
- Application number: CN201810145873.XA
- Authority
- CN
- China
- Prior art keywords
- storage node
- thread
- shared storage
- state
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/52—Program synchronisation; Mutual exclusion, e.g. by means of semaphores
- G06F9/522—Barrier synchronisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/542—Event management; Broadcasting; Multicasting; Notifications
Abstract
The invention relates to the technical field of data storage, and provides a method, a device and a server for orderly controlling storage information by lock-free threads. The method comprises the following steps: calling a CAS instruction, according to the issuing state of a shared storage node in a circular queue, while a focus thread among at least two first threads reads current data from the shared storage node; when reading of the current data is finished, modifying the issuing state into a non-issuing state, the non-issuing state indicating that the first thread may store subsequent data after the current data; allocating the shared storage node in the non-issuing state to the first thread; and updating the original cursor with a new cursor according to whether the storage position of the shared storage node in the circular queue is consistent with the original cursor in the CAS instruction, wherein the new cursor equals the sum of the original cursor and a unit vector. The method thus controls threads to access data in order, avoids the extra overhead incurred when controlling a storage queue, and improves thread concurrency efficiency.
Description
Technical Field
The invention relates to the technical field of data storage, in particular to a method and a device for controlling storage information in a circular queue by a lock-free thread.
Background
In the related art, to avoid data collisions caused by threads of an application or operating system simultaneously reading and writing data in a storage queue, threads are controlled with locks so that read and write operations proceed serially: only after the thread holding the lock releases it can a suspended thread acquire the lock and access the stored data in turn. This limits the concurrency efficiency of the storage queue, and lock-free threads are clearly more efficient.
To make lock-free threads read and write stored data in order, blocking or non-blocking algorithms have been proposed for the storage queue. However, they must ensure that concurrent threads synchronize their access to the same storage node (hereinafter, for convenience, a shared storage node) to preserve the integrity of the storage queue, and operations on the shared storage node must complete in a bounded number of steps. This is achieved either through a delay queue (for example, generating and checking an additionally attached unique code) or by restricting multithreading (for example, adding a dummy node and controlling access to it), both of which add overhead.
In some cases, such as when more threads on the server need to frequently read and write storage nodes in the storage queue, the overhead incurred also reduces concurrency efficiency.
Disclosure of Invention
In view of this, the present invention provides a method for a lock-free thread to read and write information to storage nodes in a storage queue in order while avoiding the additional overhead described above, so as to improve thread concurrency efficiency.
Specifically, the invention is realized by the following technical scheme:
in a first aspect, the present invention provides a method for controlling storage information in a circular queue by a lock-free thread, which comprises the following specific steps:
calling a CAS instruction, according to the issuing state of a shared storage node in the circular queue, while a focus thread among at least two second threads reads current data from the shared storage node; when reading of the current data is finished, modifying the issuing state into a non-issuing state, the non-issuing state indicating that the first thread may store subsequent data after the current data; allocating the shared storage node in the non-issuing state to the first thread; and updating the original cursor with a new cursor according to whether the storage position of the shared storage node in the circular queue is consistent with the original cursor in the CAS instruction, wherein the new cursor equals the sum of the original cursor and a unit vector.
Optionally, when the focus thread starts to read current data, the focus thread sends notification information, where the notification information carries a storage number for uniquely identifying a storage location;
the CAS command is invoked based on the notification information.
Optionally, the non-issuing state is switched to the issuing state after the first thread stores a data component of the subsequent data into the shared storage node.
Optionally, when the subsequent data is less than the total data capacity of the shared storage node, the non-issuing state is switched to an exclusive state, where the exclusive state indicates that the first thread may continue to store subsequent data in the shared storage node; when the subsequent data equals the total data capacity of the shared storage node, the exclusive state within the non-issuing state is switched to the issuing state.
Optionally, when the storage position is consistent with the original cursor, the original cursor is replaced by the new cursor.
Optionally, when the shared storage node is the tail node in the circular queue, the unit vector is modified from a positive vector to a negative vector, wherein the preset step length of the unit vector is 1; the original cursor is updated with a new cursor equal to the sum of the original cursor and the negative vector.
Optionally, when the shared storage node is a tail node in the circular queue, the new cursor is set as a storage position corresponding to a head node in the circular queue.
Optionally, when the storage position is inconsistent with the original cursor, notification information for indicating that reading of the current data is stopped is fed back to the focus thread.
Optionally, notification information indicating that reading of the current data should continue is fed back to the focus thread.
In a second aspect, based on the same concept, the present invention further provides an apparatus for controlling storage information in a circular queue by a lock-free thread, where the apparatus includes the following specific units:
the instruction calling unit is used for calling a CAS instruction while a focus thread among at least two first threads reads current data from the shared storage node; the state modification unit is used for modifying the issuing state into a non-issuing state when reading of the current data is finished, wherein the non-issuing state indicates that the first thread may store subsequent data after the current data; the node allocation unit is used for allocating the shared storage node in the non-issuing state to the first thread; and the cursor updating unit is used for updating the original cursor with a new cursor according to whether the storage position of the shared storage node in the circular queue is consistent with the original cursor in the CAS instruction, wherein the new cursor equals the sum of the original cursor and a unit vector.
In a third aspect, based on the same concept, the present application further provides a server, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the method steps of any one of the first aspect.
In a fourth aspect, based on the same concept, the present application also provides a computer storage medium storing a CAS instruction and at least one other instruction or set of instructions which, when executed, implement the method steps of any one of the first aspect.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
Compared with the prior art, in order to store data into the circular queue in order, when the shared storage node is in the issuing state it is contended for by at least two lock-free second threads. A CAS instruction is called while the focus thread reads the current data from the shared storage node, and the CAS instruction updates the original cursor to a new cursor in the direction indicated by the unit vector, so that the shared storage node is locked for the focus thread. Other second threads, different from the focus thread, find through the CAS instruction that the original cursor is no longer consistent, so they cannot operate on the shared storage node and stop contending for it. On this basis, by changing the shared storage node from the issuing state to the non-issuing state in combination with a passive allocation mode, the first thread and the second thread control the shared storage node serially, which reduces the additional overhead of multiple threads accessing data in order and improves concurrency efficiency.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart illustrating a process of controlling storage of information in a circular queue by a lock-free thread according to an embodiment of the present invention;
FIG. 2a is a diagram illustrating a thread controlling the state of storage node 2 according to an embodiment of the present invention;
FIG. 2b is a schematic view of the cursor movement after the thread in FIG. 2a has stored its data;
FIG. 2c is a schematic diagram of the storage node 3 of FIG. 2b transitioning from a non-publishing state to a publishing state;
fig. 3 is a schematic flow chart of controlling storage of information according to an embodiment of the present invention;
fig. 4a is a schematic diagram of a multithreading preemptive storage node 4 according to an embodiment of the present invention;
FIG. 4b is a schematic illustration of the multiple threads of FIG. 4a concurrently accessing data from the circular queue in order;
FIG. 4c is a schematic diagram illustrating a change in state of a storage node after a thread accesses data in FIG. 4 b;
FIG. 5 is a flowchart illustrating a process of controlling storage of information in a circular queue by a lock-free thread according to a second embodiment of the present invention;
FIG. 6 is a schematic flow chart illustrating control of storing information according to a second embodiment of the present invention;
FIGS. 7a to 7c are schematic diagrams illustrating a state change of a storage node in a storage queue according to a second embodiment;
FIG. 8 is a diagram illustrating an apparatus for controlling storage of information in a circular queue by a lock-free thread according to a third embodiment of the present invention;
FIG. 9 is a diagram of an apparatus for controlling storage of information in a circular queue by a lock-free thread according to a third embodiment of the present invention;
fig. 10 is a schematic structural diagram of a server according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Explanation of technical terms
The circular queue is a storage queue that links different storage nodes in the manner of a singly linked list and through which the focus thread advances control of shared storage nodes in the direction indicated by the unit vector.
The cursor takes the storage address of the shared storage node as its parameter and indicates the storage position of that node in the circular queue, so that concurrent threads can contend for the node.
The shared storage node refers to a node which can be preempted by at least two concurrent threads in the circular queue, so that the threads preempted to the node store or read data into the shared storage node.
The focus thread is the thread among the concurrent threads that wins contention for the shared storage node; after the original cursor is updated, it retains control of the shared storage node for the duration of its occupancy.
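The terms defined above can be made concrete with a minimal Python model (this sketch is ours, not from the patent; class and attribute names are illustrative, and the atomicity of a hardware CAS instruction is emulated with a lock, since standard Python exposes no raw CAS):

```python
import threading

class StorageNode:
    """One node of the circular queue; the state string plays the role of
    the patent's 'issue'/'idle'/'exclusive' markers."""
    def __init__(self):
        self.data = None
        self.state = "idle"  # non-issuing by default

class CircularQueue:
    """Fixed-size circular queue plus the shared cursor that concurrent
    threads contend for via compare-and-swap."""
    def __init__(self, size):
        self.nodes = [StorageNode() for _ in range(size)]
        self.cursor = 0  # storage position of the currently shared node
        self._cas_lock = threading.Lock()  # emulates CAS atomicity

    def cas_cursor(self, expected, new):
        # Succeeds only if the cursor still holds the expected value.
        with self._cas_lock:
            if self.cursor == expected:
                self.cursor = new
                return True
            return False
```

A thread that wins `cas_cursor` plays the role of the focus thread; a thread whose call returns `False` has lost contention for the shared storage node.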
Example one
In some cases, distributed clients collectively produce payment orders based on users' purchasing behavior and send them to a server for processing; to improve the user's consumption experience, the server must store the consumption data in the payment orders promptly and give feedback to the clients. The shared storage node is then frequently contended for by the concurrent lock-free threads storing consumption data, storage nodes frequently enter the storage queue, and multithreaded reading of consumption data on the server correspondingly raises the entry and exit frequency. In this situation, adding dummy nodes or extra identifiers to preserve the integrity of the storage queue's data structure reduces concurrency efficiency.
In order to improve the concurrency efficiency of threads under the above situation, as shown in fig. 1, the present invention provides a flow chart of a method for controlling storage information in a circular queue by a lock-free thread, wherein the method comprises the following specific steps:
step S110: calling a CAS instruction during the period that the focus thread in at least two first threads stores the current data into the shared storage node according to the non-issuing state of the shared storage node in the circular queue;
in this embodiment, the circular queue includes a plurality of storage nodes, and the presence or absence of the "issue" string distinguishes between an issue state and a non-issue state of each storage node, wherein only the storage node in the issue state allows the thread to read the storage data therein, and prohibits the thread from storing the storage data therein.
As shown in fig. 2a, the circular queue 200 includes storage nodes 1-4, where a storage node 2 stores issue and other storage nodes do not store, and when the storage nodes 1, 3, and 4 are in a non-issue state and the storage node 2 is in an issue state, a thread 1 in parallel threads can read storage data a from the storage node 2, and the thread 2 is prohibited from storing storage data into the storage node 2, so that data access by different threads in the same storage node can be avoided.
The thread that stores current data into a storage node is a first thread, and the thread that reads data from a storage node is a second thread. The first and second threads may be launched simultaneously; with respect to the randomness of their launching they are parallel threads, but with respect to their shared access to the stored data of the same node in the issuing state they are serial threads.
A plurality of parallel first threads may freely contend for the same storage node (hereinafter, the shared storage node) to store current data into it while it is in the non-issuing state, and a scheduling thread in the operating system may schedule the first threads. A compare-and-swap (CAS) instruction is called while the focus thread that has won the shared storage node stores the current data, and within the instruction the original cursor is modified by the unit vector according to whether the original cursor equals the current storage position of the shared storage node in the circular queue. The unit vector indicates the direction in which the original cursor moves from the head node toward the tail node of the circular queue.
As shown in fig. 2b, when shared storage node 3 is in the non-issuing state, two first threads contend for it simultaneously. First thread 2 wins and stores the current data a3 into shared storage node 3, while first thread 1 is prohibited from storing data into it. This is implemented by calling a CAS instruction when first thread 2 starts to store data: storage position 3 is compared with the original cursor 3 in the CAS instruction, and the original cursor is modified to the new cursor 4, where the new cursor 4 equals the original cursor 3 plus the unit vector 1. The other first thread 1 continues to contend for shared storage node 3, but since the updated cursor 4 in the CAS instruction is no longer consistent with storage position 3, the store operation of first thread 1 is not executed, so the focus thread has locked the shared storage node.
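The contention of fig. 2b can be sketched in Python as follows (our illustrative simulation, not the patent's implementation; thread names are hypothetical, and CAS atomicity is emulated with a lock):

```python
import threading

class Cursor:
    """Shared cursor; compare_and_swap stands in for the CAS instruction."""
    def __init__(self, value):
        self.value = value
        self._lock = threading.Lock()  # emulates hardware CAS atomicity

    def compare_and_swap(self, expected, new):
        with self._lock:
            if self.value == expected:
                self.value = new
                return True
            return False

cursor = Cursor(3)   # original cursor points at storage node 3
results = {}

def producer(name):
    # Each first thread targets storage position 3 and tries to advance
    # the cursor by the unit vector (+1) before storing its data.
    results[name] = cursor.compare_and_swap(3, 4)

t1 = threading.Thread(target=producer, args=("thread-1",))
t2 = threading.Thread(target=producer, args=("thread-2",))
t1.start(); t2.start(); t1.join(); t2.join()

# Exactly one thread wins node 3; the cursor now points at node 4.
assert sorted(results.values()) == [False, True]
assert cursor.value == 4
```

The losing thread observes that the cursor no longer matches storage position 3 and therefore does not execute its store, mirroring how first thread 1 is excluded in the figure.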
While the focus thread stores the current data into the shared storage node, a scheduling thread distinct from the at least two first threads forbids the other first threads from contending for the shared storage node, further ensuring that the focus thread locks the storage node.
Step S120: when storage of the current data is finished, the non-issuing state is modified into the issuing state.
So that the current data in the shared storage node is accessed in order, the first thread and the second thread control the storage node in a time-shared manner through changes of the issuing state, reducing data conflicts between accesses. When the first thread finishes storing data into the storage node in the non-issuing state, the non-issuing state is immediately replaced by the issuing state and the second thread reads the stored data from the node. The issuing state indicates that the second thread, serial with the first thread, may read the current data, and the current data can be read only after the second thread is awakened. As shown in fig. 2b, the new cursor that replaced the original cursor becomes the original cursor in turn; storage node 4, to which the cursor points, is now the shared storage node, and first thread 1 occupies it.
Step S130: the second thread is assigned a shared storage node in an issued state.
After the non-issuing state is modified into the issuing state and before the second thread is awakened, the scheduling thread allocates the shared storage node in the issuing state to the second thread; alternatively, when the shared storage node changes from the non-issuing state to the issuing state, the second thread is awakened immediately so that it reads the stored data in the node.
A plurality of second threads configured with different priorities are held in a thread pool, and each second thread is bound to a storage node by matching priority against the storage addresses in the circular queue, from high to low; alternatively, a mapping table is generated from the correspondence between priorities and storage addresses and kept in the thread pool. Either way, different second threads are guaranteed to read the storage nodes in the issuing state in order, reducing conflicts over read data.
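The priority-to-address binding described above can be sketched as follows (our illustration; the thread names and addresses are hypothetical, and we assume, as the text suggests, that higher priority binds to a higher storage address):

```python
# Hypothetical consumer ("second") threads with configured priorities.
priorities = {"reader-a": 3, "reader-b": 1, "reader-c": 2}
# Hypothetical storage addresses of nodes in the circular queue.
addresses = [0x10, 0x20, 0x30]

# Sort threads by priority (highest first) and addresses high-to-low,
# then zip them into the mapping table kept in the thread pool.
by_priority = sorted(priorities, key=priorities.get, reverse=True)
mapping_table = dict(zip(by_priority, sorted(addresses, reverse=True)))

assert mapping_table == {"reader-a": 0x30, "reader-c": 0x20, "reader-b": 0x10}
```

Each second thread then reads only from its bound node, which is how the scheme keeps concurrent reads from colliding.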
Step S140: updating the original cursor by the unit vector according to whether the storage position of the shared storage node in the circular queue is consistent with the original cursor in the CAS instruction, so that the focus thread locks the shared storage node.
When the first thread starts to store the current data into the shared storage node it has won, it sends notification information carrying the node's current storage position in the circular queue, and the CAS instruction is called according to that notification information.
The three parameters held by the CAS instruction are the original cursor, an expected value, and the new cursor. When the CAS instruction is called, the current storage address carried in the notification information is stored as the expected value and compared with the original cursor. If they are the same, the original cursor is replaced with the new cursor, which points to a target storage node different from the one the original cursor pointed to, so that other first threads can perform store operations on the target node. If they differ, the original cursor is left unmodified or rolled back, the shared storage node no longer corresponds to the node the cursor points to, and the first thread is prohibited from storing data, thereby ensuring ordered storage among the first threads.
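The decision just described can be condensed into a small Python sketch (illustrative only; the function name and notification strings are ours, and the returned strings stand in for the first and second notification information of the scheme):

```python
def try_advance(state, storage_position, unit_vector=1):
    """Compare the expected value (the notified storage position) with the
    original cursor; on a match install the new cursor, otherwise leave the
    cursor untouched and tell the caller to stop storing."""
    if state["cursor"] == storage_position:          # expected value matches
        state["cursor"] = storage_position + unit_vector  # new cursor
        return "continue-storing"                    # first notification
    return "stop-storing"                            # second notification

state = {"cursor": 2}
assert try_advance(state, 2) == "continue-storing"
assert state["cursor"] == 3
# A stale storage position no longer matches, so the cursor is unchanged.
assert try_advance(state, 2) == "stop-storing"
assert state["cursor"] == 3
```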
While the focus thread stores data into the shared storage node, other first threads may keep contending to store data into it; but the focus thread has already called the CAS instruction and locked the shared storage node in advance, so when the other first threads send notification information and call the CAS instruction, the original cursor in the instruction has already been updated and they stop storing data.
After the original cursor is modified, other first threads freely contend for the storage node corresponding to the new cursor in the circular queue while the focus thread continues storing the current data into the node corresponding to the original cursor. Different first threads can thus store data into different storage nodes in parallel, reducing data conflicts among them; since the first threads store data in parallel to different nodes of the circular queue, concurrency efficiency is improved.
As shown in fig. 2c, when first thread 2 finishes storing data a3 into storage node 3, which corresponds to the original cursor, the "issue" character string is stored into node 3 to change it from the non-issuing state to the issuing state; the original cursor was already replaced with the new cursor before the state change, so first thread 1 wins storage node 4, which corresponds to the new cursor and is in the non-issuing state.
An alternative embodiment is shown in fig. 3.
Step S310: when the shared storage node corresponding to the original cursor in the circular queue is in the non-issuing state, a plurality of first threads contend for the shared storage node, and notification information is sent while the current data is stored.
At least three first threads contend with each other; the first thread that wins the right to store data into the shared storage node serves as the focus thread, the shared storage node being in the non-issuing state.
The non-issuing state may include an idle state, indicating that no stored data exists in the shared storage node, and an exclusive state, indicating that stored data (hereinafter, previous data) already exists in it. An "idle" character string may be stored in the shared storage node when it holds no data, and an "exclusive" character string when previous data is present.
When the shared storage node is in the exclusive or idle state and the focus thread starts to store data, the notification information is sent to call the CAS instruction, so that the invocation delay is reduced and the original cursor is changed as early as possible.
As shown in fig. 4a, three first threads 1-3 compete for the shared storage node 4 corresponding to the original cursor in the circular queue 400, and the shared storage node 4 stores therein an "idle" character string indicating that it is in an idle state.
Step S320: when the storage number of the shared storage node carried in the notification information is the same as the original cursor in the CAS instruction, the original cursor is replaced with the new cursor, where the new cursor equals the sum of the original cursor and the unit vector.
The CAS instruction is called according to the notification information carrying the current storage position of the shared storage node, and the expected value in the CAS instruction is set to that position. When the current storage position is the same as the original cursor, the original cursor is replaced with the new cursor and first notification information, indicating that the storage node corresponding to the new cursor can be contended for, is fed back to the first threads. When the current storage position differs from the original cursor, the original cursor is maintained and second notification information, indicating that storing data into the shared storage node should stop, is fed back to the first thread. The first threads can thus actively perceive one another's operation rights over the shared storage node without repeatedly sending notification information and repeatedly invoking the CAS instruction.
When the focus thread receives the first notification information, it continues storing the current data into the shared storage node.
Except at the tail node of the circular queue, the new cursor is formed by adding the unit vector to the original cursor; the unit vector defaults to a positive vector with a preset step length of 1. When the original cursor equals the total length of the circular queue, the shared storage node it points to is the tail node, and the new cursor is set to the storage position of the head node, thereby guaranteeing the order and consistency of the storage nodes in the circular queue.
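The wrap-around rule reduces to a one-line computation; the sketch below is our illustration, assuming 1-based storage positions as in the figures:

```python
def next_cursor(original, queue_length, step=1):
    """Advance the cursor by the unit step; at the tail node, fall back
    to the head node's storage position."""
    if original == queue_length:  # shared node is the tail node
        return 1                  # head node's storage position
    return original + step

assert next_cursor(3, 4) == 4
assert next_cursor(4, 4) == 1  # tail wraps to head, as in fig. 4b
```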
As shown in fig. 4b, first thread 3 wins priority to store the current data a4 into shared storage node 4. While a4 is being stored, the original cursor is replaced with the new cursor through the CAS instruction; since shared storage node 4 is the tail node of circular queue 410, the total number of storage nodes, 4, is subtracted from the original cursor and the head node's subscript 1 is set as the new cursor. When first thread 3 receives notification information 1 fed back by the CAS instruction, it keeps storing the current data into shared storage node 4 and changes the idle state into the issuing state once storage finishes; otherwise, on receiving the second notification information, first thread 3 stops storing data a4. Meanwhile, second threads 1 and 2 read the previous data from storage nodes 2 and 3, respectively.
Alternatively, when the shared storage node is the tail node, the preset step is modified into a negative vector, and the new cursor is computed as the sum of the shared storage node's storage position and that vector. For example, when the subscript of the shared storage node is 4, the preset step 1 is modified to -1, and the new cursor is 4 - 1 = 3.
Step S330: when storage of the current data is finished, the exclusive state or idle state within the non-issuing state is modified into the issuing state.
Step S340: the second thread is assigned a shared storage node in an issued state.
When the exclusive state or idle state changes into the issuing state, the second thread bound to the shared storage node reads the stored data in it.
The issuing state is converted back into the exclusive state or idle state according to how the amount of stored data read by the second thread compares with the total amount of stored data in the shared storage node: when the read component is less than the total amount, the "issue" character string is replaced with the "exclusive" character string; when the two are equal, the "issue" character string is replaced with the "idle" character string.
Referring to fig. 4b and 4c, the second thread 1 reads data a2-1 from the storage node 2, a2-1 is smaller than a2, and the "issue" is modified to be "exclusive" when the reading is finished; the second thread 2 reads data a3 from the storage node 3, and modifies "issue" to "idle" when the reading is finished.
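The string replacement described above can be sketched as follows; treating states as plain strings mirrors the "publish"/"exclusive"/"idle" wording in the text, though a real implementation would more likely use integer codes.

```python
def state_after_read(read_amount, total_amount):
    """After a second thread finishes reading, replace the 'publish'
    string with 'exclusive' when less than the total amount was read,
    or with 'idle' when the two amounts are equal."""
    if read_amount < total_amount:
        return "exclusive"
    return "idle"
```

For the example in the text, reading a2-1 out of a2 leaves the node "exclusive", while reading all of a3 leaves it "idle".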
It should be noted that at least two first threads compete with each other for the priority of the shared storage node. The current first thread holding the priority can store the data and call the CAS instruction in parallel; the call to the CAS instruction first changes the original cursor, so that the current first thread locks the shared storage node while the other first threads compete for the storage node corresponding to the new cursor. Thus, the aforementioned non-publish state and the original cursor can be modified in parallel; those skilled in the art can also, based on the foregoing embodiments, make the two processes serial.
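A minimal sketch of that competition, assuming a `compare_and_swap` helper that stands in for the hardware CAS instruction (the internal lock exists only to make this Python illustration atomic; it is not part of the lock-free scheme):

```python
import threading

class SimulatedCAS:
    """Stand-in for the CAS instruction on the cursor; the guard lock
    only emulates hardware atomicity for this Python sketch."""
    def __init__(self, value):
        self._value = value
        self._guard = threading.Lock()

    def compare_and_swap(self, expected, new):
        with self._guard:
            if self._value == expected:
                self._value = new
                return True   # first notification: caller owns the node
            return False      # second notification: caller lost the race

cursor = SimulatedCAS(4)  # original cursor points at the tail node 4
wins = []

def first_thread(tid):
    # Every competing first thread expects the original cursor 4 and
    # proposes the new cursor 1 (the head node after wrap-around).
    if cursor.compare_and_swap(4, 1):
        wins.append(tid)

threads = [threading.Thread(target=first_thread, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
# Exactly one first thread locks the storage node; the others must
# re-compete for the node corresponding to the new cursor.
```

Exactly one entry ends up in `wins` regardless of scheduling, which is the property the patent relies on to serialize writers without an operating-system lock.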
Example two
In some cases, distributed clients interact with and share barrage (bullet-screen) information through communication with the server: barrage information produced by user editing behavior on one client is sent to the server, and the server feeds that barrage information back in response to video requests from other clients, so the distributed clients share it. Live videos are played on the clients while the barrage information is read from the server and inserted into them; when the interaction between the server and the distributed clients is frequent, different barrage information must be inserted into the live videos continuously.
In order to improve the concurrency efficiency of the threads under the above situation, as shown in fig. 5, the present application provides another method for controlling storage information in a circular queue by a lock-free thread, which includes the following specific steps.
Step S510: the CAS instruction is called, according to the publish state of the shared storage node in the circular queue, while a focus thread of the at least two second threads reads the current data from the shared storage node.
In this embodiment, the focus thread is the thread among the at least two second threads that preempts the shared storage node. A scheduling thread may schedule the at least two second threads according to the publish state of each storage node in the circular queue; the focus thread sends a notification message to the CPU while reading the current data from the shared storage node, and the current data may be read by invoking the CAS instruction according to the notification message and copying the data.
The CAS instruction may be an atomic data operation in the CPU. Compared with the prior-art mode of locking a thread in the operating system, the atomic operation speeds up the CAS mechanism; for example, the CAS mechanism can reach about 1,000,000 operations per second, versus about 100,000 with operating-system locks.
Step S520: when the storage position of the shared storage node in the circular queue is the same as the original cursor in the CAS instruction, the original cursor is updated by a unit increment, so that the focus thread locks the shared storage node.
The storage subscript of the shared storage node in the circular queue is compared with the original cursor carried by the called CAS instruction for consistency; when the two are the same, the original cursor is updated with the new cursor, where the new cursor equals the sum of the original cursor and a unit increment (the unit increment may be 1), and the focus thread thereby locks the shared storage node.
After the original cursor is updated, the focus thread and the other second threads receive the shared information; the focus thread continues reading the current data from the shared storage node, and the first thread preempts the storage node corresponding to the new cursor carried in the shared information.
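Steps S510-S520 can be sketched as a single try-lock helper; the `AtomicCursor` class and function names are assumptions for illustration (the internal lock only emulates hardware atomicity in Python):

```python
import threading

class AtomicCursor:
    """Illustrative stand-in for the CAS instruction on the cursor."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def try_lock_node(cursor, storage_position, unit=1):
    """Focus-thread lock attempt (Step S520): succeed only when the
    storage position of the shared storage node equals the original
    cursor, then advance the cursor by the unit increment."""
    original = cursor.load()
    if storage_position != original:
        return False          # another thread already moved the cursor
    return cursor.compare_and_swap(original, original + unit)
```

A focus thread at node 2 while the cursor is 2 acquires the lock and moves the cursor to 3; a thread probing node 5 fails and must re-read the cursor before retrying.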
Step S530: when the focus thread finishes reading the current data, the publish state is modified into a non-publish state.
An information description table of the storage nodes and their states is established. The table contains a mapping between the storage address of each storage node and its state information, where the state information includes 1 for the publish state and 2 for the non-publish state, as shown in table 1.
TABLE 1
The non-publish state may include an idle state and an exclusive state: the idle state may be characterized by 2-1 and the exclusive state by 2-2. A mapping between the aforementioned priority and the storage address may also be stored in the information description table, as shown in table 2.
TABLE 2
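The description table can be sketched as a plain mapping; the code strings follow tables 1 and 2 as described, but the sample addresses and the exact representation are assumptions for illustration.

```python
PUBLISH, NON_PUBLISH = "1", "2"
IDLE, EXCLUSIVE = "2-1", "2-2"   # sub-codes of the non-publish state

# storage address -> state information (addresses are hypothetical)
description_table = {
    0x1000: PUBLISH,
    0x1004: IDLE,
    0x1008: EXCLUSIVE,
}

def is_non_publish(address):
    """A node is in the non-publish state when its code starts with '2',
    covering both the idle (2-1) and exclusive (2-2) sub-states."""
    return description_table[address].startswith(NON_PUBLISH)
```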
The non-publish state indicates that the first thread has stored the previous data before the current data and can store the next data after it. Specifically, the exclusive state indicates that the first thread stores the next data after the current data, while the idle state indicates that no previous data exists in the shared storage node, so the first thread can store the current data into the shared storage node directly.
Step S540: the first thread is allocated a shared storage node in a non-issued state.
The first thread can monitor the state of a shared storage node in the circular queue. After the shared storage node is monitored to switch from the publish state to the non-publish state, the non-publish state is switched back to the publish state based on the amount of subsequent data the first thread stores into the shared storage node: when the subsequent data is less than the total amount of data in the shared storage node, the exclusive state within the non-publish state is switched to the publish state, the exclusive state indicating that the first thread can continue to store subsequent data in the shared storage node; when the amount of subsequent data equals the total amount of data, the idle state within the non-publish state is switched to the publish state. The subsequent data is the data stored in the shared storage node after the current data, and the data stored in the shared storage node before the current data is the previous data.
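Step S540 can be sketched as the writer-side mirror of the reader transition; the function name and the tuple return value are illustrative assumptions.

```python
def publish_after_store(stored_amount, total_amount):
    """Once the first thread finishes storing, report which non-publish
    substate switches to the publish state: the exclusive state when less
    than the node's total was stored, the idle state when the amounts
    are equal."""
    if stored_amount < total_amount:
        return ("exclusive", "publish")
    return ("idle", "publish")
```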
Another alternative embodiment is shown in fig. 6 and includes the following steps.
Step S610: the CAS instruction is invoked during a read of storage data from the shared storage node by a focal thread of the at least two second threads based on an issue status of the shared storage node in the circular queue.
Step S620: when reading of the current data is finished, the publish state is modified into a non-publish state.
Step S630: the first thread is allocated a storage node in the non-publish state.
Step S640: when the storage location corresponding to the shared storage node is the same as the original cursor in the CAS instruction, the original cursor is modified with the new cursor, the new cursor being equal to the sum of the unit increment and the original cursor.
Exemplarily, as shown in fig. 7a to 7c, after the circular queue 710 is initialized, each storage node is in the idle state and is bound with one of the first threads 1 to 4 in the thread pool 1012, the priorities of the first threads 1 to 4 decreasing in sequence (increasing in the direction of the arrow). While a plurality of first and second threads access data in the circular queue 1011, the concurrent second threads 1 and 2 preempt the storage node 1, which is converted from the idle state to the publish state after the first thread 1 stores data into it; the first thread 1 then terminates and pulls up the first thread 5 in the thread pool 1012, where the priority of the first thread 5 is lower than that of the first thread 4, as shown in fig. 7b. The thread among the second threads 1 and 2 that preempts the storage node 1 copies the data a1 from it, as shown in fig. 7c; after the copying finishes, the storage node 1 changes from the publish state to the exclusive state, and the storage node 1 in the exclusive state is bound with the first thread 5.
It should be noted that the description of the storage nodes and their states in the second embodiment, and the threads' access to data in the storage nodes, are similar to the first embodiment; those skilled in the art may expand specific implementations by combining the first and second embodiments, and details are not repeated here.
EXAMPLE III
Based on the same concept, the third embodiment of the present invention further provides a device for orderly controlling storage information by lock-free threads, where the device may be implemented by software, by hardware, or by a combination of hardware and software. Taking software implementation as an example, the device in the logical sense is formed by the CPU of the device reading the corresponding computer program instructions from the memory and running them.
In an exemplary embodiment of the present invention, an apparatus for controlling information storage in a circular queue by a lock-free thread includes a CPU, a memory, and other hardware, and a logical structure of the apparatus 800 is shown in fig. 8 from a logical level, where the logical structure includes: an instruction call unit 810, a state modification unit 820, a node assignment unit 830, and a cursor update unit 840.
Example 1
The instruction calling unit 810 is configured to call the CAS instruction, according to the non-publish state of the shared storage node in the circular queue, during the period in which a focus thread of the at least two first threads stores the current data into the shared storage node;
the state modification unit 820 is configured to modify the non-publish state into the publish state when storage of the current data is finished, where the publish state indicates that the second thread can read the current data;
the node allocating unit 830 is configured to allocate a storage node in the publish state to the second thread;
and the cursor updating unit 840 is configured to update the original cursor by a unit increment according to the consistency between the storage location of the shared storage node in the circular queue and the original cursor in the CAS instruction, so that the focus thread locks the shared storage node.
Example two
The instruction calling unit 910 is configured to call the CAS instruction, according to the publish state of the shared storage node in the circular queue, during the period in which a focus thread of the at least two second threads reads the current data from the shared storage node;
the cursor updating unit 920 is configured to update the original cursor by a unit increment when the storage location of the shared storage node in the circular queue is the same as the original cursor in the CAS instruction, so that the focus thread locks the shared storage node;
the state modification unit 930 is configured to modify, when the focus thread finishes reading the current data, the publish state into a non-publish state indicating that the first thread may store subsequent data after the current data;
and the node allocating unit 940 is configured to allocate the shared storage node in the non-publish state to the first thread.
The instruction calling unit may include a message sending unit and a calling unit, and the message sending unit is configured to send a notification message when the focus thread starts to read current data; the calling unit is used for receiving the notification message and calling the CAS instruction in response to the notification message.
The cursor updating unit may include an original cursor, an expected value, and a new cursor, wherein the expected value stores the storage address in the circular queue of the shared storage node carried in the notification message, and the new cursor equals the sum of the original cursor and the unit increment. When the storage address of the shared storage node in the circular queue is the same as the original cursor in the CAS instruction, the original cursor is replaced with the new cursor, and a third notification message indicating that the current data should continue to be read from the shared storage node is fed back to the focus thread; when the storage address differs from the original cursor, a fourth notification message indicating that reading from the shared storage node should stop is fed back to the focus thread.
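The cursor updating unit's behavior can be sketched as below; the class name and the notification strings are illustrative assumptions, not identifiers from the patent.

```python
class CursorUpdatingUnit:
    """Holds the original cursor and compares it against the expected
    value (the storage address carried in the notification message);
    the new cursor equals the original cursor plus the unit increment."""
    def __init__(self, original_cursor, unit=1):
        self.original_cursor = original_cursor
        self.unit = unit

    def handle(self, expected_value):
        if expected_value == self.original_cursor:
            self.original_cursor += self.unit  # replace with the new cursor
            return "third notification: continue reading current data"
        return "fourth notification: stop reading"
```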
The unit increment defaults to the positive value 1; when the shared storage node is the tail node in the circular queue, the increment may be converted from positive to negative, or the new cursor may be set to the head node of the circular queue.
The state modification unit 930 may include a first modification unit configured to modify the exclusive state within the non-publish state into the publish state when, after storage finishes, the data stored by the first thread is less than the total amount of data in the shared storage node; and a second modification unit configured to modify the idle state within the non-publish state into the publish state when, after storage finishes, the data stored by the first thread equals the total amount of data in the shared storage node.
The implementation of the functions and actions of each unit in the above device is described in detail in the corresponding steps of the above method and is not repeated here.
Example four
Referring to fig. 10, the fourth embodiment of the present invention further provides a server 1000. The server 1000 may be an order server or a video server and includes a memory 1100 and a processor 1300 connected to the memory 1100 through a control bus 1200; a communication interface 1400 connected to the control bus 1200 may communicate with a client or another server. A computer program is stored in the memory 1100 and may be executed by the processor 1300; when the computer program is executed, the steps and functions described in the first embodiment are implemented.
The functions described in the present invention can be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (10)
1. A method for controlling storage of information in a circular queue by a lock-free thread, the method comprising:
determining the publish state of a shared storage node in a circular queue according to concurrent reading threads, wherein the reading threads comprise at least a first reading thread and a second reading thread, the shared storage nodes comprise a first shared storage node set to the publish state and a second shared storage node located behind the first shared storage node, a cursor used for indicating preemptibility is set at the position of the first shared storage node, and a shared storage node in the publish state does not allow the thread storing current data to store data into it but allows a reading thread to read data from it;
calling a CAS instruction, during the period in which the first reading thread preempts and reads the current data from the first shared storage node, to switch the cursor to the second shared storage node so that the second reading thread can preempt the second shared storage node marked by the switched cursor, wherein the preempted first shared storage node does not allow reading threads other than the first reading thread to read its data;
when the first reading thread finishes reading the current data, modifying the publish state of the first shared storage node into a non-publish state, wherein the non-publish state indicates that the thread storing current data can store subsequent data after the current data and that reading threads cannot read the current data;
and allocating the first shared storage node in the non-publish state to the thread storing current data according to the condition that the state of the first shared storage node is the non-publish state.
2. The method of claim 1, wherein the calling of the CAS instruction during the period in which the first reading thread preempts and reads the current data from the first shared storage node comprises:
when the first reading thread starts to read the current data from the first shared storage node, sending, by the first reading thread, notification information carrying a storage number that uniquely identifies a storage position;
and calling the CAS instruction according to the notification information so as to switch the cursor according to the storage number.
3. The method of claim 1, wherein after the allocating of the first shared storage node in the non-publish state to the thread storing current data, the method comprises:
storing the subsequent data into the shared storage node by the thread storing current data;
and switching the state of the first shared storage node from the non-publish state to the publish state.
4. The method of claim 3, wherein the non-publish state comprises an idle state and an exclusive state, and wherein the switching of the state of the first shared storage node from the non-publish state to the publish state comprises:
when the subsequent data is less than the total amount of data in the shared storage node, switching the exclusive state within the non-publish state to the publish state, wherein the exclusive state indicates that the thread storing the current data continues to store the subsequent data in the first shared storage node;
and when the subsequent data equals the total amount of data in the shared storage node, switching the idle state within the non-publish state to the publish state.
5. The method of any of claims 1-3, wherein said invoking a CAS instruction to switch the cursor to the second shared storage node to cause the second read thread to preempt the second shared storage node marked by the switched cursor comprises:
and when the storage position of the first shared storage node is consistent with the original cursor, replacing the original cursor with a new cursor, wherein the new cursor equals the sum of the original cursor and a unit increment, and the new cursor is used for representing the storage position of the second shared storage node.
6. The method of any of claims 1-3, wherein said invoking a CAS instruction to switch the cursor to the second shared storage node to cause the second read thread to preempt the second shared storage node marked by the switched cursor comprises:
modifying the unit increment from a positive value to a negative value when the first shared storage node is the tail node in the circular queue;
and updating the original cursor with a new cursor equal to the sum of the original cursor and the negative value, wherein the new cursor is used for representing the storage position of the second shared storage node.
7. The method of any of claims 1-3, wherein invoking a CAS instruction to switch the cursor to the second shared storage node to cause the second read thread to preempt the second shared storage node marked by the switched cursor comprises:
and when the first shared storage node is the tail node in the circular queue, setting the new cursor to the storage position corresponding to the head node of the circular queue, wherein the new cursor is used for representing the storage position of the second shared storage node.
8. The method of any of claims 1-3, wherein said invoking a CAS instruction to switch the cursor to the second shared storage node to cause the second read thread to preempt the second shared storage node marked by the switched cursor comprises:
and when the storage position is inconsistent with the original cursor, feeding back, to the first reading thread, notification information indicating that reading of the current data should stop.
9. The method of claim 5, wherein the replacing of the original cursor with the new cursor when the storage position coincides with the original cursor comprises:
feeding back, to the focus thread, notification information indicating that reading of the current data should continue.
10. A server, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein: the processor, when executing the program, performs the steps comprising the method steps of any of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810145873.XA CN108363624B (en) | 2018-02-12 | 2018-02-12 | Method, device and server for orderly controlling storage information by lockless threads |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810145873.XA CN108363624B (en) | 2018-02-12 | 2018-02-12 | Method, device and server for orderly controlling storage information by lockless threads |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108363624A CN108363624A (en) | 2018-08-03 |
CN108363624B true CN108363624B (en) | 2022-04-19 |
Family
ID=63005769
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810145873.XA Active CN108363624B (en) | 2018-02-12 | 2018-02-12 | Method, device and server for orderly controlling storage information by lockless threads |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108363624B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112559496B (en) * | 2020-12-24 | 2024-06-18 | 百果园技术(新加坡)有限公司 | Method and device for realizing transaction atomicity of distributed database |
CN113377509A (en) * | 2021-06-08 | 2021-09-10 | 上海哔哩哔哩科技有限公司 | Data processing method and system |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8095727B2 (en) * | 2008-02-08 | 2012-01-10 | Inetco Systems Limited | Multi-reader, multi-writer lock-free ring buffer |
CN102331923A (en) * | 2011-10-13 | 2012-01-25 | 西安电子科技大学 | A Method for Implementing Function Macro Pipeline Based on Multi-core and Multi-thread Processor |
CN103713884A (en) * | 2013-12-18 | 2014-04-09 | 珠海金山网络游戏科技有限公司 | Multithreading method and system for processing data through array and multithreading processor |
CN104077113A (en) * | 2014-07-10 | 2014-10-01 | 中船重工(武汉)凌久电子有限责任公司 | Method for achieving unlocked concurrence message processing mechanism |
US9069566B1 (en) * | 2012-02-22 | 2015-06-30 | Hudku Technosoft Private Limited | Implementation of a multiple writer single reader queue in a lock free and a contention free manner |
CN105659208A (en) * | 2013-11-01 | 2016-06-08 | Arm Limited | Data processing apparatus and method for processing a plurality of threads
CN105868031A (en) * | 2016-03-24 | 2016-08-17 | 车智互联(北京)科技有限公司 | A data transmission device and method |
CN106648461A (en) * | 2016-11-15 | 2017-05-10 | 努比亚技术有限公司 | Memory management device and method |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2346766A1 (en) * | 2001-05-07 | 2002-11-07 | Ibm Canada Limited-Ibm Canada Limitee | Efficient locking for thread-safe self-modifying code |
US9448856B2 (en) * | 2005-12-30 | 2016-09-20 | Level 3 Communications, Llc | Lock-free dual queue with condition synchronization and time-outs |
- 2018-02-12 CN CN201810145873.XA patent/CN108363624B/en active Active
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8095727B2 (en) * | 2008-02-08 | 2012-01-10 | Inetco Systems Limited | Multi-reader, multi-writer lock-free ring buffer |
CN102331923A (en) * | 2011-10-13 | 2012-01-25 | 西安电子科技大学 | A Method for Implementing Function Macro Pipeline Based on Multi-core and Multi-thread Processor |
US9069566B1 (en) * | 2012-02-22 | 2015-06-30 | Hudku Technosoft Private Limited | Implementation of a multiple writer single reader queue in a lock free and a contention free manner |
CN105659208A (en) * | 2013-11-01 | 2016-06-08 | Arm Limited | Data processing apparatus and method for processing a plurality of threads |
CN103713884A (en) * | 2013-12-18 | 2014-04-09 | 珠海金山网络游戏科技有限公司 | Multithreading method and system for processing data through array and multithreading processor |
CN104077113A (en) * | 2014-07-10 | 2014-10-01 | 中船重工(武汉)凌久电子有限责任公司 | Method for achieving unlocked concurrence message processing mechanism |
CN105868031A (en) * | 2016-03-24 | 2016-08-17 | 车智互联(北京)科技有限公司 | A data transmission device and method |
CN106648461A (en) * | 2016-11-15 | 2017-05-10 | 努比亚技术有限公司 | Memory management device and method |
Also Published As
Publication number | Publication date |
---|---|
CN108363624A (en) | 2018-08-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9542229B2 (en) | Multiple core real-time task execution | |
US10841390B2 (en) | Method and system for synchronizing publication and subscription of message queues | |
CN107018091B (en) | Resource request scheduling method and device | |
US11226852B2 (en) | System for inter-process communication | |
KR20200061393A (en) | Resource scheduling method, scheduling server, cloud computing system, and storage medium | |
JP2019535072A (en) | System and method for providing messages to multiple subscribers | |
US10929293B2 (en) | Atomic operations for fabric shared memories | |
CN105700939A (en) | Method and system for multi-thread synchronization in distributed system | |
WO2012026034A1 (en) | Scheduler, multi-core processor system, and scheduling method | |
KR20110019729A (en) | Scheduling Collections in the Scheduler | |
CN102426542A (en) | Data center resource management system and job scheduling method | |
CN104798045B (en) | System and method for using sequencer in parallel priority queue | |
CN112148480A (en) | Task processing method, device and equipment based on multithreading and storage medium | |
CN108363624B (en) | Method, device and server for orderly controlling storage information by lockless threads | |
CN108363625B (en) | Method, device and server for orderly controlling storage information by lockless threads | |
US20180239646A1 (en) | Information processing device, information processing system, task processing method, and storage medium for storing program | |
KR20220014280A (en) | Systems and methods for resource-based scheduling of commands | |
CN110321215A (en) | Queue control method and device | |
CN115408117A (en) | Coroutine operation method and device, computer equipment and storage medium | |
WO2012120655A1 (en) | Scheduling method and scheduling system | |
US20040039884A1 (en) | System and method for managing the memory in a computer system | |
JPWO2017056310A1 (en) | Computer and computer control method | |
Afshar et al. | An optimal spin-lock priority assignment algorithm for real-time multi-core systems | |
JP2014146366A (en) | Multi-core processor system, and control method and control program of multi-core processor system | |
US20180373573A1 (en) | Lock manager |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20240729 Address after: No. 399 Songling Road, Laoshan District, Qingdao City, Shandong Province (A6 3rd Floor) Patentee after: QINGDAO JUKANYUN TECHNOLOGY CO.,LTD. Country or region after: China Address before: 266100 Songling Road, Laoshan District, Qingdao, Shandong Province, No. 399 Patentee before: JUHAOKAN TECHNOLOGY Co.,Ltd. Country or region before: China |
|
TR01 | Transfer of patent right |