CN111176806A - Service processing method, device and computer readable storage medium - Google Patents
- Publication number: CN111176806A
- Application number: CN201911233963.5A
- Authority
- CN
- China
- Prior art keywords
- thread
- thread group
- groups
- group
- threads
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multi Processors (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention provides a service processing method, apparatus, system, and computer-readable storage medium, wherein the method comprises the following steps: acquiring a service to be processed, and decomposing the service to be processed into a plurality of sub-services according to a preset processing flow; creating a plurality of thread groups corresponding to the sub-services, and configuring a plurality of queues among the thread groups according to the preset processing flow; and processing the corresponding sub-service through each of the plurality of thread groups while transferring data between the thread groups using the plurality of queues. With this method, a service to be processed can be decomposed into a plurality of sub-services, and each sub-service can be run with its own independent multi-threading, thereby improving the processing efficiency of complex services.
Description
Technical Field
The invention belongs to the field of data processing, and relates in particular to a service processing method, an apparatus, and a computer-readable storage medium.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
As business scenarios grow more complex, a single complex service may nest multiple steps with very different processing speeds. To maintain the overall processing performance of such a service, multi-threaded processing is usually adopted. However, the number of threads needed by each step may differ greatly, which makes it difficult to determine an appropriate thread count, and it is difficult to run the individual steps of one complex service with independent multi-threading. As a result, complex services are processed inefficiently.
Disclosure of Invention
In view of the above problems in the prior art, a method, an apparatus, and a computer-readable storage medium for service processing are provided.
The present invention provides the following.
In a first aspect, a method for service processing is provided, including: acquiring a service to be processed, and decomposing the service to be processed into a plurality of sub-services according to a preset processing flow; creating a plurality of thread groups corresponding to the sub-services, and configuring a plurality of queues among the thread groups according to the preset processing flow; and processing the corresponding sub-service through each of the plurality of thread groups while transferring data between the thread groups using the plurality of queues.
In some possible embodiments, configuring a plurality of queues among the plurality of thread groups according to the preset processing flow includes: identifying, according to the preset processing flow, adjacent thread groups in the plurality of thread groups that have a direct dependency; configuring a unique input queue for the subsequent thread group of each pair of adjacent thread groups; and configuring an output queue for the preceding thread group based on the input queue of the subsequent thread group, so that the preceding thread group passes its processed data to the subsequent thread group.
In some possible embodiments, processing the corresponding sub-service through each of the plurality of thread groups further includes: determining a first thread group in the plurality of thread groups, the first thread group being used to execute the sub-service with the highest priority among the plurality of sub-services; and acquiring external data through the first thread group, and processing the corresponding sub-service through each thread group in the plurality of thread groups.
In some possible embodiments, the method further comprises: and when each thread group in the plurality of thread groups processes the corresponding sub-service, dynamically adjusting the thread number in each thread group according to the thread busy-idle state of each thread group.
In some possible embodiments, dynamically adjusting the number of threads in each thread group according to the thread busy-idle status of each thread group further comprises: dynamically adjusting the number of threads in the first thread group according to the capacity of the external data source that provides the external data.
In some possible embodiments, dynamically adjusting the number of threads in each thread group according to the thread busy-idle status of each thread group further comprises: in each monitoring period, detecting the quantity of data to be processed in an input queue of a designated thread group in a plurality of thread groups as a first value of the designated thread group; the number of threads in the designated thread group is dynamically adjusted according to the first value of the designated thread group.
In some possible embodiments, dynamically adjusting the number of threads in each thread group according to the thread busy-idle status of each thread group further comprises: in each monitoring period, detecting the thread waiting times of the specified thread group as a second value of the specified thread group; and dynamically adjusting the number of threads in the specified thread group according to the second value of the specified thread group.
In some possible embodiments, the method further comprises: if the first value of the designated thread group exceeds a preset threshold, incrementing the current thread count of the designated thread group until the first value is detected to fall back to a value not exceeding the preset threshold; and/or, if the second value of the designated thread group is nonzero, decrementing the current thread count of the designated thread group until the second value is detected to drop to 0; wherein the preset threshold is determined by the current thread count of the designated thread group.
In some possible embodiments, the method further comprises: stopping the dynamic adjustment of the thread count in each thread group in response to a preset event; the preset event is a predefined action and/or the thread-count adjustment amplitude of each thread group reaching a preset degree of convergence.
In some possible embodiments, the method further comprises: when the service to be processed is finished, generating an end mark by the first thread group and passing it in sequence, through the queues, to the other thread groups in the plurality of thread groups; each of the other thread groups ends its own run upon reading the end mark.
In a second aspect, a service processing apparatus is provided, including: the decomposition unit is used for acquiring the service to be processed and decomposing the service to be processed into a plurality of sub-services according to a preset processing flow; the creating unit is used for creating a plurality of thread groups corresponding to the sub-services and configuring a plurality of queues among the thread groups according to a preset processing flow; and the processing unit is used for processing the corresponding sub-service through each thread group in the plurality of thread groups and transmitting data among the plurality of thread groups by using the plurality of queues.
In some possible embodiments, the creating unit is further configured to: identify, according to the preset processing flow, adjacent thread groups in the plurality of thread groups that have a direct dependency; configure a unique input queue for the subsequent thread group of each pair of adjacent thread groups; and configure an output queue for the preceding thread group based on the input queue of the subsequent thread group, so that the preceding thread group passes its processed data to the subsequent thread group.
In some possible embodiments, the processing unit is further configured to: determining a first thread group in the plurality of thread groups, wherein the first thread group is used for executing the sub-service with the highest priority in the plurality of sub-services; and acquiring external data by the first thread group, and processing the corresponding sub-service by each thread group in the plurality of thread groups.
In some possible embodiments, the apparatus further includes a dynamic adjustment unit, configured to: when each thread group in the plurality of thread groups processes its corresponding sub-service, dynamically adjust the thread count in each thread group according to the thread busy-idle status of each thread group.
In some possible embodiments, the dynamic adjustment unit is further configured to:
the number of threads in the first thread group is dynamically adjusted by the capabilities of the external data source providing the external data.
In some possible embodiments, the dynamic adjustment unit is further configured to: in each monitoring period, detecting the quantity of data to be processed in an input queue of a designated thread group in a plurality of thread groups as a first value of the designated thread group; the number of threads in the designated thread group is dynamically adjusted according to the first value of the designated thread group.
In some possible embodiments, the dynamic adjustment unit is further configured to: in each monitoring period, detecting the thread waiting times of the specified thread group as a second value of the specified thread group; and dynamically adjusting the number of threads in the specified thread group according to the second value of the specified thread group.
In some possible embodiments, the dynamic adjustment unit is further configured to: if the first value of the designated thread group exceeds a preset threshold, increment the current thread count of the designated thread group until the first value is detected to fall back to a value not exceeding the preset threshold; and/or, if the second value of the designated thread group is nonzero, decrement the current thread count of the designated thread group until the second value is detected to drop to 0; wherein the preset threshold is determined by the current thread count of the designated thread group.
In some possible embodiments, the processing unit is further configured to: stop the dynamic adjustment of the thread count in each thread group in response to a preset event; the preset event is a predefined action and/or the thread-count adjustment amplitude of each thread group reaching a preset degree of convergence.
In some possible embodiments, the processing unit is further configured to: when the service to be processed is finished, generate an end mark through the first thread group and pass it in sequence, through the queues, to the other thread groups in the plurality of thread groups; each of the other thread groups ends its own run upon reading the end mark.
In a third aspect, a service processing apparatus is provided, including: one or more multi-core processors; and a memory for storing one or more programs; wherein the one or more programs, when executed by the one or more multi-core processors, cause the one or more multi-core processors to implement: acquiring a service to be processed, and decomposing the service to be processed into a plurality of sub-services according to a preset processing flow; creating a plurality of thread groups corresponding to the sub-services, and configuring a plurality of queues among the thread groups according to the preset processing flow; and processing the corresponding sub-service through each of the plurality of thread groups while transferring data between the thread groups using the plurality of queues.
In a fourth aspect, there is provided a computer readable storage medium storing a program which, when executed by a multicore processor, causes the multicore processor to perform the method of the first aspect.
The embodiments of the present application adopt at least one of the above technical solutions, which can achieve the following beneficial effects: in these embodiments, the service to be processed is decomposed into a plurality of sub-services, and each thread group processes its corresponding sub-service, so that each sub-service runs with independent multi-threading, which improves the processing efficiency of complex services; the thread groups are connected, and data is transferred between them, through queues, so that the plurality of sub-tasks preserve the original processing logic.
It should be understood that the above description is only an overview of the technical solutions of the present invention, provided so that the technical means of the present invention can be clearly understood and implemented according to the content of the specification. To make the aforementioned and other objects, features, and advantages of the present invention more comprehensible, embodiments are described in detail below with reference to the figures.
Drawings
The advantages and benefits described herein, as well as other advantages and benefits, will be apparent to those of ordinary skill in the art upon reading the following detailed description of the exemplary embodiments. The drawings are only for purposes of illustrating exemplary embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like elements throughout. In the drawings:
fig. 1 is a schematic flow chart of a service processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a sub-business process according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a thread group flow for processing the sub-business flows of FIG. 2 according to an embodiment of the invention;
FIG. 4 is a core class diagram for implementing the thread group flow of FIG. 3 according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a service processing apparatus according to an embodiment of the present invention.
Fig. 6 is a schematic structural diagram of a service processing apparatus according to another embodiment of the present invention.
Fig. 7 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In the present invention, it is to be understood that terms such as "including" or "having," or the like, are intended to indicate the presence of the disclosed features, numbers, steps, behaviors, components, parts, or combinations thereof, and are not intended to preclude the possibility of the presence of one or more other features, numbers, steps, behaviors, components, parts, or combinations thereof.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
When a complex to-be-processed service is processed, the service can be decomposed into a plurality of sub-services according to a preset processing flow; a plurality of thread groups corresponding to the sub-services are created, and a plurality of queues are configured among the thread groups according to the preset processing flow; each of the plurality of thread groups processes its corresponding sub-service, and data is transferred between the thread groups using the queues. In this way, each sub-service can be run with independent multi-threading based on its own thread group, and the ratio of thread counts can be adjusted according to the processing-speed differences among the sub-services, further improving the processing efficiency of complex services; the thread groups are connected, and data is transferred, through the queues, so that the plurality of sub-tasks preserve the original processing logic.
Having described the general principles of the invention, various non-limiting embodiments of the invention are described in detail below.
Fig. 1 schematically shows a flow diagram of a traffic processing method 100 according to an embodiment of the present invention.
As shown in fig. 1, the method 100 may include:
s101, acquiring a service to be processed, and decomposing the service to be processed into a plurality of sub-services according to a preset processing flow;
specifically, the preset processing flow refers to any flow that can complete the service to be processed, and may be determined according to the specific situation of that service, which is not limited in this disclosure. Optionally, the preset processing flow may contain complex asynchronous processing logic. On this basis, this embodiment adopts a policy of decomposing the to-be-processed service according to the preset processing flow. Specifically, the service to be processed may be divided into a plurality of ordered sub-services according to the preset processing flow, so that the entire service can be completed by executing the sub-services in sequence. The specific operation of each sub-service may be determined by the actual business scenario, for example, reading data from a database or performing a particular data-processing operation, which is likewise not limited in this disclosure. There may be a direct data dependency between adjacently processed sub-services. For example, the to-be-processed service may be decomposed into the sub-service flow shown in fig. 2, which contains sub-services 1 to 5 of a preset processing flow.
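As a minimal illustration of this decomposition step, the preset processing flow can be modelled as an ordered list of sub-service names. The step names and record fields below are invented for this sketch and do not come from the patent:

```python
# Hypothetical sketch: a "preset processing flow" as an ordered list of
# sub-service names. decompose() splits one to-be-processed service into
# ordered sub-services; the "order" field preserves the sequence so that
# adjacently processed sub-services keep their direct data dependency.
PRESET_FLOW = ["sub-service 1", "sub-service 2", "sub-service 3",
               "sub-service 4", "sub-service 5"]

def decompose(service_id, flow=PRESET_FLOW):
    """Split a to-be-processed service into ordered sub-services."""
    return [{"service": service_id, "step": name, "order": i}
            for i, name in enumerate(flow)]
```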
As shown in fig. 1, the method 100 may further include:
s102, creating a plurality of thread groups corresponding to the sub-services, and configuring a plurality of queues among the thread groups according to a preset processing flow;
specifically, each thread group represents a set of threads, and the number of threads of each thread group may be determined according to the difficulty level of the corresponding subtask. The queues are first-in first-out queues. Each thread group transmits the data processed by the thread group to the next thread group through the queue.
For example, FIG. 3 shows a schematic diagram of a thread group flow for processing the sub-business flows of FIG. 2. It can be seen that the flow comprises a plurality of thread groups and a plurality of queues, with the thread groups and subtasks in one-to-one correspondence: subtask 1 is processed by thread group 1, whose output data is passed to thread group 2 through queue 1; subtask 2 is processed by thread group 2, whose output data is passed to thread group 3 and thread group 4 through queue 2 and queue 3 respectively; subtask 3 is processed by thread group 3 and subtask 4 by thread group 4, and their output data is passed to thread group 5 through queue 4; subtask 5 is executed by thread group 5.
Further, to implement the above thread groups and queues in a concrete scenario, FIG. 4 shows the core class diagram used in this embodiment to implement the thread group flow of FIG. 3. The MultiThreadStep class implements each thread group in FIG. 3; it is responsible for building, maintaining, and destroying the thread group, and for providing the threads in the group with an input data queue and output data queues. Each thread group has exactly one input queue (inputQueue) and either one output queue (corresponding to outputQueue in FIG. 4) or multiple output queues (corresponding to the outputQueue list in FIG. 4); for example, since two branches follow thread group 2 in FIG. 3, thread group 2 has two output queues, namely queue 2 and queue 3. The TaskQueue class implements the individual queues in FIG. 3. The Process class assembles the thread groups and queues constructed according to the preset processing flow, connecting the thread groups into the flow shown in FIG. 3; once the flow is assembled, Process can start or stop its run.
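The responsibilities of MultiThreadStep and Process described above can be sketched as follows. This is a minimal Python analogue, not the patent's implementation: the constructor signatures, the worker callable, and the use of Queue.join() for completion are assumptions, and queue.Queue stands in for the TaskQueue class.

```python
import queue
import threading

class MultiThreadStep:
    """Sketch of the MultiThreadStep role in FIG. 4: builds one thread
    group with exactly one input queue (inputQueue) and one or more
    output queues (outputQueue list)."""

    def __init__(self, name, worker, n_threads):
        self.name = name
        self.worker = worker                  # callable: item -> result
        self.input_queue = queue.Queue()      # the group's unique input
        self.output_queues = []               # one or many, per FIG. 3
        self._threads = [threading.Thread(target=self._loop, daemon=True)
                         for _ in range(n_threads)]

    def _loop(self):
        while True:
            item = self.input_queue.get()
            result = self.worker(item)
            for q in self.output_queues:      # hand results to successors
                q.put(result)
            self.input_queue.task_done()

    def start(self):
        for t in self._threads:
            t.start()


class Process:
    """Sketch of the Process role: assembles the steps into a flow and
    initiates its run; steps are listed in processing order."""

    def __init__(self, steps):
        self.steps = steps

    def run(self, items):
        for step in self.steps:
            step.start()
        for item in items:                    # feed the first thread group
            self.steps[0].input_queue.put(item)
        for step in self.steps:               # wait stage by stage
            step.input_queue.join()
```

To wire two steps together, the successor's input queue is simply appended to the predecessor's output queue list before `Process.run` is called.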
In some possible embodiments, configuring a plurality of queues among the plurality of thread groups according to the preset processing flow includes: identifying, according to the preset processing flow, adjacent thread groups in the plurality of thread groups that have a direct dependency; configuring a unique input queue for the subsequent thread group of each pair of adjacent thread groups; and configuring an output queue for the preceding thread group based on the input queue of the subsequent thread group, so that the preceding thread group passes its processed data to the subsequent thread group. A direct dependency means that the input of the subsequent thread group directly depends on the output of the preceding thread group; "preceding" and "subsequent" are relative notions. Taking thread groups 1-5 in fig. 3 as an example: thread group 1 and thread group 2 form adjacent thread groups, in which thread group 1 is the preceding group and thread group 2 the subsequent group; thread group 2 together with thread groups 3 and 4 forms adjacent thread groups, in which thread group 2 is the preceding group and thread groups 3 and 4 are the subsequent groups; thread groups 3 and 4 together with thread group 5 form adjacent thread groups, in which thread groups 3 and 4 are the preceding groups and thread group 5 is the subsequent group. In this way, the plurality of thread groups process their corresponding subtasks in order.
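The queue-configuration rule above (one unique input queue per subsequent group, with the preceding group's output queues derived from its successors' input queues) can be sketched for the FIG. 3 topology. The edge list and helper below are assumptions for illustration:

```python
# Hypothetical wiring of the FIG. 3 flow: each (preceding, subsequent)
# pair of adjacent thread groups shares one FIFO queue, which serves as
# the subsequent group's unique input queue and as one of the preceding
# group's output queues. Groups 3 and 4 both feed group 5, so they end
# up sharing the same queue (queue 4 in FIG. 3).
import queue

EDGES = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5)]

def build_queues(edges):
    """Derive input/output queue maps from adjacent-group dependencies."""
    inputs, outputs = {}, {}
    for pred, succ in edges:
        q = inputs.setdefault(succ, queue.Queue())  # unique input per group
        outputs.setdefault(pred, []).append(q)      # may fan out to several
    return inputs, outputs
```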
As shown in fig. 1, the method 100 may further include:
step S103, processing the corresponding sub-service through each of the plurality of thread groups, and transferring data between the plurality of thread groups using the plurality of queues.
In some possible embodiments, processing the corresponding sub-service through each of the plurality of thread groups further includes: determining a first thread group in the plurality of thread groups, the first thread group being used to execute the sub-service with the highest priority among the plurality of sub-services; and acquiring external data through the first thread group, and processing the corresponding sub-service through each thread group in the plurality of thread groups.
For example, as shown in fig. 3, taking an e-commerce merchandise-information synchronization service as the service to be processed, the service may be divided across 5 thread groups and 4 queues, with each step implemented as follows: thread group 1 calls the e-commerce interface to acquire the merchandise list, assembles the basic information of each item, and puts it into queue 1. Thread group 2 takes basic merchandise information from queue 1, acquires the corresponding auxiliary information, and puts the basic information and the auxiliary information into queue 2 and queue 3 respectively. Thread group 3 takes basic merchandise information from queue 2, stores it into the corresponding table, and puts it into queue 4; optionally, to improve efficiency and reduce pressure on the database, thread group 3 may adopt a strategy of accumulating 1000 pieces of basic information and then storing them in one batch. Thread group 4 takes the auxiliary information from queue 3, stores it into the corresponding table, and puts it into queue 4; optionally, for the same reason, thread group 4 may likewise accumulate 1000 pieces of auxiliary information before storing them in one batch. Thread group 5 takes the table of basic information and the table of auxiliary information from queue 4 and establishes the association between the merchandise information and the auxiliary information according to the Stock Keeping Unit (SKU) of the commodity.
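The batch-store strategy mentioned for thread groups 3 and 4 (accumulate 1000 records, then write once) can be sketched as a drain loop. Here `save_batch` is a hypothetical stand-in for the real database write, and using `None` as an end mark is an assumption for this sketch:

```python
# Sketch of the batch-store strategy: pull items from the group's input
# queue, flush every BATCH_SIZE items to reduce database pressure, and
# flush the remainder when the end mark (None) arrives.
import queue

BATCH_SIZE = 1000

def drain_in_batches(in_q, save_batch, batch_size=BATCH_SIZE):
    """Consume items until the end mark, flushing in fixed-size batches."""
    batch = []
    while True:
        item = in_q.get()
        if item is None:              # end mark: flush remainder and stop
            if batch:
                save_batch(batch)
            return
        batch.append(item)
        if len(batch) >= batch_size:
            save_batch(batch)
            batch = []
```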
In some possible embodiments, the method further comprises: while each thread group in the plurality of thread groups processes its corresponding sub-service, dynamically adjusting the thread count in each thread group according to the thread busy-idle status of that group. It should be noted that too many threads in a thread group incur scheduling overhead, while too few threads cause processing to block, either of which hurts overall performance. The present disclosure therefore can dynamically and independently adjust the thread count of each thread group based on its thread busy-idle status.
In some possible embodiments, dynamically adjusting the thread count in each thread group according to the thread busy-idle status of each thread group further comprises: dynamically adjusting the number of threads in the first thread group according to the capacity of the external data source that provides the external data.
In some possible embodiments, dynamically adjusting the thread count in each thread group according to the thread busy-idle status of each thread group further comprises: in each monitoring period, detecting the amount of data pending in the input queue of a designated thread group among the plurality of thread groups as the first value of that designated thread group; and dynamically adjusting the thread count of the designated thread group according to its first value. The designated thread group may be any thread group other than the first thread group. For example, as shown in fig. 3, the designated thread group may be thread group 2, whose input queue is queue 1; the amount of pending data in queue 1 may then be detected as the first value of thread group 2. It can be understood that when the first value of the designated thread group is much larger than its thread count, the group is in a busy state, and its thread count can be adjusted effectively and dynamically according to the first value.
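A per-period sample of the first value can be sketched directly from the group's input queue. The `is_busy` helper and its 2x threshold follow the worked example given for the preset threshold and are assumptions of this sketch, not requirements of the method:

```python
# Sketch of sampling the 'first value' of a designated thread group:
# the amount of pending data in its input queue at monitoring time.
import queue

def first_value(in_q):
    """One monitoring-period sample of the input-queue backlog.
    Queue.qsize() is approximate under concurrency, which is
    acceptable for a busy-idle heuristic."""
    return in_q.qsize()

def is_busy(in_q, n_threads):
    """Assumed busy test: backlog beyond 2x the current thread count."""
    return first_value(in_q) > 2 * n_threads
```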
In some possible embodiments, dynamically adjusting the thread count in each thread group according to the thread busy-idle status of each thread group further comprises: in each monitoring period, detecting the thread wait count of the designated thread group as the second value of that designated thread group; and dynamically adjusting the thread count of the designated thread group according to its second value. The designated thread group may be any thread group other than the first thread group. For example, as shown in fig. 3, the designated thread group may be thread group 2, whose input queue is queue 1; the number of times a thread in thread group 2 fails to fetch data from queue 1 within a period may then be detected as the second value of thread group 2. It can be understood that ideally the thread wait count is 0; when the wait count of the designated thread group remains greater than 0, the group is in an idle state, and its thread count can be adjusted effectively and dynamically according to the second value.
In some possible embodiments, the method further comprises: if the first value of the designated thread group exceeds a preset threshold, incrementing the current thread count of the designated thread group until the first value is detected to fall back to a value not exceeding the preset threshold; and/or, if the second value of the designated thread group is nonzero, decrementing the current thread count of the designated thread group until the second value is detected to drop to 0; wherein the preset threshold is determined by the current thread count of the designated thread group. For example, the preset threshold may be 2 times the current thread count of the designated thread group. Suppose the designated thread groups are thread groups 2-5 in fig. 3 and the current thread count of each designated thread group is N_i, with i = 2-5. The optimal state of the whole flow is then: (1) the first value of each designated thread group does not exceed 2*N_i; and (2) the second value of every designated thread group is 0. Accordingly, when the second value of any designated thread group is detected to be nonzero, the thread count of that group is reduced by one at a time until the second value reaches 0; when the first value of a designated thread group exceeds 2*N_i, the thread count of that group is increased by one at a time until the first value no longer exceeds 2*N_i. In this way, the optimal thread count of each thread group can be derived from the actual running conditions and adjusted dynamically, achieving dynamic balance of thread counts among the thread groups.
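One monitoring-period step of this adjustment rule can be sketched as a pure function. The step size of one thread and the 2*N threshold follow the example above; the floor of one thread is an added assumption so a group never shrinks to zero:

```python
def adjust_thread_count(n_threads, first_value, second_value):
    """One monitoring-period adjustment for a designated thread group:
    shrink while threads were observed waiting (idle), grow while the
    input-queue backlog exceeds twice the current thread count (busy),
    otherwise leave the group alone."""
    if second_value > 0 and n_threads > 1:
        return n_threads - 1          # idle: threads waited on the queue
    if first_value > 2 * n_threads:
        return n_threads + 1          # busy: backlog beyond the threshold
    return n_threads                  # balanced: no change
```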
In some possible embodiments, the method further comprises: stopping the dynamic adjustment of the thread count in each thread group in response to a preset event, where the preset event is a preset action and/or the thread-count adjustment amplitude of each thread group reaching a preset degree of convergence. It can be understood that, to reduce the performance impact of dynamically adjusting the thread count of each thread group, a trial-run period and a dynamic adjustment switch may be set: the switch is turned on during the trial-run period, the optimized thread counts of all thread groups are obtained through the trial run, and the switch is turned off after the trial-run period.
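A hypothetical sketch of such a switch (the class name, the convergence test on the last adjustment delta, and the one-way disable are all assumptions, not the patent's design): adjustment stays enabled only while the trial period lasts and the adjustment amplitude has not yet converged.

```python
import time

class AdjustmentSwitch:
    """Trial-run switch: dynamic adjustment is active only until the
    trial period elapses or the adjustment amplitude converges.
    Once disabled, the switch never re-enables."""
    def __init__(self, trial_seconds: float, converge_delta: int = 0):
        self.deadline = time.monotonic() + trial_seconds
        self.converge_delta = converge_delta
        self.enabled = True

    def should_adjust(self, last_delta: int) -> bool:
        # Disable on either preset event: trial period over, or the
        # last thread-count change was within the convergence bound.
        if time.monotonic() > self.deadline or abs(last_delta) <= self.converge_delta:
            self.enabled = False
        return self.enabled

switch = AdjustmentSwitch(trial_seconds=60.0)
print(switch.should_adjust(last_delta=2))  # True: still in the trial run
print(switch.should_adjust(last_delta=0))  # False: amplitude has converged
```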
In some possible embodiments, the method further comprises: when the service to be processed is finished, the first thread group generates an end mark and transmits it sequentially to the other thread groups through the queues; each of the other thread groups ends its own run upon reading the end mark. It will be appreciated that the overall process operates like a production pipeline: each thread group corresponds to a production shift group, each queue corresponds to a warehouse, and each thread group passes processed data to the next thread group via a queue. On this basis, the first thread group is the first producer on the line; when the service is finished, it generates an end mark that is passed down the queues in sequence, each thread group stopping when it reads the mark, and once the end mark reaches the last thread group the whole pipeline stops running.
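This end-mark scheme is the familiar "poison pill" pattern. A minimal two-stage sketch (the sentinel object, stage function, and `transform` parameter are illustrative assumptions, not the patent's implementation):

```python
import queue
import threading

END = object()  # end mark generated by the first thread group

def stage(in_q, out_q, transform):
    """One thread group: process items until the end mark arrives,
    then forward the mark so downstream groups also stop."""
    while True:
        item = in_q.get()
        if item is END:
            if out_q is not None:
                out_q.put(END)  # pass the end mark to the next group
            break
        result = transform(item)
        if out_q is not None:
            out_q.put(result)

q1, q2 = queue.Queue(), queue.Queue()
results = []
t1 = threading.Thread(target=stage, args=(q1, q2, lambda x: x * 2))
t2 = threading.Thread(target=stage, args=(q2, None, results.append))
t1.start(); t2.start()

for x in [1, 2, 3]:
    q1.put(x)
q1.put(END)          # service finished: the first group emits the end mark
t1.join(); t2.join() # mark has propagated; the whole pipeline has stopped
print(results)       # [2, 4, 6]
```

Because the mark travels through the same queues as the data, each group is guaranteed to drain all pending work before it stops.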
Thus, according to the methods provided by the various aspects of the embodiments of the present invention, each sub-service can be run on its own independent set of threads, and the ratio of thread counts can be adjusted based on the difference in processing speed of each sub-service, further improving the processing efficiency of complex services. Connection between thread groups and data transfer are achieved through queues, so that the multiple sub-tasks can keep their original processing logic. In addition, the optimal thread count of each sub-task can be derived from the actual running conditions and adjusted dynamically.
Based on the same technical concept, an embodiment of the present invention further provides a service processing apparatus, configured to execute the service processing method provided in any of the above embodiments. Fig. 5 is a schematic structural diagram of a service processing apparatus according to an embodiment of the present invention.
As shown in fig. 5, the apparatus 500 includes:
a decomposition unit 501, configured to acquire a service to be processed, and decompose the service to be processed into multiple sub-services according to a preset processing flow;
a creating unit 502, configured to create a plurality of thread groups corresponding to the plurality of sub-services, and configure a plurality of queues between the plurality of thread groups according to a preset processing flow;
the processing unit 503 is configured to process the corresponding sub-service through each of the plurality of thread groups, and transmit data between the plurality of thread groups by using the plurality of queues.
In some possible embodiments, the creating unit 502 is further configured to: identify, according to the preset processing flow, adjacent thread groups having a direct dependency relationship among the plurality of thread groups; configure a unique input queue for the subsequent thread group of each pair of adjacent thread groups; and configure the output queue of the preceding thread group to be the input queue of the subsequent thread group, so that the preceding thread group transmits its processed data to the subsequent thread group.
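The wiring rule above — each subsequent group gets a unique input queue, and the preceding group's output queue is that same queue — can be sketched as follows (the function name and dictionary layout are assumptions of this illustration):

```python
import queue

def wire_pipeline(stage_names):
    """Give each subsequent group a unique input queue; the preceding
    group's output queue is simply the next group's input queue."""
    input_q = {}    # group -> its unique input queue (first group has none)
    output_q = {}   # group -> queue it writes processed data into
    for prev, nxt in zip(stage_names, stage_names[1:]):
        q = queue.Queue()
        input_q[nxt] = q    # unique input queue of the subsequent group
        output_q[prev] = q  # same object serves as the preceding group's output
    return input_q, output_q

inp, out = wire_pipeline(["group1", "group2", "group3"])
# group1's output queue is exactly group2's input queue:
print(out["group1"] is inp["group2"])  # True
```

Note that the first group has no input queue here (it pulls external data) and the last group has no output queue, matching the pipeline structure described in the method.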
In some possible embodiments, the processing unit 503 is further configured to: determining a first thread group in the plurality of thread groups, wherein the first thread group is used for executing the sub-service with the highest priority in the plurality of sub-services; and acquiring external data by the first thread group, and processing the corresponding sub-service by each thread group in the plurality of thread groups.
In some possible embodiments, the system further includes a dynamic adjustment unit, configured to: and when each thread group in the plurality of thread groups processes the corresponding sub-service, dynamically adjusting the thread number in each thread group according to the thread busy-idle state of each thread group.
In some possible embodiments, the dynamic adjustment unit is further configured to:
the number of threads in the first thread group is dynamically adjusted by the capabilities of the external data source providing the external data.
In some possible embodiments, the dynamic adjustment unit is further configured to: in each monitoring period, detecting the quantity of data to be processed in an input queue of a designated thread group in a plurality of thread groups as a first value of the designated thread group; the number of threads in the designated thread group is dynamically adjusted according to the first value of the designated thread group.
In some possible embodiments, the dynamic adjustment unit is further configured to: in each monitoring period, detecting the thread waiting times of the specified thread group as a second value of the specified thread group; and dynamically adjusting the number of threads in the specified thread group according to the second value of the specified thread group.
In some possible embodiments, the dynamic adjustment unit is further configured to: if the first value of the designated thread group exceeds the preset threshold, increment the current thread count of the designated thread group until the first value is detected to have fallen to a value not exceeding the preset threshold; and/or, if the second value of the designated thread group is not 0, decrement the current thread count of the designated thread group until the second value is detected to have fallen to 0; wherein the preset threshold is determined by the current thread count of the designated thread group.
In some possible embodiments, the processing unit 503 is further configured to: stop the dynamic adjustment of the thread count in each thread group in response to a preset event, where the preset event is a preset action and/or the thread-count adjustment amplitude of each thread group reaching a preset degree of convergence.
In some possible embodiments, the processing unit 503 is further configured to: when the service to be processed is finished, cause the first thread group to generate an end mark and transmit it sequentially to the other thread groups through the queues; each of the other thread groups ends its own run upon reading the end mark.
In this way, according to the apparatus provided in the various aspects of the embodiments of the present invention, each sub-service can be run on its own independent set of threads, and the ratio of thread counts can be adjusted based on the difference in processing speed of each sub-service, further improving the processing efficiency of complex services. Connection between thread groups and data transfer are achieved through queues, so that the multiple sub-tasks can keep their original processing logic. In addition, the optimal thread count of each sub-task can be derived from the actual running conditions and adjusted dynamically.
It should be noted that the service processing apparatus in this embodiment of the present application can implement each process of the foregoing service processing method embodiments and achieves the same effects and functions, which are not described again here.
Those skilled in the art will appreciate that aspects of the present invention may be embodied as an apparatus, method, or computer-readable storage medium. Thus, various aspects of the invention may take the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "device."
In some possible embodiments, a task processing device of the present invention may include one or more processors and at least one memory. The memory stores a program that, when executed by the processor, causes the processor to perform the steps of: acquiring a service to be processed, and decomposing the service to be processed into a plurality of sub-services according to a preset processing flow; creating a plurality of thread groups corresponding to the sub-services, and configuring a plurality of queues among the thread groups according to the preset processing flow; and processing the corresponding sub-service through each of the plurality of thread groups, transferring data between the thread groups by means of the queues.
The task processing device 6 according to this embodiment of the present invention is described below with reference to fig. 6. The device 6 shown in fig. 6 is only an example and should not bring any limitation to the function and the scope of use of the embodiment of the present invention.
As shown in FIG. 6, the apparatus 6 may take the form of a general purpose computing device, including but not limited to: at least one processor 10, at least one memory 20, a bus 60 connecting the different device components.
The bus 60 includes a data bus, an address bus, and a control bus.
The memory 20 may include volatile memory, such as Random Access Memory (RAM) 21 and/or cache memory 22, and may further include Read-Only Memory (ROM) 23.
Memory 20 may also include program modules 24, such program modules 24 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
The apparatus 6 may also communicate with one or more external devices 2 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), as well as with one or more other devices. Such communication may take place via an input/output (I/O) interface 40, with output shown on the display unit 30. Also, the device 6 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 50. As shown, the network adapter 50 communicates with the other modules in the device 6 over the bus 60. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the apparatus 6, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID devices, tape drives, and data backup storage devices, among others.
Fig. 7 illustrates a computer-readable storage medium for performing the method as described above.
In some possible embodiments, aspects of the invention may also be embodied in the form of a computer-readable storage medium comprising program code for causing a processor to perform the above-described method when the program code is executed by the processor.
The above-described method includes a number of operations and steps, both shown and not shown in the figures above, which will not be described again here.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As shown in fig. 7, a computer-readable storage medium 70 according to an embodiment of the present invention is described, which may employ a portable Compact Disc Read-Only Memory (CD-ROM) containing program code, and may be run on a terminal device such as a personal computer. However, the computer-readable storage medium of the present invention is not limited thereto; in this document, the readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, C++, or the like, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device over any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., over the Internet using an Internet service provider).
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
While the spirit and principles of the invention have been described with reference to several particular embodiments, it is to be understood that the invention is not limited to the disclosed embodiments; nor does the division into aspects imply that features in these aspects cannot be combined to advantage, as that division is made for convenience of presentation only. The invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
Claims (22)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911233963.5A CN111176806B (en) | 2019-12-05 | 2019-12-05 | Service processing method and device and computer readable storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111176806A true CN111176806A (en) | 2020-05-19 |
| CN111176806B CN111176806B (en) | 2024-02-23 |
Family
ID=70624546
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911233963.5A Active CN111176806B (en) | 2019-12-05 | 2019-12-05 | Service processing method and device and computer readable storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111176806B (en) |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111796919A (en) * | 2020-06-19 | 2020-10-20 | 浙江大华技术股份有限公司 | Scheduling method and device of intelligent algorithm |
| CN112148455A (en) * | 2020-09-29 | 2020-12-29 | 星环信息科技(上海)有限公司 | Task processing method, device and medium |
| CN113905273A (en) * | 2021-09-29 | 2022-01-07 | 上海阵量智能科技有限公司 | Task execution method and device |
| CN114595070A (en) * | 2022-05-10 | 2022-06-07 | 上海登临科技有限公司 | Processor, multithreading combination method and electronic equipment |
| CN115599558A (en) * | 2022-12-13 | 2023-01-13 | 无锡学院(Cn) | Task processing method and system for industrial Internet platform |
| CN115756886A (en) * | 2022-11-04 | 2023-03-07 | 烽火通信科技股份有限公司 | A method and system for message parallel processing based on asynchronous non-blocking |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20110296420A1 (en) * | 2010-05-25 | 2011-12-01 | Anton Pegushin | Method and system for analyzing the performance of multi-threaded applications |
| CN107220033A (en) * | 2017-07-05 | 2017-09-29 | 百度在线网络技术(北京)有限公司 | Method and apparatus for controlling thread pool thread quantity |
| CN109582455A (en) * | 2018-12-03 | 2019-04-05 | 恒生电子股份有限公司 | Multithreading task processing method, device and storage medium |
| CN109634761A (en) * | 2018-12-17 | 2019-04-16 | 深圳乐信软件技术有限公司 | A kind of system mode circulation method, apparatus, computer equipment and storage medium |
| CN110413390A (en) * | 2019-07-24 | 2019-11-05 | 深圳市盟天科技有限公司 | Thread task processing method, device, server and storage medium |
| CN110457124A (en) * | 2019-08-06 | 2019-11-15 | 中国工商银行股份有限公司 | For the processing method and its device of business thread, electronic equipment and medium |
- 2019-12-05: application CN201911233963.5A filed; granted as patent CN111176806B (status: Active)
Also Published As
| Publication number | Publication date |
|---|---|
| CN111176806B (en) | 2024-02-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111176806A (en) | Service processing method, device and computer readable storage medium | |
| US8572622B2 (en) | Reducing queue synchronization of multiple work items in a system with high memory latency between processing nodes | |
| TWI407373B (en) | Method and apparatus for managing resources of a multi-core architecture | |
| US8327363B2 (en) | Application compatibility in multi-core systems | |
| US9858115B2 (en) | Task scheduling method for dispatching tasks based on computing power of different processor cores in heterogeneous multi-core processor system and related non-transitory computer readable medium | |
| US9262220B2 (en) | Scheduling workloads and making provision decisions of computer resources in a computing environment | |
| US20120284720A1 (en) | Hardware assisted scheduling in computer system | |
| US10929181B1 (en) | Developer independent resource based multithreading module | |
| JP2010108153A (en) | Scheduler, processor system, program generating method, and program generating program | |
| US9817696B2 (en) | Low latency scheduling on simultaneous multi-threading cores | |
| CN110795254A (en) | Method for processing high-concurrency IO based on PHP | |
| US9471387B2 (en) | Scheduling in job execution | |
| CN112579267A (en) | Decentralized big data job flow scheduling method and device | |
| CN101976204B (en) | Service-oriented heterogeneous multi-core computing platform and task scheduling method used by same | |
| US20140359635A1 (en) | Processing data by using simultaneous multithreading | |
| US9229716B2 (en) | Time-based task priority boost management using boost register values | |
| CN112860396A (en) | GPU (graphics processing Unit) scheduling method and system based on distributed deep learning | |
| CN117539595A (en) | Cooperative scheduling method and related equipment | |
| JP5678347B2 (en) | IT system configuration method, computer program thereof, and IT system | |
| US8819690B2 (en) | System for reducing data transfer latency to a global queue by generating bit mask to identify selected processing nodes/units in multi-node data processing system | |
| US11256543B2 (en) | Processor and instruction scheduling method | |
| US20120158651A1 (en) | Configuration of asynchronous message processing in dataflow networks | |
| CN107562527B (en) | Real-time task scheduling method for SMP (symmetric multi-processing) on RTOS (remote terminal operating system) | |
| CN116795503A (en) | Task scheduling method, task scheduling device, graphics processor and electronic equipment | |
| CN115328549A (en) | Signal-based data stream triggering method, system, device and medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||