CN108647104B - Request processing method, server and computer readable storage medium - Google Patents
- Publication number
- CN108647104B CN108647104B CN201810460711.5A CN201810460711A CN108647104B CN 108647104 B CN108647104 B CN 108647104B CN 201810460711 A CN201810460711 A CN 201810460711A CN 108647104 B CN108647104 B CN 108647104B
- Authority
- CN
- China
- Prior art keywords
- threads
- request
- connection
- queue
- sending
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/547—Remote procedure calls [RPC]; Web services
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/54—Indexing scheme relating to G06F9/54
- G06F2209/541—Client-server
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/54—Indexing scheme relating to G06F9/54
- G06F2209/548—Queue
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multi Processors (AREA)
Abstract
The invention discloses a request processing method, a server, and a computer-readable storage medium. The method comprises: monitoring, reading and processing connection requests sent by a client in parallel through a plurality of created receiving threads, read request threads and task processing threads, and sending the processed messages to the client in parallel through a plurality of created sending threads. The queue number of the connection queue into which a monitored connection request is stored is mod (value of the source socket descriptor of the connection request, number of read request threads), and the queue number of the sending queue into which a processed message is stored is mod (value of the source socket descriptor of the connection request, number of sending threads). With the scheme provided by the invention, threads need not wait for one another unnecessarily and the queues need not be locked, so the response time and throughput capacity of the server are improved.
Description
Technical Field
The present invention relates to the field of computers, and in particular, to a request processing method, a server, and a computer-readable storage medium.
Background
The performance of a server determines the quality of the service it provides, mainly reflected in the latency and throughput of the service, and extracting the highest possible service performance from limited computer hardware resources is the goal every server designer pursues. Operating systems generally provide developers with a complete set of network programming interfaces and concurrent programming mechanisms; using these interfaces and mechanisms reasonably and effectively to build a complete, general and efficient server model has positive guiding significance for server developers, and yields faster responses and higher performance for server users.
The TNonblockingServer of Thrift (one of Facebook's core technology frameworks) is the most usable and scalable of Thrift's servers. TNonblockingServer is designed around an event callback mechanism: connection monitoring, client connection creation, client request processing and so on are all non-blocking calls. It is not, however, free of waiting: while a processing thread is handling a request on a connection, no further request is read from that connection until the processing thread completes the processing and sends the result to the client. That is, for a given connection, the three steps of receiving the request, processing the request and sending the response are synchronous; although request processing is completed in a thread pool, the receiving and sending threads must wait.
Therefore, the existing server model has the problem that connections must wait, which affects response speed.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a request processing method, a server, and a computer-readable storage medium that solve them.
According to an aspect of the embodiments of the present invention, there is provided a request processing method, applied to a server, including:
monitoring, reading and processing connection requests sent by a client in parallel through a plurality of created receiving threads, a plurality of read request threads and a plurality of task processing threads, and sending the processed messages to the client in parallel through a plurality of created sending threads;
storing the monitored connection requests into connection queues for the plurality of read request threads to read; the queue number of the connection queue used is: mod (the value of the source socket descriptor of the connection request, the number of read request threads), where mod denotes the remainder operation;
storing the processed messages into sending queues for the plurality of sending threads to read; the queue number of the sending queue used is: mod (the value of the source socket descriptor of the connection request, the number of sending threads).
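The mod-based queue selection described above can be sketched as follows (a minimal illustration; the function and variable names are ours, not the patent's):

```python
def select_queue(source_fd: int, num_threads: int) -> int:
    """Map a connection's source socket descriptor to a queue number.

    Because the mapping depends only on the descriptor, all traffic for
    one connection always lands in the queue owned by the same thread,
    so the queues never need locking between their owner threads.
    """
    return source_fd % num_threads
```

For example, with 4 read request threads, a connection whose source socket descriptor is 7 is always routed to connection queue 3; with 4 sending threads, its responses likewise go to sending queue 3.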
Optionally, the number of the read request threads is the same as the number of the sending threads;
and/or the plurality of read request threads correspond one to one to the plurality of connection queues; the plurality of sending threads correspond one to one to the plurality of sending queues.
Optionally, the monitoring, reading and processing of the connection requests sent by the client in parallel through the created plurality of receiving threads, read request threads and task processing threads includes:
monitoring connection requests of a client side in parallel through the plurality of receiving threads, and storing the monitored connection requests into the connection queue;
reading the connection requests in the connection queue in parallel through the plurality of read request threads, packaging the read connection requests and storing the packaged connection requests into a request queue;
and reading the connection request in the request queue in parallel through the plurality of task processing threads, then performing request processing, and storing the message obtained after the request processing into the sending queue.
Optionally, the number of the request queues is one or more;
when there are multiple request queues, the queue number of the request queue used is: mod (value of the source socket descriptor of the connection request, number of task processing threads).
Optionally, the sending the processed message to the client side in parallel through the created multiple sending threads specifically includes:
reading the processed messages in the sending queue in parallel through the plurality of sending threads, and writing the messages into a connection writing buffer area;
and sending the message in the connection writing buffer to the client through the plurality of sending threads.
Optionally, the method further comprises:
and detecting in real time, through the plurality of read request threads, whether the connection write buffer is full, and when it is full, notifying the plurality of sending threads to stop writing to the connection write buffer.
According to another aspect of the embodiments of the present invention, there is provided a server, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the following method steps:
monitoring, reading and processing connection requests sent by a client in parallel through a plurality of created receiving threads, a plurality of read request threads and a plurality of task processing threads, and sending the processed messages to the client in parallel through a plurality of created sending threads;
storing the monitored connection requests into connection queues for the plurality of read request threads to read; the queue number of the connection queue used is: mod (the value of the source socket descriptor of the connection request, the number of read request threads), where mod denotes the remainder operation;
storing the processed messages into sending queues for the plurality of sending threads to read; the queue number of the sending queue used is: mod (the value of the source socket descriptor of the connection request, the number of sending threads).
Optionally, the number of the read request threads is the same as the number of the sending threads;
and/or the plurality of read request threads correspond one to one to the plurality of connection queues; the plurality of sending threads correspond one to one to the plurality of sending queues.
Optionally, when the processor monitors, reads and processes connection requests sent by a client in parallel through the created plurality of receiving threads, read request threads and task processing threads, the method specifically includes:
monitoring connection requests of a client side in parallel through the plurality of receiving threads, and storing the monitored connection requests into the connection queue;
reading the connection requests in the connection queue in parallel through the plurality of read request threads, packaging the read connection requests and storing the packaged connection requests into a request queue;
and reading the connection request in the request queue in parallel through the plurality of task processing threads, then performing request processing, and storing the message obtained after the request processing into the sending queue.
Optionally, the number of request queues is one or more; when there are multiple request queues, the queue number of the request queue used is: mod (value of the source socket descriptor of the connection request, number of task processing threads).
Optionally, when the processor executes the multiple created sending threads to send the processed message to the client in parallel, the method specifically includes:
reading the processed messages in the sending queue in parallel through the plurality of sending threads, and writing the messages into a connection writing buffer area;
and sending the message in the connection writing buffer to the client through the plurality of sending threads.
Optionally, the computer program, when executed by the processor, further performs:
and detecting in real time, through the plurality of read request threads, whether the connection write buffer is full, and when it is full, notifying the plurality of sending threads to stop writing to the connection write buffer.
According to a third aspect of embodiments of the present invention, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method steps of:
monitoring, reading and processing connection requests sent by a client in parallel through a plurality of created receiving threads, a plurality of read request threads and a plurality of task processing threads, and sending the processed messages to the client in parallel through a plurality of created sending threads;
storing the monitored connection requests into connection queues for the plurality of read request threads to read; the queue number of the connection queue used is: mod (the value of the source socket descriptor of the connection request, the number of read request threads), where mod denotes the remainder operation;
storing the processed messages into sending queues for the plurality of sending threads to read; the queue number of the sending queue used is: mod (the value of the source socket descriptor of the connection request, the number of sending threads).
The method, server and storage medium provided by the embodiments of the present invention subdivide the server's processing into distinct stages and process each stage in parallel with its own threads. The threads do not interfere with one another: each read request thread and sending thread handles only its own connections and responses, no thread needs to wait unnecessarily, and no queue needs to be locked. This greatly increases the server's throughput and prevents needless waiting, thereby improving both the response time and the throughput capacity of the server.
The foregoing is merely an overview of the technical solutions of the embodiments of the present invention. To make these technical means more clearly understood and implementable according to the content of this description, and to make the above and other objects, features and advantages of the embodiments more readily apparent, detailed embodiments are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the embodiments of the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a flowchart of a request processing method according to a first embodiment of the present invention;
fig. 2 is a flowchart of a request processing method according to a second embodiment of the present invention;
fig. 3 is a flowchart of a request processing method according to a third embodiment of the present invention;
FIG. 4 is a schematic diagram of an operating mode of a server architecture according to a fourth embodiment of the present invention;
fig. 5 is a block diagram of a server according to a fifth embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In a first embodiment of the present invention, a request processing method is provided, applied to a server, as shown in fig. 1, the method includes the following steps:
step S101, monitoring, reading and processing connection requests sent by a client side in parallel through a plurality of created receiving threads, a plurality of created reading request threads and a plurality of created task processing threads;
and step S102, the processed message is sent to the client side in parallel through the created multiple sending threads.
In the embodiment of the invention, the server's processing is subdivided into a request receiving stage, a request reading stage, a request processing stage and a response sending stage. Each stage is handled by its own parallel threads, and adjacent stages are connected to one another without interfering, achieving full asynchrony with no waiting and no waste, and thereby further improving the response efficiency of connections.
Specifically, in the embodiment of the present invention, monitoring, reading and processing the connection requests sent by the client in parallel through the created plurality of receiving threads, read request threads and task processing threads specifically includes:
monitoring connection requests of a client side in parallel through the plurality of receiving threads, and storing the monitored connection requests into the connection queue;
reading the connection requests in the connection queue in parallel through the plurality of read request threads, packaging the read connection requests and storing the packaged connection requests into a request queue;
and reading the connection request in the request queue in parallel through the plurality of task processing threads, then performing request processing, and storing the message obtained after the request processing into the sending queue.
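The three steps above, together with the sending stage, can be sketched as a minimal runnable pipeline, under the assumption of simple Python worker threads and FIFO queues (all names, and uppercasing as a stand-in for real request processing, are illustrative only):

```python
import queue
import threading

NUM_READERS = NUM_SENDERS = 2   # per the patent, these two counts are equal

conn_queues = [queue.Queue() for _ in range(NUM_READERS)]
request_queue = queue.Queue()   # single request queue, as in this embodiment
send_queues = [queue.Queue() for _ in range(NUM_SENDERS)]
responses = []                  # stand-in for sockets written back to clients

def receive(fd, payload):
    """Receiving stage: route the monitored request to the connection
    queue owned by read request thread (fd mod NUM_READERS)."""
    conn_queues[fd % NUM_READERS].put((fd, payload))

def read_request(idx):
    """Read request stage: only thread idx ever reads conn_queues[idx],
    so no locking beyond the queue's own is needed."""
    fd, payload = conn_queues[idx].get()
    request_queue.put((fd, payload))            # package into the request queue

def process_task():
    """Request processing stage: handle the request and shard the result
    into the sending queue numbered fd mod NUM_SENDERS."""
    fd, payload = request_queue.get()
    send_queues[fd % NUM_SENDERS].put((fd, payload.upper()))

def send(idx):
    """Response sending stage: thread idx drains only send_queues[idx]."""
    fd, result = send_queues[idx].get()
    responses.append((fd, result))

receive(5, "ping")        # fd 5 -> reader queue 5 % 2 = 1, sender queue 1
for worker, arg in [(read_request, 5 % NUM_READERS),
                    (process_task, None),
                    (send, 5 % NUM_SENDERS)]:
    t = threading.Thread(target=worker, args=() if arg is None else (arg,))
    t.start()
    t.join()
```

The stages run here one after another only to keep the example deterministic; in the architecture described, each stage's threads run concurrently and block on their own queues.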
In the embodiment of the invention, there are multiple connection queues and multiple sending queues. Specifically, the queue number of the connection queue into which a monitored connection request is stored is: mod (the value of the source socket descriptor of the connection request, the number of read request threads), where mod denotes the remainder operation;
the queue number of the sending queue into which the message obtained after request processing is stored is: mod (the value of the source socket descriptor of the connection request, the number of sending threads).
In the embodiment of the invention, the number of read request threads is the same as the number of sending threads. Moreover, the plurality of read request threads correspond one to one to the plurality of connection queues, and the plurality of sending threads correspond one to one to the plurality of sending queues.
According to the embodiment of the invention, the queue into which information is stored is determined by taking the remainder of the value of the connection request's source socket descriptor modulo the number of threads, so the queues need not be locked.
In short, this embodiment subdivides every link of the server's processing, and each read request thread and sending thread handles only its own connections and responses without locking. This greatly increases the server's throughput, prevents unnecessary waiting, and makes maximum use of computing resources, thereby improving the server's response time and throughput capacity; as a general model for server design, it has good guiding significance.
In a second embodiment of the present invention, a request processing method is provided, which is applied to a server, and as shown in fig. 2, the method includes the following steps:
step S201, monitoring connection requests of a client side in parallel through the plurality of receiving threads, and storing the monitored connection requests into the connection queue; wherein, the queue number of the stored connection queue is: mod (value of source socket descriptor of the connection request, number of read request threads);
step S202, reading connection requests in the connection queue in parallel through the plurality of read request threads, packaging the read connection requests and storing the packaged connection requests into a request queue; the queue number of the stored request queue is: mod (value of source socket descriptor of the connection request, number of task processing threads);
step S203, reading the connection request in the request queue in parallel through the plurality of task processing threads, then performing request processing, and storing the message obtained after the request processing into the sending queue; the queue number of the stored sending queue is: mod (value of source socket descriptor of the connection request, number of sending threads);
and step S204, the processed message is sent to the client side in parallel through the created multiple sending threads.
Compared with the first embodiment, the difference is that there are multiple request queues, which avoids the loss of processing efficiency that occurs when a single request queue is used and the task processing threads must take requests from it under mutual exclusion. The connection response efficiency is therefore further improved.
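Sharding the request queues by descriptor, so that each task processing thread owns exactly one queue, can be sketched as follows (names are illustrative):

```python
import queue

NUM_TASK_THREADS = 3
request_queues = [queue.Queue() for _ in range(NUM_TASK_THREADS)]

def enqueue_request(source_fd, packet):
    """A read request thread shards the packaged request by descriptor,
    so each task processing thread drains only its own request queue and
    never contends with its peers on a shared mutex."""
    request_queues[source_fd % NUM_TASK_THREADS].put(packet)

enqueue_request(10, "req-a")    # 10 mod 3 = 1
enqueue_request(4, "req-b")     # 4 mod 3 = 1 -> same queue, order preserved
enqueue_request(5, "req-c")     # 5 mod 3 = 2
```

Note that because the shard is a pure function of the descriptor, all requests from one connection reach the same task processing thread in arrival order.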
In a third embodiment of the present invention, a request processing method is provided, which is applied to a server, and as shown in fig. 3, the method includes the following steps:
step S301, monitoring connection requests of clients in parallel through the plurality of receiving threads, and storing the monitored connection requests into the connection queue; wherein, the queue number of the stored connection queue is: mod (value of source socket descriptor of the connection request, number of read request threads);
step S302, reading connection requests in the connection queue in parallel through the plurality of read request threads, packaging the read connection requests and storing the packaged connection requests into a request queue; the queue number of the stored request queue is: mod (value of source socket descriptor of the connection request, number of task processing threads);
step S303, reading the connection request in the request queue in parallel through the plurality of task processing threads, then performing request processing, and storing the message obtained after the request processing into the sending queue; the queue number of the stored sending queue is: mod (value of source socket descriptor of the connection request, number of sending threads);
step S304, reading the processed messages in the sending queue in parallel through the plurality of sending threads, and writing the messages into a connection writing buffer area;
step S305, sending the message in the connection write buffer to the client through the plurality of sending threads.
In the embodiment of the invention, the plurality of read request threads also detect in real time whether the connection write buffer is full, and when it is full, notify the plurality of sending threads to stop writing to the connection write buffer.
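The buffer-full notification between the read request threads and the sending threads might look like the following sketch. The flag-based protocol, class and method names are our assumption, not the patent's:

```python
import threading

class ConnWriteBuffer:
    """Fixed-capacity per-connection write buffer. Sending threads write
    while the 'writable' event is set; a read request thread polls
    check_full() and clears the event to tell senders to stop writing."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.messages = []
        self.writable = threading.Event()
        self.writable.set()

    def write(self, msg):
        """Called by a sending thread; refuses the write once notified."""
        if not self.writable.is_set():
            return False
        self.messages.append(msg)
        return True

    def check_full(self):
        """Called by a read request thread in its polling loop."""
        if len(self.messages) >= self.capacity:
            self.writable.clear()   # notify senders: stop writing
        return not self.writable.is_set()

buf = ConnWriteBuffer(capacity=2)
buf.write("resp-1")
buf.write("resp-2")
full = buf.check_full()          # read request thread detects the full buffer
blocked = buf.write("resp-3")    # a sending thread is now refused
```

A production buffer would also re-set the event as the buffer drains so senders can resume; that path is omitted here for brevity.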
Compared with the second embodiment, the embodiment of the invention enables the read request threads and the sending threads to communicate with each other when the buffer is full, avoiding buffer overflow and providing important support for normal processing of requests.
In a fourth embodiment of the present invention, a request processing method is provided, which is applied to a server, and this embodiment explains implementation procedures of the present invention in more detail through a specific application example, so as to better support implementation procedures of the first, second, and third embodiments. It should be noted that the technical details of the embodiment are used for explaining the present invention, but not for limiting the present invention only.
The core idea of the request processing method of the embodiment of the invention is: each processing stage of the server is subdivided, each stage is handled by a different thread group, the number of threads in each group can be allocated according to that stage's computational load, and the stages do not affect one another yet remain connected, so that the server architecture is fully asynchronous, with no waiting and no waste.
As shown in fig. 4, which is a schematic diagram of a working mode of a server architecture, in particular, in this embodiment, a server is divided into a request receiving stage, a read request stage, a request processing stage, and a response sending stage.
The server creates a certain number of parallel threads for each stage, specifically: the receiving stage includes a plurality of parallel receiving threads, the read request stage includes a plurality of parallel read request threads, the request processing stage includes a plurality of parallel task processing threads, and the response sending stage includes a plurality of parallel sending threads. The number of read request threads is equal to the number of sending threads.
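Creating a tunable thread group per stage can be sketched as follows (a simplified illustration; the helper and its names are ours):

```python
import threading

def make_stage(name, count, worker):
    """Spawn `count` parallel threads for one pipeline stage; the count
    per stage can be tuned to that stage's computational load."""
    threads = [threading.Thread(name=f"{name}-{i}", target=worker,
                                args=(i,), daemon=True)
               for i in range(count)]
    for t in threads:
        t.start()
    return threads

started = []
lock = threading.Lock()

def worker(idx):
    # Each stage thread would normally loop on its own queue; here we
    # just record that thread idx ran.
    with lock:
        started.append(idx)

group = make_stage("reader", 4, worker)
for t in group:
    t.join()
```

In the real architecture each worker would be an infinite loop blocking on its dedicated queue rather than a one-shot function.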
Meanwhile, a number of queues are also maintained in the server, namely connection queues, request queues and sending queues; there are multiple connection queues and multiple sending queues, while the number of request queues may be one or more.
In this embodiment, each read request thread corresponds to one connection queue uniquely, and each send thread corresponds to one send queue uniquely.
It should be noted that fig. 4 shows the case of a single request queue and does not depict the connection queues or sending queues. Although the receiving threads and read request threads are drawn as directly connected, they do not communicate directly: a connection queue sits between them, and the queue number used is Fd mod n, where Fd is the value of the source socket descriptor of the connection request and n is the number of read request threads. Similarly, a sending queue sits between the task processing threads and the sending threads.
The following explains an implementation process of the request processing method provided in the embodiment of the present invention, specifically:
In this embodiment, the server creates an appropriate number of receiving threads responsible for monitoring requests from clients. When a client issues a request, a receiving thread creates a structure describing that client connection and stores it into a connection queue. There are multiple connection queues, and the queue number used for each connection is mod (the value of the source socket descriptor of the connection request, the number of read request threads), so the connection queues need not be locked.
In the embodiment of the invention, each read request thread has its own dedicated connection queue; it reads all the requests on all connections in that queue, packages them into packets, and stores the packets into a request queue as input for task processing. Specifically, when there is a single request queue, the packet is stored into it directly; when there are multiple request queues, the queue number used is mod (the value of the source socket descriptor of the connection request, the number of task processing threads), so the request queues need not be locked either.
In the embodiment of the invention, when there is one request queue, the task processing threads take requests from it under mutual exclusion; when there are multiple request queues, each task processing thread corresponds to one request queue and takes requests from its own queue, without mutually exclusive access to a shared queue.
In the embodiment of the invention, the task processing threads process requests concurrently; once a request is fully processed, the result is packaged and stored into the corresponding sending queue according to the request's source socket descriptor. The queue number used is mod (the value of the source socket descriptor of the connection request, the number of sending threads), again so that the sending queues need not be locked.
In the embodiment of the invention, each sending thread corresponds to one sending queue, and the sending threads take the processing result from the respective sending queue and send the processing result to the client.
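A sending thread that drains only its own queue might be sketched like this (the `None` shutdown sentinel and the names are our illustration, not part of the patent):

```python
import queue
import threading

send_queue = queue.Queue()       # this sender's dedicated queue
sent = []

def sender_loop(q, deliver):
    """One sending thread: take processing results only from its own
    queue and deliver them; a None item is a hypothetical stop signal."""
    while True:
        item = q.get()
        if item is None:
            break
        fd, result = item
        deliver(fd, result)

send_queue.put((5, "PONG"))
send_queue.put(None)
t = threading.Thread(target=sender_loop,
                     args=(send_queue, lambda fd, r: sent.append((fd, r))))
t.start()
t.join()
```

Because no other thread ever reads this queue, the only synchronization the sender needs is the queue's own blocking `get()`.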
In the embodiment of the invention, after a sending thread takes a processing result from its sending queue, it stores the result into the connection write buffer; a read request thread detects whether the connection write buffer is full and, based on the detection result, informs the sending thread whether the buffer can still be written.
In conclusion, in the embodiment of the invention, a connection never waits for request processing: client requests are read continuously, a response need not be sent to the client the moment task processing finishes, and a separate sending thread delivers it. The read request threads and sending threads need no mutually exclusive access to the connection queues and sending queues, since each thread has its own corresponding queue; each task processing thread may likewise have a corresponding request queue, with different task processing threads processing requests in parallel.
Therefore, by subdividing every link of the server's processing so that the receiving and sending threads handle only their own connections and responses without locking, the embodiment of the invention can greatly increase the server's throughput, prevent unnecessary waiting, and make maximum use of computing resources, thereby improving the server's response time and throughput capacity; as a general model for server design, it has good guiding significance.
In a fifth embodiment of the present invention, there is provided a server, as shown in fig. 5, including: a memory 510, a processor 520 and a computer program stored on the memory 510 and executable on the processor 520, the computer program realizing the following method steps when executed by the processor 520:
step 1, monitoring, reading and processing connection requests sent by a client in parallel through a plurality of created receiving threads, a plurality of read request threads and a plurality of task processing threads;
step 2, sending the processed messages to the client in parallel through the created plurality of sending threads;
storing the monitored connection requests into connection queues for the plurality of read request threads to read; the queue number of the connection queue used is: mod (the value of the source socket descriptor of the connection request, the number of read request threads), where mod denotes the remainder operation;
storing the processed messages into sending queues for the plurality of sending threads to read; the queue number of the sending queue used is: mod (the value of the source socket descriptor of the connection request, the number of sending threads).
In the embodiment of the invention, the number of read request threads is the same as the number of sending threads; the plurality of read request threads correspond one to one to the plurality of connection queues; the plurality of sending threads correspond one to one to the plurality of sending queues.
Optionally, in this embodiment of the present invention, when the processor 520 executes the step of monitoring, reading and processing, in parallel, the connection requests sent by the client through the created plurality of receiving threads, plurality of read request threads and plurality of task processing threads, the step specifically includes:
monitoring connection requests from the client in parallel through the plurality of receiving threads, and storing the monitored connection requests into the connection queue;
reading the connection requests in the connection queue in parallel through the plurality of read request threads, packaging the read connection requests, and storing the packaged connection requests into a request queue;
and reading the connection requests in the request queue in parallel through the plurality of task processing threads, performing request processing, and storing the messages obtained after the request processing into the sending queue.
When there are a plurality of request queues, the queue number of the request queue into which a request is stored is: mod (the value of the source socket descriptor of the connection request, the number of task processing threads), where mod is the remainder operation.
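The three pipeline stages above can be sketched as follows (an illustrative Python sketch; the names and the placeholder "processing" are ours, not the patent's). Each stage owns its queues and routes work downstream by mod(socket descriptor, thread count), so no thread ever needs exclusive access to another thread's queue:

```python
import queue
import threading

N_READ, N_TASK, N_SEND = 2, 3, 2
conn_qs = [queue.Queue() for _ in range(N_READ)]  # one connection queue per read thread
req_qs = [queue.Queue() for _ in range(N_TASK)]   # one request queue per task thread
send_qs = [queue.Queue() for _ in range(N_SEND)]  # one send queue per send thread
responses = {}                                    # stands in for writing to clients
resp_lock = threading.Lock()
STOP = object()

def read_thread(q):
    # Reads requests from its own connection queue, wraps them, and routes
    # each to a request queue by mod(fd, number of task processing threads).
    while (item := q.get()) is not STOP:
        fd, payload = item
        req_qs[fd % N_TASK].put({"fd": fd, "request": payload})

def task_thread(q):
    # Processes requests, then routes the resulting message to a send queue
    # by mod(fd, number of sending threads).
    while (item := q.get()) is not STOP:
        msg = item["request"].upper()  # placeholder for real request processing
        send_qs[item["fd"] % N_SEND].put((item["fd"], msg))

def send_thread(q):
    # Delivers processed messages; here we just record them per connection.
    while (item := q.get()) is not STOP:
        fd, msg = item
        with resp_lock:
            responses[fd] = msg

threads = (
    [threading.Thread(target=read_thread, args=(q,)) for q in conn_qs]
    + [threading.Thread(target=task_thread, args=(q,)) for q in req_qs]
    + [threading.Thread(target=send_thread, args=(q,)) for q in send_qs]
)
for t in threads:
    t.start()

# "Receiving threads": route each monitored request by mod(fd, N_READ).
for fd, payload in [(5, "ping"), (6, "hello"), (9, "bye")]:
    conn_qs[fd % N_READ].put((fd, payload))

# Stage-by-stage shutdown: a stage only produces after consuming, so once its
# threads join, everything it forwarded is already in the next stage's queues.
for q in conn_qs:
    q.put(STOP)
for t in threads[:N_READ]:
    t.join()
for q in req_qs:
    q.put(STOP)
for t in threads[N_READ:N_READ + N_TASK]:
    t.join()
for q in send_qs:
    q.put(STOP)
for t in threads[N_READ + N_TASK:]:
    t.join()

print(sorted(responses.items()))  # -> [(5, 'PING'), (6, 'HELLO'), (9, 'BYE')]
```

Note that the thread counts of the three stages may differ (here 2, 3 and 2); routing by the socket descriptor still guarantees that all messages of one connection traverse the same sequence of queues.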
Optionally, in this embodiment of the present invention, when the processor 520 executes the step of sending the processed messages to the client in parallel through the created plurality of sending threads, the step specifically includes:
reading the processed messages in the sending queue in parallel through the plurality of sending threads, and writing the messages into a connection write buffer;
and sending the messages in the connection write buffer to the client through the plurality of sending threads.
Optionally, in this embodiment of the present invention, the computer program, when executed by the processor 520, further implements:
and detecting whether the connection write buffer area is full or not in real time through the plurality of read request threads, and informing the plurality of sending threads to stop writing operation to the connection write buffer area when the detection result is yes.
For the specific implementation of this embodiment of the present invention, reference may be made to the first to fourth embodiments; the details are not repeated here.
In a word, the server provided by this embodiment subdivides the different stages of server processing and handles each stage in parallel with dedicated threads that do not affect one another. Each read request thread and sending thread processes only the connections and responses belonging to it, so no unnecessary waiting occurs between threads and no queue locking is needed. This can greatly improve the throughput of the server and improve its response time and throughput capacity; the design also has good guiding significance as a general model for servers.
In a sixth embodiment of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the request processing method according to the first, second, third or fourth embodiment.
Since the first to fourth embodiments have already described the request processing in detail, it is not repeated here.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In short, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (13)
1. A request processing method is applied to a server side and comprises the following steps:
monitoring, reading and processing connection requests sent by a client side in parallel through a plurality of created receiving threads, a plurality of read request threads and a plurality of task processing threads, and sending processed messages to the client side in parallel through a plurality of created sending threads;
the monitored connection requests are stored into a connection queue for the multiple read request threads to read information; the connection queue is multiple, and the queue number of the connection queue into which the connection request is stored is: mod (the value of the source socket descriptor of the connection request, the number of read request threads), where mod is the remainder operation;
storing the processed messages into a sending queue for the plurality of sending threads to read information; the number of the sending queues is multiple, and the queue number of the sending queue into which the message is stored is: mod (the value of the source socket descriptor of the connection request, the number of sending threads), where mod is the remainder operation.
2. The method of claim 1, wherein the number of read request threads is the same as the number of send threads;
and/or the plurality of read request threads correspond to the plurality of connection queues one by one; the plurality of sending threads correspond to the plurality of sending queues one to one.
3. The method of claim 1, wherein the monitoring, reading and processing, in parallel, of the connection requests sent by the client through the created multiple receiving threads, multiple read request threads and multiple task processing threads comprises the following steps:
monitoring connection requests of a client side in parallel through the plurality of receiving threads, and storing the monitored connection requests into the connection queue;
reading the connection requests in the connection queue in parallel through the plurality of read request threads, packaging the read connection requests and storing the packaged connection requests into a request queue;
and reading the connection request in the request queue in parallel through the plurality of task processing threads, then performing request processing, and storing the message obtained after the request processing into the sending queue.
4. The method of claim 3, wherein the request queue is one or more;
when the request queue is multiple, the queue number of the stored request queue is: mod (the value of the source socket descriptor of the connection request, the number of task processing threads), where mod is the remainder operation.
5. The method according to any one of claims 1 to 4, wherein the sending the processed message to the client side in parallel through the created multiple sending threads specifically comprises:
reading the processed messages in the sending queue in parallel through the plurality of sending threads, and writing the messages into a connection writing buffer area;
and sending the message in the connection writing buffer to the client through the plurality of sending threads.
6. The method of claim 5, wherein the method further comprises:
and detecting whether the connection write buffer area is full or not in real time through the plurality of read request threads, and informing the plurality of sending threads to stop writing operation to the connection write buffer area when the detection result is yes.
7. A server, comprising: memory, a processor and a computer program stored on the memory and executable on the processor, which computer program, when being executed by the processor, realizes the following method steps:
monitoring, reading and processing connection requests sent by a client side in parallel through a plurality of created receiving threads, a plurality of read request threads and a plurality of task processing threads, and sending processed messages to the client side in parallel through a plurality of created sending threads;
the monitored connection requests are stored in a connection queue for the multiple read request threads to read information; the connection queue is multiple, and the queue number of the connection queue into which the connection request is stored is: mod (the value of the source socket descriptor of the connection request, the number of read request threads), where mod is the remainder operation;
storing the processed messages into a sending queue for the plurality of sending threads to read information; the number of the sending queues is multiple, and the queue number of the sending queue into which the message is stored is: mod (the value of the source socket descriptor of the connection request, the number of sending threads), where mod is the remainder operation.
8. The server according to claim 7, wherein the number of the read request threads is the same as the number of the send threads;
and/or the plurality of read request threads correspond to the plurality of connection queues one to one; the plurality of sending threads correspond to the plurality of sending queues one to one.
9. The server according to claim 7, wherein the processor, when executing the step of monitoring, reading and processing, in parallel, the connection requests sent by the client through the created multiple receiving threads, multiple read request threads and multiple task processing threads, specifically includes:
monitoring connection requests of a client side in parallel through the plurality of receiving threads, and storing the monitored connection requests into the connection queue;
reading the connection requests in the connection queue in parallel through the plurality of read request threads, packaging the read connection requests and storing the packaged connection requests into a request queue;
and reading the connection request in the request queue in parallel through the plurality of task processing threads, then performing request processing, and storing the message obtained after the request processing into the sending queue.
10. The server of claim 9, wherein the request queue is one or more; when the request queue is multiple, the queue number of the stored request queue is: mod (the value of the source socket descriptor of the connection request, the number of task processing threads), where mod is the remainder operation.
11. The server according to any one of claims 7 to 10, wherein the processor, when executing the parallel transmission of the processed message to the client by the created multiple transmission threads, specifically includes:
reading the processed messages in the sending queue in parallel through the plurality of sending threads, and writing the messages into a connection writing buffer area;
and sending the message in the connection writing buffer to the client through the plurality of sending threads.
12. The server of claim 11, wherein the computer program, when executed by the processor, further comprises:
and detecting whether the connection write buffer area is full or not in real time through the plurality of read request threads, and informing the plurality of sending threads to stop writing operation to the connection write buffer area when the detection result is yes.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the request processing method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810460711.5A CN108647104B (en) | 2018-05-15 | 2018-05-15 | Request processing method, server and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810460711.5A CN108647104B (en) | 2018-05-15 | 2018-05-15 | Request processing method, server and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108647104A CN108647104A (en) | 2018-10-12 |
CN108647104B true CN108647104B (en) | 2022-05-31 |
Family
ID=63755616
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810460711.5A Active CN108647104B (en) | 2018-05-15 | 2018-05-15 | Request processing method, server and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108647104B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109495303A (en) * | 2018-11-19 | 2019-03-19 | 广州开信通讯系统有限公司 | Obtain the method, device network managing device and system, electronic device and storage medium of equipment parameter information |
CN109492016A (en) | 2018-11-19 | 2019-03-19 | 中国银联股份有限公司 | A kind of exchange settlement method and device |
CN109992433B (en) * | 2019-04-11 | 2021-06-29 | 苏州浪潮智能科技有限公司 | A distributed tgt communication optimization method, device, equipment and storage medium |
CN110532088B (en) * | 2019-07-15 | 2022-09-23 | 金蝶汽车网络科技有限公司 | Connection processing method and device, computer equipment and storage medium |
CN110493311B (en) * | 2019-07-17 | 2022-04-19 | 视联动力信息技术股份有限公司 | Service processing method and device |
CN110851246A (en) * | 2019-09-30 | 2020-02-28 | 天阳宏业科技股份有限公司 | Batch task processing method, device and system and storage medium |
CN113014528B (en) * | 2019-12-19 | 2022-12-09 | 厦门网宿有限公司 | Message processing method, processing unit and virtual private network server |
CN111343239B (en) * | 2020-02-10 | 2022-11-04 | 中国银联股份有限公司 | Communication request processing method, communication request processing device and transaction system |
CN111737030A (en) * | 2020-06-24 | 2020-10-02 | 广东浪潮大数据研究有限公司 | A control instruction processing method, apparatus, device and computer storage medium |
CN112380028A (en) * | 2020-10-26 | 2021-02-19 | 上汽通用五菱汽车股份有限公司 | Asynchronous non-blocking response type message processing method |
CN112749028B (en) * | 2021-01-11 | 2024-06-07 | 科大讯飞股份有限公司 | Network traffic processing method, related equipment and readable storage medium |
CN113051243A (en) * | 2021-03-31 | 2021-06-29 | 上海阵量智能科技有限公司 | Log processing method, device, system, chip, equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101217464A (en) * | 2007-12-28 | 2008-07-09 | 北京大学 | A transmission method of UDP data packets |
CN101741746A (en) * | 2009-12-11 | 2010-06-16 | 四川长虹电器股份有限公司 | Method for realizing communication between two-way CAS gateway and user terminal based on IOCP |
CN101982955A (en) * | 2010-11-19 | 2011-03-02 | 深圳华大基因科技有限公司 | High-performance file transmission system and method thereof |
CN102916953A (en) * | 2012-10-12 | 2013-02-06 | 青岛海信传媒网络技术有限公司 | Method and device for realizing concurrent service on basis of TCP (transmission control protocol) connection |
CN103533025A (en) * | 2013-09-18 | 2014-01-22 | 北京航空航天大学 | Method for reducing lock contention during TCP (transmission control protocol) connection building on multi-core system |
CN103677853A (en) * | 2013-12-30 | 2014-03-26 | 哈尔滨工业大学 | Method for achieving HIT-TENA middleware in DM642 type DSP |
CN106790022A (en) * | 2016-12-14 | 2017-05-31 | 福建天泉教育科技有限公司 | Communication means and its system based on many inquiry threads |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7127520B2 (en) * | 2002-06-28 | 2006-10-24 | Streamserve | Method and system for transforming input data streams |
- 2018-05-15: CN CN201810460711.5A patent CN108647104B (Active)
Also Published As
Publication number | Publication date |
---|---|
CN108647104A (en) | 2018-10-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108647104B (en) | Request processing method, server and computer readable storage medium | |
US8112559B2 (en) | Increasing available FIFO space to prevent messaging queue deadlocks in a DMA environment | |
US8250164B2 (en) | Query performance data on parallel computer system having compute nodes | |
US8478926B1 (en) | Co-processing acceleration method, apparatus, and system | |
CN110677277B (en) | Data processing method, device, server and computer readable storage medium | |
CN103942178A (en) | Communication method between real-time operating system and non-real-time operating system on multi-core processor | |
US9535756B2 (en) | Latency-hiding context management for concurrent distributed tasks in a distributed system | |
AU2020214661B2 (en) | Handling an input/output store instruction | |
US20110107344A1 (en) | Multi-core apparatus and load balancing method thereof | |
AU2020213829B2 (en) | Handling an input/output store instruction | |
US8631086B2 (en) | Preventing messaging queue deadlocks in a DMA environment | |
US9507637B1 (en) | Computer platform where tasks can optionally share per task resources | |
CN111831408A (en) | Asynchronous task processing method and device, electronic equipment and medium | |
US20200059427A1 (en) | Integrating a communication bridge into a data processing system | |
WO2020156797A1 (en) | Handling an input/output store instruction | |
CN113885945A (en) | Calculation acceleration method, equipment and medium | |
CN107632890B (en) | Dynamic node distribution method and system in data stream architecture | |
CN104572315A (en) | Inter-subsystem communication method, communication entities and distributed communication system | |
CN110018782B (en) | Data reading/writing method and related device | |
US9509780B2 (en) | Information processing system and control method of information processing system | |
CN111163158B (en) | Data processing method and electronic equipment | |
CN117271165A (en) | Message processing method, device, electronic equipment and readable storage medium | |
WO2015182122A1 (en) | Information-processing device, information-processing system, memory management method, and program-recording medium | |
CN119621368A (en) | Message transmission method, device, equipment and storage medium | |
CN117851333A (en) | Inter-core data communication method of multi-core operating system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||