CN117472597B - Input/output request processing method, system, electronic device and storage medium - Google Patents
- Publication number: CN117472597B
- Application number: CN202311826299.1A
- Authority: CN (China)
- Prior art keywords: request, input, output, processing, task
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5044—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering hardware capabilities
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention provides an input/output request processing method, system, electronic device and storage medium, relating to the technical field of computer data storage. The method comprises the following steps: when a currently received target task request is determined to be an input/output request, judging the data length of the target data corresponding to the target task request; if the data length of the target data meets a preset condition, slicing the target task request and generating a plurality of corresponding input/output request subtasks from the slicing result; based on a preset polling condition, obtaining the subtask processing result corresponding to each input/output request subtask, and after determining that all input/output request subtasks corresponding to the target data have been processed, combining all subtask processing results to obtain the target task processing result of the target task request. The invention improves the input/output throughput of the system and the overall performance of the storage system.
Description
Technical Field
The present invention relates to the field of computer data storage technologies, and in particular, to a method, a system, an electronic device, and a storage medium for processing an input/output request.
Background
Factors influencing the performance of a data storage system fall mainly into two aspects, hardware and software. On the hardware side, data storage performance depends mainly on three physical components: the central processing unit (Central Processing Unit, CPU for short), the memory and the hard disk. On the software side, data storage performance depends mainly on the software system architecture, the Input/Output (IO) processing algorithms, the system communication mechanism and other elements.
With the rapid development of semiconductor technology, the computing capability of the CPU has become increasingly powerful, and the capacities and speeds of memory and hard disks keep growing, yet the iterative update of software has not kept pace with the rapid development of semiconductor hardware. In particular, in computer data storage scenarios with superior disk performance, abundant CPU computing resources, high data throughput and high IO concurrency, the performance of the storage system is reduced due to insufficient processing capability at the software level.
Accordingly, there is a need for a method, a system, an electronic device, and a storage medium for processing an input/output request.
Disclosure of Invention
Aiming at the problems existing in the prior art, the invention provides an input/output request processing method, an input/output request processing system, electronic equipment and a storage medium.
The invention provides a method for processing an input/output request, which comprises the following steps:
when the currently received target task request is determined to be an input/output request, judging the data length of target data corresponding to the target task request, if the data length of the target data is judged to meet the preset condition, slicing the target task request, and generating a plurality of corresponding input/output request subtasks according to slicing results;
based on a preset polling condition, acquiring subtask processing results corresponding to the input and output request subtasks, and combining all the subtask processing results after determining that all the input and output request subtasks corresponding to the target data are processed completely, so as to acquire target task processing results of the target task request.
According to the method for processing the input/output request provided by the invention, before judging the data length of the target data corresponding to the target task request when the currently received target task request is determined to be the input/output request, the method further comprises:
Receiving the target task request sent by a client;
judging the task event type of the target task request, and if the task event type of the target task request is a login task event, monitoring the data volume of the target data corresponding to the target task request to obtain a target data real-time monitoring result;
judging whether the real-time monitoring result of the target data is larger than a first preset threshold value, and if so, determining that the target task request is the input/output request;
and if the event receiving mode at the current moment is a passive event receiving notification mode, switching the passive event receiving notification mode into an active polling event mode.
According to the method for processing the input/output request provided by the invention, after judging whether the real-time monitoring result of the target data is greater than the first preset threshold, the method further comprises:
if the real-time monitoring result of the target data is smaller than or equal to the first preset threshold, judging whether the event receiving mode at the current moment is the active polling event mode, and if so, switching the active polling event mode into the passive event receiving notification mode, wherein the passive event receiving notification mode is used for directly processing the target task request through a task processor and returning the processing result to the client;
And if the event receiving mode is not the active polling event mode, determining that the event receiving mode is the passive event receiving notification mode, and receiving a next task request sent by the client based on the passive event receiving notification mode.
According to the method for processing the input/output request provided by the invention, after the data volume of the target data corresponding to the target task request is monitored and the real-time monitoring result of the target data is obtained, the method further comprises the following steps:
judging the task event type of the next task request sent by the client, if the task event type of the next task request is a cancellation task event corresponding to the target task request, stopping monitoring the data volume of the target data, and switching the active polling event mode into the passive event receiving notification mode when determining that the event receiving mode at the current moment is the active polling event mode.
According to the method for processing the input/output request provided by the invention, when the currently received target task request is determined to be the input/output request, the data length of the target data corresponding to the target task request is judged, and the method comprises the following steps:
When the target task request is determined to belong to an input/output read/write command type request, judging whether the data length of the target data is larger than a second preset threshold value, and if so, determining that the data length of the target data meets the preset condition.
According to the method for processing the input/output request, the target task request is subjected to slicing processing, and a plurality of corresponding input/output request subtasks are generated according to slicing processing results, and the method comprises the following steps:
constructing a corresponding target task identification number according to the target task request;
based on the second preset threshold, the target task request is subjected to segmentation processing, and a plurality of input and output sub-requests are obtained;
and distributing corresponding sequence codes to the input/output sub-requests according to the segmentation sequence based on the target task identification numbers, and generating a target task linked list according to the input/output sub-requests distributed with the sequence codes according to the sequence codes, wherein the target task linked list comprises the input/output request sub-tasks corresponding to the input/output sub-requests.
According to the input/output request processing method provided by the invention, the preset polling conditions comprise a first polling process and a second polling process;
The obtaining, based on a preset polling condition, a subtask processing result corresponding to each input/output request subtask includes:
after determining that the event receiving mode at the current moment is the active polling event mode, judging whether a new task request sent by a client exists or not through the first polling process;
and if the new task request does not exist, judging whether subtask processing results corresponding to the subtask of each input/output request exist through the second polling process, and if so, acquiring the subtask processing results corresponding to the subtask of each input/output request based on a plurality of second polling processes.
According to the input/output request processing method provided by the invention, the method further comprises the following steps:
if the new task request sent by the client exists, judging the input/output command type of the new task request according to field information in the new task request;
if the new task request is the input/output read/write command type request, judging whether the data length corresponding to the new task request is larger than the second preset threshold value, if so, determining that the data length corresponding to the new task request meets the preset condition, and slicing the data corresponding to the new task request;
And if the new task request is an input/output management command type request, directly carrying out corresponding input/output processing on the new task request, judging whether a new task processing result corresponding to the new task request exists or not through the second polling process, and if so, returning the new task processing result to the client.
According to the method for processing the input/output request provided by the invention, when the target task request is determined to belong to an input/output read/write command type request, whether the data length of the target data is larger than a second preset threshold value is judged, and the method further comprises:
if the data length of the target data is smaller than or equal to the second preset threshold value, directly performing corresponding input and output processing on the target task request, judging whether the target task processing result corresponding to the target task request exists or not through the second polling process, and if so, returning the target task processing result to the client.
According to the method for processing the input/output request provided by the invention, after all the subtasks of the input/output request corresponding to the target data are determined to be processed completely, all the subtask processing results are combined to obtain the target task processing result of the target task request, and the method comprises the following steps:
Based on the second polling process, judging whether all the input/output request subtasks corresponding to the target data are processed completely according to the processing progress state of the target task request, and if so, combining all the subtask processing results to obtain the target task processing result.
According to the method for processing the input/output request provided by the invention, based on the second polling process, according to the processing progress state of the target task request, whether all the input/output request subtasks corresponding to the target data are processed completely is judged, and if all the processing is completed, all the subtask processing results are combined to obtain the target task processing result, including:
in the second polling process, if the initialization process of the input/output request resource corresponding to the target task request is completed, updating the target task request from an initialization processing progress state to an input/output processing state;
when the target task request is in the input/output processing state, processing a plurality of input/output request subtasks corresponding to the target task request, and acquiring input/output secondary processing states corresponding to the input/output request subtasks based on the second polling process;
If the input/output secondary processing states corresponding to the input/output request subtasks are determined to be completely processed, updating the target task request from the input/output processing states to a task completion state, and combining all the subtask processing results to obtain the target task processing result.
According to the input/output request processing method provided by the invention, the method further comprises the following steps:
when the target task request is in the input/output processing state, if the input/output secondary processing state corresponding to the input/output request subtask is marked as an abnormal state or an error state in the second polling process, updating the target task request from the input/output processing state to a retry state;
and processing the plurality of input/output request subtasks corresponding to the target task request again based on the retry state.
According to the method for processing the input/output request provided by the invention, after the target task request is updated from the input/output processing state to the retry state, the method further comprises the following steps:
and locating, according to the input/output secondary processing state marked as the abnormal state or the error state, the input/output request subtask that failed during input/output processing.
According to the input/output request processing method provided by the invention, the input/output secondary processing states comprise an uninitialized state, an executing state, a synchronous cache completion state, a processing state, a re-executing state and a subtask completion state;
the obtaining, based on the second polling procedure, the input/output secondary processing state corresponding to each of the input/output request subtasks includes:
after the target task request is determined to be in the input/output processing state, executing corresponding asynchronous processing operation on each input/output request subtask based on the hardware type of the storage system to be subjected to read/write operation of the target task request, and obtaining the input/output secondary processing state corresponding to each input/output request subtask.
According to the method for processing the input/output request provided by the invention, the hardware type of the storage system to be subjected to read/write operation is requested based on the target task, the corresponding asynchronous processing operation is executed for each of the input/output request subtasks, and the input/output secondary processing state corresponding to each of the input/output request subtasks is obtained, which comprises the following steps:
when the target task request is determined to be the input/output read/write command type request, if the hardware type of the storage system to be subjected to read/write operation is a full flash type, after the input/output request subtask is initiated, updating the input/output request subtask from the executing state to the synchronous cache completion state;
if the hardware type of the storage system to be subjected to read/write operation is a mixed flash type, and the request data corresponding to the input/output request subtask is hit in a system cache, updating the input/output request subtask from the executing state to the synchronous cache completion state;
if the hardware type of the storage system to be subjected to read/write operation is a mixed flash type and the request data corresponding to the input/output request subtask is not hit in a system cache, initiating a data disk read request; updating the executing state into the synchronous cache state after determining, based on the second polling process, that the data disk read request has completed execution, and updating the synchronous cache state into the synchronous cache completion state after the data read by the data disk read request is written into the system cache;
Updating the synchronous cache completion state into the processing state when all the input/output request subtasks are determined to be in the synchronous cache completion state, checking the processing results of the input/output request subtasks, and updating the processing state into the subtask completion state after the checking is determined to be completed and the processing results are normal;
and the determining, according to the input/output secondary processing states corresponding to the input/output request subtasks, that all the input/output request subtasks have been completely processed comprises:
and when all the input/output request subtasks in the target task request are in the subtask completion state, determining that all the input/output request subtasks are completely processed.
According to the input/output request processing method provided by the invention, the method further comprises the following steps:
when the target task request is determined to be an input/output management command type request, after the input/output request subtask is initiated, updating the input/output request subtask from the executing state to the synchronous cache completion state, and initiating a request task for creating the corresponding logical unit number (LUN) metadata.
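As a minimal illustrative sketch of the two-level state machine described in the preceding paragraphs (not the patent's implementation; all class and state names below are assumptions chosen for readability), the task-level and subtask-level states and one simple transition rule might be modeled as follows:

```python
from enum import Enum, auto

class TaskState(Enum):            # first-level states of a target task request
    INIT = auto()                 # initialization processing progress state
    IO_PROCESSING = auto()        # input/output processing state
    RETRY = auto()                # retry state
    DONE = auto()                 # task completion state

class SubtaskState(Enum):         # second-level (secondary) states of an IO request subtask
    UNINITIALIZED = auto()
    EXECUTING = auto()
    SYNC_CACHE_DONE = auto()      # synchronous cache completion state
    PROCESSING = auto()
    RE_EXECUTING = auto()
    SUBTASK_DONE = auto()

def advance_task(task_state, subtask_states):
    """Advance the first-level state from the second-level states (sketch only)."""
    if task_state is TaskState.IO_PROCESSING:
        if all(s is SubtaskState.SUBTASK_DONE for s in subtask_states):
            return TaskState.DONE          # all subtasks finished: merge results
        if any(s is SubtaskState.RE_EXECUTING for s in subtask_states):
            return TaskState.RETRY         # a subtask was marked abnormal/erroneous
    return task_state
```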
According to the method for processing the input/output request provided by the invention, after the target task linked list is generated according to the input/output sub-request after the sequence code is allocated, the method further comprises the following steps:
and deleting all the input and output sub-requests corresponding to the target task request in the target task linked list after the target task request is determined to be completed.
The invention also provides an input/output request processing system, which comprises:
the task processing module is used for judging the data length of target data corresponding to the target task request when the currently received target task request is determined to be an input/output request, slicing the target task request if the data length of the target data meets the preset condition, and generating a plurality of corresponding input/output request subtasks according to slicing results;
and the input/output processing module is used for acquiring subtask processing results corresponding to the input/output request subtasks based on preset polling conditions, and combining all the subtask processing results after determining that all the input/output request subtasks corresponding to the target data are processed completely, so as to acquire target task processing results of the target task request.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the input/output request processing method according to any one of the above when executing the program.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of processing an input-output request as described in any of the above.
According to the input/output request processing method, system, electronic device and storage medium provided by the invention, the data length of the target data corresponding to a received input/output request is judged; if the data length of the target data meets the preset condition, the target task request is sliced to obtain a plurality of input/output request subtasks; and based on the preset polling condition, after all the input/output request subtasks corresponding to the target data are determined to have been processed, the target task processing result of the target task request is obtained. This improves the input/output throughput of the system while also improving the overall performance of the storage system.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an input/output request processing method provided by the invention;
FIG. 2 is a schematic diagram of a switching flow of an event receiving mode according to the present invention;
FIG. 3 is a schematic diagram of a polling process provided by the present invention;
FIG. 4 is a schematic diagram of a two-stage state machine according to the present invention;
FIG. 5 is a schematic diagram of a two-stage state transition process according to the present invention;
FIG. 6 is a schematic diagram illustrating a transition of a two-level state machine according to the present invention;
FIG. 7 is a schematic slice diagram of an input/output request task provided by the present invention;
FIG. 8 is a schematic diagram of an input/output request processing system according to the present invention;
FIG. 9 is a schematic diagram of an overall structure of an I/O request processing system according to the present invention;
FIG. 10 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
With the development of semiconductor technology, the CPU has progressed through the 4-bit, 8-bit, 16-bit and 32-bit processor eras to the current 64-bit processor era; memory development has gone through the first generation SDR (Single Data Rate), the second generation DDR (Double Data Rate), the third generation DDR2, the fourth generation DDR3, and the current fifth generation DDR4; hard disk development has gone through the first generation mechanical hard disk drive (Hard Disk Drive, HDD), the second generation solid state drive (Solid State Drive, SSD), and the current third generation storage interface based on the Non-Volatile Memory Express (NVMe) specification. As the computing capability of the CPU becomes ever more powerful and the capacity and speed of memory and hard disks keep increasing, the iterative update of software has not kept pace with the rapid development of semiconductor hardware; especially in the field of computer data storage, the performance bottleneck of a storage system has gradually shifted from the hardware to the software layer.
In the related art, data communication is mainly performed through event notifications driven by system interrupts. For example, a client sends a command request, the operating system generates an interrupt signal after receiving the command request and then actively notifies the back-end storage system; similarly, after the back-end storage system issues a disk IO operation, it also waits for the event notification that the disk IO has completed. In scenarios with low data throughput and low IO concurrency, the performance of the storage system depends mainly on disk performance. With the development of hardware, the read/write performance of an NVMe disk can reach 3500 MB/s, far superior to the 130 MB/s read/write speed of a conventional HDD, and CPU computing resources and memory capacity in the system are abundant; on the other hand, with the development of internet services, data throughput and IO concurrency have also increased greatly. By contrast, the interrupt resources of the operating system and the storage system, as well as their processing capability, have become bottlenecks that limit storage system performance. In addition, untimely interrupt responses also affect other business and services on the operating system.
Aiming at current scenarios with superior disk performance, abundant CPU computing resources, high data throughput and high IO concurrency, the invention provides a single-machine asynchronous IO management method based on polling (poller) and state machine transitions, thereby improving the IO performance of the back-end storage service and enhancing business stability on the operating system.
Fig. 1 is a flow chart of an input/output request processing method provided by the present invention, and as shown in fig. 1, the present invention provides an input/output request processing method, including:
step 101, when it is determined that the currently received target task request is an input/output request, judging a data length of target data corresponding to the target task request, if it is judged that the data length of the target data meets a preset condition, slicing the target task request, and generating a plurality of corresponding input/output request subtasks according to slicing results.
In the present invention, it is necessary to determine whether or not the currently received target task (task) request is an input-output request. In an embodiment, the interface for receiving the target task request may be monitored in real time by the flow monitor, so as to obtain the data volume of the target task request, and further determine that the target task request is an input/output request according to a comparison result between the data volume monitored in real time and a preset threshold (i.e., a first preset threshold).
Further, for the target task request determined as the input/output request, the data length of the target data corresponding to the target task request is determined, and once the data length of the target data is known, the target task request can be compared with a preset condition. In the present invention, the preset condition may be set to a preset data length (i.e., a second preset threshold) for determining whether the slicing process is required for the target task request. If the judging result shows that the data length of the target data meets the preset condition (the data length is larger than the second preset threshold), the slicing processing can be performed on the target task request.
In the slicing process, a plurality of corresponding input/output request subtasks are generated, each of which represents a data segment in the original target task request and has corresponding input/output request information. Through the process, a large target task request can be split into a plurality of smaller input/output request subtasks so as to be processed and executed more efficiently in the system, and the parallelism, response speed and resource utilization rate of IO processing in the system can be improved.
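As an illustrative sketch only, splitting a large target task request into fixed-size sub-requests carrying the parent task identifier and a sequence code might look like the following; the names `IoSubtask` and `slice_request` and the 4 MB unit are assumptions, not the patent's code:

```python
from dataclasses import dataclass

@dataclass
class IoSubtask:
    task_id: str      # identifier of the parent target task request
    seq: int          # sequence code assigned in slicing order
    offset: int       # byte offset of this slice within the target data
    length: int       # slice length in bytes

def slice_request(task_id: str, total_len: int, unit: int) -> list[IoSubtask]:
    """Split a large IO request into sub-requests of at most `unit` bytes."""
    subtasks, offset, seq = [], 0, 0
    while offset < total_len:
        length = min(unit, total_len - offset)
        subtasks.append(IoSubtask(task_id, seq, offset, length))
        offset += length
        seq += 1
    return subtasks

# e.g. a 40 MB request with a 4 MB unit yields 10 subtasks
subs = slice_request("task-1", 40 * 1024 * 1024, 4 * 1024 * 1024)
assert len(subs) == 10
```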
Step 102, based on a preset polling condition, obtaining subtask processing results corresponding to the input/output request subtasks, and after determining that all the input/output request subtasks corresponding to the target data are processed completely, combining all the subtask processing results to obtain target task processing results of the target task request.
In the invention, preset polling conditions are used to determine when and how to acquire the processing results of the subtasks, including polling time intervals and trigger conditions. The polling process comprises a big polling process (namely the first polling process) and a small polling process (namely the second polling process). A sock_poller thread is started to receive new task requests sent by the client and the corresponding IO request processing results; each big polling process polls whether a new task request has arrived, and the small polling process then polls whether the IO request processing results have arrived. The invention adopts this two-level polling mode, which reduces the occupation of operating-system interrupt resources and improves the overall performance of the operating system.
Further, for each input/output request subtask, according to a preset polling condition, the system periodically queries or checks the processing state of the subtask to obtain a processing result. When the processing results corresponding to the input and output request subtasks are obtained, the system stores or caches the processing results, and waits for subsequent processing and combination. After all the input and output request subtasks corresponding to the target data are determined to be processed completely, the processing results of all the subtasks are combined to obtain the final processing result of the target task request, and the final processing result can be used for reporting, storing, transmitting or other subsequent processing to a user, so that the integrity and accuracy of the tasks are ensured, and meanwhile, the reliability and efficiency of the system are improved.
According to the input/output request processing method provided by the invention, the data length of the target data corresponding to the received input/output request is judged; if the data length of the target data meets the preset condition, the target task request is sliced to obtain a plurality of input/output request subtasks; and based on the preset polling condition, after all the input/output request subtasks corresponding to the target data are determined to have been processed, the target task processing result of the target task request is obtained. This improves the input/output throughput of the system and also improves the overall performance of the storage system.
On the basis of the foregoing embodiment, before the determining, when determining that the currently received target task request is an input/output request, a data length of target data corresponding to the target task request, the method further includes:
receiving the target task request sent by a client;
judging the task event type of the target task request, and if the task event type of the target task request is a login task event, monitoring the data volume of the target data corresponding to the target task request to obtain a target data real-time monitoring result;
judging whether the real-time monitoring result of the target data is larger than a first preset threshold value, and if so, determining that the target task request is the input/output request;
and if the event receiving mode at the current moment is a passive event receiving notification mode, switching the passive event receiving notification mode into an active polling event mode.
In the invention, a first preset threshold needs to be set when the back-end storage system is initialized; it provides the judgment basis for dynamically switching the event (Event) receiving mode. Specifically, in a scenario where system interrupt resources are scarce, CPU computing resources are abundant, disk performance is good and the number of disks is large, the first preset threshold should be set to a smaller value; conversely, in a scenario where system interrupt resources are abundant, CPU computing resources are scarce, disk performance is poor and the number of disks is small, the first preset threshold should be set to a larger value. The first preset threshold can be determined through repeated performance benchmark tests: select the value giving the best performance, keep adjusting the threshold and repeating the tests, and choose the first preset threshold that best matches the environment and yields the best performance. In the invention, the first preset threshold can be a preset data size, and whether a task request is an input/output request is then judged according to the data size corresponding to the task request monitored in real time. In some embodiments, the first preset threshold can be set as a preset flow rate according to actual scenario requirements, so that whether the task request is an input/output request is judged according to the flow rate of the data corresponding to the task request monitored in real time over a period of time.
Further, after receiving the target task request sent by the client, the task event type needs to be judged. If the judging result shows that the task event type of the target task request is a login task event, the flow monitor can be started to monitor the target data of the target task request in real time at the moment so as to obtain the corresponding data size, and then the real-time monitoring result of the target data is obtained. The data amount monitoring result of the target data will be used for subsequent judgment, and if the judgment result shows that the data amount of the target data is greater than the first preset threshold, the system can determine that the target task request is an input/output request.
After determining that the target task request is an input/output request, if the event receiving mode at the current moment is the passive event receiving notification mode, the system needs to switch the event receiving mode to the active polling event mode. The passive event receiving notification mode generally means that the system responds when a task-arrival notification is received, while the active polling event mode means that the system actively queries task states or results.
The invention judges and processes according to the task event type and the data volume of the target data, thereby determining whether the target task request is regarded as an input/output request, and switches the event receiving mode when necessary. The event receiving mode is thus adjusted dynamically to balance the utilization of system interrupt resources and CPU resources, maximizing system performance and reducing IO performance bottlenecks.
On the basis of the foregoing embodiment, after the determining whether the real-time monitoring result of the target data is greater than a first preset threshold, the method further includes:
if the real-time monitoring result of the target data is smaller than or equal to the first preset threshold, judging whether the event receiving mode at the current moment is the active polling event mode, and if so, switching the active polling event mode into the passive event receiving notification mode, wherein the passive event receiving notification mode is used for directly processing the target task request through a task processor and returning the processing result to the client;
and if the event receiving mode is not the active polling event mode, determining that the event receiving mode is the passive event receiving notification mode, and receiving a next task request sent by the client based on the passive event receiving notification mode.
In the invention, if the real-time monitoring result of the target data is smaller than or equal to the first preset threshold, the event receiving mode at the current moment needs to be further judged. If the event receiving mode at the current moment is the active polling event mode, the system needs to switch the active polling event mode into the passive event receiving notification mode; in the active polling event mode the system actively queries task states or results, while in the passive event receiving notification mode the target task request is processed directly by the task processor and the processing result is returned to the client.
In the passive receive event notification mode, the system may process the target task request directly and return the processing result to the client, which means that the processing procedure of the task request does not need to undergo additional polling or query operations. If the event receiving mode at the current moment is not the active polling event mode, the event receiving mode is determined to be a passive event receiving notification mode, and the system can continuously receive the next task request sent by the client based on the passive event receiving notification mode, so that the task demand and the event processing mode can be effectively adapted, and the efficiency and the response capability of the system are improved.
On the basis of the foregoing embodiment, after the monitoring the data size of the target data corresponding to the target task request, and obtaining a real-time monitoring result of the target data, the method further includes:
judging the task event type of the next task request sent by the client, if the task event type of the next task request is a cancellation task event corresponding to the target task request, stopping monitoring the data volume of the target data, and switching the active polling event mode into the passive event receiving notification mode when determining that the event receiving mode at the current moment is the active polling event mode.
In the invention, in the processing process of the input/output request, the system can judge the task event type of the task request sent by the client each time so as to determine the nature or purpose of the task request. At the present moment, if the task event type of the next task request is the cancellation task event corresponding to the target task request, the system needs to stop monitoring the data volume of the target data, which means that the system does not need to continuously monitor and process the data volume of the target data.
Further, if the event receiving mode at the current moment is determined to be the active polling event mode, the system needs to switch the active polling event mode to the passive event receiving notification mode, so as to ensure that the system can switch the event receiving mode in time when the cancellation task event occurs, and the cancellation task event is then processed directly by the task processor.
Fig. 2 is a schematic diagram of a switching flow of an event receiving mode provided in the present invention, and referring to fig. 2, after a back-end storage system is initialized, the event receiving mode defaults to a passive event receiving notification mode, and at this time, a flow monitor defaults to a closed state, and it should be noted that, along with a subsequent periodic input/output request processing, the event receiving mode also controls an open state and a closed state of the flow monitor according to an actually received task request.
Further, after receiving a task request event, it is determined whether it is a login task event; if so, the flow monitor is started. In the invention, for the subsequent input/output request processing, when a task request event is received, if it is a logout task event, the flow monitor stops monitoring the corresponding task request, and at the same time the event receiving mode is restored to the default passive event receiving notification mode (if the current event receiving mode is the active polling event mode), thereby ending the monitoring of the corresponding task request.
Further, upon receiving a login task event, the flow monitor is activated to monitor, in real time, the amount of data (or flow rate) requested by the task at the task interface. If the real-time data monitoring result is larger than the first preset threshold, it is further judged whether the event receiving mode at the current moment is the passive event receiving notification mode; if so, the mode is switched to the active polling event mode, and if not, the next new event continues to be received in the current event receiving mode (namely, the active polling event mode).
If the real-time data monitoring result is smaller than or equal to the first preset threshold, it is further judged whether the event receiving mode at the current moment is the active polling event mode; if so, the mode is switched to the passive event receiving notification mode, otherwise the next new event continues to be received in the current event receiving mode (namely, the passive event receiving notification mode). The event receiving mode is thus switched dynamically according to the real-time data volume, giving full play to the performance of the back-end storage system.
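A minimal sketch of the switching decision described above, assuming the flow monitor reports a byte count and that the mode names are simple strings; all identifiers are illustrative, not the patent's implementation:

```python
PASSIVE_NOTIFY, ACTIVE_POLL = "passive_event_notification", "active_polling"

def next_event_mode(current_mode: str, monitored_bytes: int, threshold: int) -> str:
    """Decide the event receiving mode from the flow monitor reading (sketch of the FIG. 2 flow)."""
    if monitored_bytes > threshold:
        # heavy IO: prefer actively polling for events
        return ACTIVE_POLL if current_mode == PASSIVE_NOTIFY else current_mode
    # light IO: fall back to passively receiving event notifications
    return PASSIVE_NOTIFY if current_mode == ACTIVE_POLL else current_mode
```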
On the basis of the foregoing embodiment, when determining that the currently received target task request is an input/output request, determining a data length of target data corresponding to the target task request includes:
when the target task request is determined to belong to an input/output read/write command type request, judging whether the data length of the target data is larger than a second preset threshold value, and if so, determining that the data length of the target data meets the preset condition.
In the invention, a second preset threshold needs to be set when the back-end storage system is initialized. The second preset threshold provides the judgment basis for slicing IO task requests of the input/output read/write command type, so as to improve the concurrency of IO processing, reduce IO processing delay, and give full play to the performance of the back-end storage system. The second preset threshold should be consistent with the size of the minimum data management unit of the back-end storage system; the minimum data management unit, analogous to a disk sector, is the smallest unit of data operation. For example, suppose the minimum data management unit of the back-end storage system is 4MB and an IO task request needs to read 40MB of target data. Without slicing, the disk read/write operations have to be performed sequentially at least 10 times, 4MB at a time. With slicing, the IO request task that needs to read 40MB of target data is split into 10 IO request subtasks of 4MB each. It should be noted that, when splitting a task request, the disk space of the minimum data management unit must also be considered: if the last subtask of the previous IO request task already occupies part of a unit, for example 2MB, then for the 40MB of target data the first subtask of the split data first fills the space not occupied by that last subtask, and the remaining subtasks are then filled sequentially in 4MB units, so the 40MB IO request task is split into 11 subtasks. For the purpose of explanation, the invention takes the case of 10 IO request subtasks, each corresponding to one data management unit, so that 10 disk operations are executed concurrently and IO performance can theoretically be improved by a factor of 10.
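The subtask counts in the worked example above can be reproduced with a small helper. This is only a sketch under the stated assumptions (a 4MB minimum data management unit, with the first slice filling any space left in a partially occupied unit); the function and parameter names are hypothetical:

```python
def slice_count(total_len: int, unit: int, used_in_last_unit: int = 0) -> int:
    """Number of sub-requests for a request of `total_len` bytes, where the first
    slice fills whatever space remains in a partially used data management unit."""
    remaining = total_len
    count = 0
    free_in_unit = unit - used_in_last_unit          # space left in the current unit
    if 0 < free_in_unit < unit and remaining > 0:
        count += 1
        remaining -= min(free_in_unit, remaining)
    count += -(-remaining // unit)                   # ceiling division for the rest
    return count

MB = 1024 * 1024
print(slice_count(40 * MB, 4 * MB, used_in_last_unit=0))       # 10 subtasks
print(slice_count(40 * MB, 4 * MB, used_in_last_unit=2 * MB))  # 11 subtasks
```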
On the basis of the foregoing embodiment, the slicing processing is performed on the target task request, and a plurality of corresponding input/output request subtasks are generated according to a slicing processing result, including:
constructing a corresponding target task identification number according to the target task request;
based on the second preset threshold, the target task request is subjected to segmentation processing, and a plurality of input and output sub-requests are obtained;
and distributing corresponding sequence codes to the input/output sub-requests according to the segmentation sequence based on the target task identification numbers, and generating a target task linked list according to the input/output sub-requests distributed with the sequence codes according to the sequence codes, wherein the target task linked list comprises the input/output request sub-tasks corresponding to the input/output sub-requests.
According to the target task request, a unique target task identification number needs to be constructed to identify the task request. Then, based on the second preset threshold, the target task request is split, dividing the large task into a plurality of input/output sub-requests for easier batch processing and management. Meanwhile, each input/output sub-request is assigned a corresponding sequence code according to the splitting order, so that the sub-requests can be executed in the correct order in subsequent processing.
Further, according to the assigned sequence code, a target task linked list is generated, wherein the target task linked list comprises input/output request subtasks corresponding to the input/output subtasks to form a task execution sequence, and the subtasks are ensured to be executed according to the correct sequence.
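A minimal sketch of such a target task linked list, assuming a plain singly linked structure in place of whatever list implementation the patent's storage system actually uses; `TaskNode` and its fields are illustrative names:

```python
class TaskNode:
    """One entry of the target task linked list: a sub-request plus its sequence code."""
    def __init__(self, task_id: str, seq: int, sub_request):
        self.task_id = task_id
        self.seq = seq                # sequence code assigned in slicing order
        self.sub_request = sub_request
        self.result = None            # subtask processing result, filled in later
        self.next = None

def build_task_list(task_id: str, sub_requests):
    """Chain the sliced sub-requests in slicing order; returns the head node (sketch)."""
    head = tail = None
    for seq, sub in enumerate(sub_requests):
        node = TaskNode(task_id, seq, sub)
        if head is None:
            head = tail = node
        else:
            tail.next = node
            tail = node
    return head
```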
On the basis of the foregoing embodiment, after the generating the target task linked list according to the input/output sub-request after the allocating the sequence code, the method further includes:
and deleting all the input and output sub-requests corresponding to the target task request in the target task linked list after the target task request is determined to be completed.
In the invention, the task linked list is repeatedly polled based on the second polling process, and corresponding asynchronous operation requests are arranged according to the state conditions of each request on the task linked list, and the task state of each request is updated in time when one processing process is completed. When a new IO task request is received, the new IO task request is directly added to the tail part of the task linked list. When a certain IO request task is determined to be completed, deleting the IO request task which is determined to be completed on the task linked list, ensuring that only task requests which are not completed are in the task linked list, and facilitating management and processing of subsequent tasks.
On the basis of the above embodiment, the preset polling condition includes a first polling procedure and a second polling procedure;
the obtaining, based on a preset polling condition, a subtask processing result corresponding to each input/output request subtask includes:
after determining that the event receiving mode at the current moment is the active polling event mode, judging whether a new task request sent by a client exists or not through the first polling process;
and if the new task request does not exist, judging whether subtask processing results corresponding to the subtask of each input/output request exist through the second polling process, and if so, acquiring the subtask processing results corresponding to the subtask of each input/output request based on a plurality of second polling processes.
In the invention, a sock_poller thread is started to receive newly issued tasks and IO request processing results; the first polling process polls for the arrival of new tasks, and the second polling process then polls for the arrival of IO request processing results. Fig. 3 is a schematic diagram of the polling procedure provided by the present invention. Referring to fig. 3, in the first polling process the system mainly polls whether a new task has been received; if the new task is a login task or a logout task, the task is handed to the pre-task processor, which mainly records the session state, so as to execute the corresponding login or logout operation.
Further, after the target task request is determined to be an input/output request, it is placed in the task list as a new task; each task request has a unique identification ID (i.e., a task identification number) used to distinguish the requested tasks in the task list. It is then judged whether the task request is a read/write IO request task; if so, it is further judged whether the target data length of the IO request task is larger than the second preset threshold, and if it is, the IO request task is sliced according to the second preset threshold to obtain a plurality of IO request subtasks. If the task request is not a read/write IO request task, it is a management IO request task, for example a creation, deletion, modification or query request; slicing need not be considered, and the IO request task can be processed directly. For an IO request task that needs slicing, each IO request subtask is processed correspondingly after slicing, and after the second polling process has polled that all IO request subtasks are finished, the IO request results are combined to obtain the final IO request task processing result.
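The two-level polling loop described above might be sketched as follows; every callable here is a hypothetical stand-in for the storage back end's real interfaces, so this shows only the control flow, not the patent's implementation:

```python
import time

def poller_loop(recv_new_task, recv_io_result, handle_new_task, handle_io_result,
                idle_sleep=0.0001):
    """Two-level polling sketch: the outer ("big") poll checks for new task requests,
    the inner ("small") poll checks for IO completion results."""
    while True:
        task = recv_new_task()                # first (big) polling process
        if task is not None:
            handle_new_task(task)             # login/logout handling or IO dispatch
            continue
        result = recv_io_result()             # second (small) polling process
        if result is not None:
            handle_io_result(result)          # update subtask state, maybe merge
        else:
            time.sleep(idle_sleep)            # nothing pending; avoid a hot spin
```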
On the basis of the above embodiment, the method further includes:
if the new task request sent by the client exists, judging the input/output command type of the new task request according to field information in the new task request;
if the new task request is the input/output read/write command type request, judging whether the data length corresponding to the new task request is larger than the second preset threshold value, if so, determining that the data length corresponding to the new task request meets the preset condition, and slicing the data corresponding to the new task request;
and if the new task request is an input/output management command type request, directly carrying out corresponding input/output processing on the new task request, judging whether a new task processing result corresponding to the new task request exists or not through the second polling process, and if so, returning the new task processing result to the client.
In the invention, for a new task request sent by a client, firstly, judging the input/output command type of the new task request according to field information in the request, and if the new task request is an input/output read/write command type request, judging whether the data length corresponding to the new task request is greater than a second preset threshold value. And when the data length is larger than a second preset threshold value, determining that the data length corresponding to the new task request meets a preset condition. And further, after the data length is determined to meet the preset condition, slicing the new task request so as to facilitate subsequent processing and management.
If the new task request is an input/output management command type request, the system directly performs corresponding input/output processing on the new task request; then, through a second polling process, whether a task processing result corresponding to the new task request exists or not is judged. If a task processing result exists in the polling process, the processing result is returned to the client so that the client can acquire the latest task processing result, and the new task request sent by the client can be effectively processed and fed back in time.
On the basis of the foregoing embodiment, after determining whether the data length of the target data is greater than a second preset threshold when the target task request is determined to belong to an input/output read/write command type request, the method further includes:
if the data length of the target data is smaller than or equal to the second preset threshold value, directly performing corresponding input and output processing on the target task request, judging whether the target task processing result corresponding to the target task request exists or not through the second polling process, and if so, returning the target task processing result to the client.
In the invention, when the target data is an input/output read/write command type request and the data length is smaller than or equal to a second preset threshold value, the corresponding input/output processing is directly carried out on the target task request; and then, judging whether a target task processing result corresponding to the target task request exists or not through a second polling process, and if the target task processing result is polled, returning the processing result to the client so as to ensure that the client can acquire the processing result of the target task request in time, thereby realizing effective feedback of task processing.
On the basis of the above embodiment, after determining that all the input/output request subtasks corresponding to the target data have been completely processed, combining all the subtask processing results to obtain a target task processing result of the target task request, including:
based on the second polling process, judging whether all the input/output request subtasks corresponding to the target data are processed completely according to the processing progress state of the target task request, and if so, combining all the subtask processing results to obtain the target task processing result.
In the present invention, if no new task is received, or the preprocessing of the new task has been completed in the first polling process, the second polling process continues to poll whether an IO request task processing result has been returned; if not, the next round of big polling is entered. If a result has been returned, it is judged whether the target data length of the parent task corresponding to the IO request task processing result is greater than the second preset threshold (referring to Fig. 3, whether the target data of the task corresponding to the received processing result is greater than the second preset threshold). If it is, the result is a subtask processing result and is combined with the corresponding processing results of the other subtasks; otherwise no combination is needed, i.e., the parent task corresponding to the processing result was not sliced.
It is then judged whether the task is completed. A task that was not sliced is regarded as completed once its processing result is obtained; a task that was sliced (such as the target task request) is determined to be completed only after the processing results of all of its IO request subtasks are obtained. A completed task is deleted from the task list and the next polling continues; if the task is not completed, the next polling likewise continues.
The present invention monitors the processing progress state of the target task request by means of the second polling process. When the polling determines that all input/output request subtasks have been processed, the subtask processing results are combined to obtain the final processing result of the target task, so that the system can return the complete processing result of the target task to the client in time once all of its subtasks are completed.
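Continuing the illustrative sketch above (names remain assumptions, not the disclosed implementation), the combination of subtask processing results in the second polling process may look as follows: the parent task is regarded as finished only when every subtask result has arrived, after which the results are merged in sequence-code order.

```python
def on_subtask_result(task, seq, data):
    """Record one subtask processing result; when every subtask of the parent
    task is done, merge the results in sequence-code order and return them."""
    sub = next(s for s in task.subrequests if s.seq == seq)
    sub.done, sub.result = True, data
    if all(s.done for s in task.subrequests):
        ordered = sorted(task.subrequests, key=lambda s: s.seq)
        return b"".join(s.result for s in ordered)   # target task processing result
    return None                                      # keep polling for the rest
```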
On the basis of the foregoing embodiment, the determining, based on the second polling procedure, whether all the input/output request subtasks corresponding to the target data are processed completely according to the processing progress status of the target task request, and if all the processing is completed, combining all the subtask processing results to obtain the target task processing result includes:
in the second polling process, if the initialization process of the input/output request resource corresponding to the target task request is completed, updating the target task request from an initialization processing progress state to an input/output processing state;
when the target task request is in the input/output processing state, processing a plurality of input/output request subtasks corresponding to the target task request, and acquiring input/output secondary processing states corresponding to the input/output request subtasks based on the second polling process;
If the input/output secondary processing states corresponding to the input/output request subtasks are determined to be completely processed, updating the target task request from the input/output processing states to a task completion state, and combining all the subtask processing results to obtain the target task processing result.
In the present invention, in the second polling procedure, if the initialization procedure (INIT state) of the input-output request resource corresponding to the target task request is found to be completed, the system will update the processing progress state of the target task request from the initialization state to the input-output processing state (IO state).
When the target task request is in the IO state, the system processes a plurality of input/output request subtasks corresponding to the target task request. And acquiring the secondary processing states of the input and output corresponding to the subtasks of the input and output requests through a second polling process. And then, judging whether all the input/output request subtasks are completely processed according to the input/output secondary processing states corresponding to the input/output request subtasks.
If it is determined that all the input/output request subtasks have been processed, the processing state of the target task request is updated from the input/output processing state to a task completion state (FINISH state), and then the processing results of all the subtasks are combined to obtain the final processing result of the target task.
Specifically, in the present invention, the state value of the actual processing progress of the current IO task request is recorded by a primary state machine and updated in time. For the primary state machine, when the system receives an IO task request, the state machine is set to the INIT state and the related resources are initialized; after the related resources of the IO task request have been initialized, the state is updated to the IO state and the asynchronous IO tasks are issued; after all asynchronous IO task operations of the IO task request are polled as completed, the state is updated to the FINISH state and the processing result is returned, at which point the IO processing is finished. That is, the transition sequence of the primary state machine under normal conditions is: INIT state → IO state → FINISH state.
In the second polling process, the processing state of the target task request is updated according to the initialization status of the input/output request resources, the input/output request subtasks are then processed, and the input/output secondary processing state of each subtask is obtained. After all subtasks have been processed, the processing state of the target task is updated and the processing results of all subtasks are combined to obtain the final processing result of the target task. This ensures that the system can effectively monitor the processing state of the task and obtain the complete task processing result once all subtasks are completed.
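For illustration only, the primary state machine described above may be sketched as follows; the request attributes and methods (resources_ready, issue_async_subtasks and so on) are assumptions for the example rather than the disclosed interfaces.

```python
from enum import Enum, auto

class PrimaryState(Enum):
    INIT = auto()     # related resources of the IO task request are being initialized
    IO = auto()       # asynchronous IO subtasks issued; polled until all complete
    RETRY = auto()    # a subtask reported an error and is re-executed
    FINISH = auto()   # all subtasks done; result returned, request removed from the list

def advance_primary(req):
    """One polling step of the primary state machine for a single request."""
    if req.state is PrimaryState.INIT and req.resources_ready():
        req.state = PrimaryState.IO
        req.issue_async_subtasks()
    elif req.state is PrimaryState.IO:
        if req.any_subtask_failed():
            req.state = PrimaryState.RETRY
        elif req.all_subtasks_done():
            req.state = PrimaryState.FINISH
            req.return_result()
```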
On the basis of the above embodiment, the method further includes:
when the target task request is in the input/output processing state, if the input/output secondary processing state corresponding to the input/output request subtask is marked as an abnormal state or an error state in the second polling process, updating the target task request from the input/output processing state to a retry state;
and processing the plurality of input/output request subtasks corresponding to the target task request again based on the retry state.
In the present invention, during processing in the IO state, if the second polling process finds that an error or exception has occurred in a subtask link of the IO task request, the occurrence is recorded, the state is set to the RETRY state, and the IO operation is re-executed for the IO subtask that failed.
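A minimal sketch of this retry handling, under the same assumptions as the primary state machine sketch above, could be:

```python
def handle_retry(req):
    """Re-execute only the IO request subtasks whose secondary state was marked
    as abnormal or erroneous, then resume normal polling in the IO state."""
    failed = [s for s in req.subrequests if getattr(s, "failed", False)]
    for sub in failed:
        req.reissue(sub)                  # re-issue the asynchronous IO operation
    req.state = PrimaryState.IO           # return from RETRY to the IO state
```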
On the basis of the above embodiment, after the updating of the target task request from the input-output processing state to the retry state, the method further includes:
and locating the input/output request subtask that failed in the input/output processing process according to the input/output secondary processing state marked as the abnormal state or the error state.
In the invention, if an abnormal state or an error state occurs in a certain input/output request subtask, the abnormal state or the error state is marked. Then, based on the subtasks marked as abnormal or erroneous, the system will perform a positioning operation to determine the specific cause of the processing failure. The locating process may include looking at an error log, analyzing error information, or investigating the environment in which the error occurred, etc. By positioning operation, the system can determine the specific position and reason of the input/output request subtask which fails in the input/output processing process, the fault tolerance and stability of the system can be improved, and the correctness and accuracy of the input/output processing process are ensured.
On the basis of the above embodiment, the input/output secondary processing state includes an uninitialized state, an entering execution state, a synchronous cache completion state, a processing state, a re-execution state and a subtask completion state;
the obtaining, based on the second polling procedure, the input/output secondary processing state corresponding to each of the input/output request subtasks includes:
after the target task request is determined to be in the input/output processing state, executing corresponding asynchronous processing operation on each input/output request subtask based on the hardware type of the storage system to be subjected to read/write operation of the target task request, and obtaining the input/output secondary processing state corresponding to each input/output request subtask.
Fig. 4 is a schematic diagram of the two-level state machine provided by the present invention, and Fig. 5 is a schematic diagram of the two-level state transition process provided by the present invention. Referring to Fig. 4 and Fig. 5, state values are tracked by a two-level state machine, i.e., a primary state and a secondary state. The primary state comprises four stages: the INIT state, the IO state, the FINISH state and the RETRY state. The IO state is further refined by the secondary state, which comprises seven stages: an uninitialized state (UNINIT state), an entering execution state (GETED_ENTRY state), a synchronous cache state (SYNC_CACHE state), a synchronous cache completion state (SYNC_CACHE_DONE state), a processing state (PROCESS state), a re-execution state (RETRY state, the retry state within the secondary state) and a subtask completion state (DONE state). When the secondary state is updated to the DONE state, the secondary states and their related processing are all completed, which in turn determines that the IO state of the primary state is completed. After the processing of the current IO task request is finished (i.e., it is in the FINISH state), the state of the next IO task request is polled and recorded, at which point the primary state machine is set to the INIT state.
In the present invention, a state machine transition mechanism is introduced, which makes asynchronous IO processing easy to implement: after an asynchronous IO task is issued, there is no need to wait synchronously for that stage to finish; the state value is updated directly and the corresponding task completion flag is checked at the next poll. If the flag shows the stage is not yet finished, polling continues; if it is finished, processing proceeds to the next state stage. At the same time, the state stage currently being executed by any IO request can be queried in real time, so that once an IO request gets stuck, the efficiency of locating the problem is greatly improved.
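For illustration only, the secondary states and the asynchronous polling pattern described above may be sketched as follows; the attributes pending_op and completed() are assumptions for the example, and the transition function advance_secondary is sketched further below, after the hardware-type discussion.

```python
from enum import Enum, auto

class SecondaryState(Enum):
    UNINIT = auto()           # acquiring metadata of the data management unit
    GETED_ENTRY = auto()      # metadata obtained; dispatch by request and hardware type
    SYNC_CACHE = auto()       # data-disk read finished; writing the data into the cache
    SYNC_CACHE_DONE = auto()  # data available in (or bypassing) the system cache
    PROCESS = auto()          # target-data disk IO issued; result being checked
    RETRY = auto()            # result check failed; the IO operation is re-executed
    DONE = auto()             # subtask completed

def poll_subtask(sub):
    """One step of the second polling process for a single subtask: if the
    asynchronous operation issued for the current stage has completed, advance
    the secondary state machine; otherwise leave it for the next poll."""
    if sub.pending_op is None or sub.pending_op.completed():
        advance_secondary(sub)    # transition function sketched further below
```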
On the basis of the foregoing embodiment, the executing, based on the hardware type of the storage system to which the target task requests to perform the read-write operation, a corresponding asynchronous processing operation on each of the input-output request subtasks, to obtain an input-output secondary processing state corresponding to each of the input-output request subtasks, includes:
when the target task request is determined to be the input/output read/write command type request, if the hardware type of the storage system to be subjected to read/write operation is a full flash type, after the input/output request subtask is initiated, updating the input/output request subtask from the entering execution state to the synchronous cache completion state;
If the hardware type of the storage system to be subjected to read-write operation is a mixed flash type, and the request data corresponding to the input-output request subtask is hit in a system cache, updating the input-output request subtask from the entering execution state to the synchronous cache completion state;
if the hardware type of the storage system to be subjected to read-write operation is a mixed flash type and the request data corresponding to the input-output request subtask is not hit in a system cache, a data disk read request is initiated; updating the entering execution state into the synchronous cache state after the execution completion of the data disk reading request is determined based on the second polling process, and updating the synchronous cache state into the synchronous cache completion state after the data read by the data disk reading request is written into a system cache;
updating the synchronous cache completion state into the processing state when all the input/output request subtasks are determined to be in the synchronous cache completion state, checking the processing results of the input/output request subtasks, and updating the processing state into the subtask completion state after the checking is determined to be completed and the processing results are normal;
and the determining, according to the input/output secondary processing states corresponding to the input/output request subtasks, that all the input/output request subtasks have been completely processed comprises:
and when all the input/output request subtasks in the target task request are in the subtask completion state, determining that all the input/output request subtasks are completely processed.
Fig. 6 is a schematic diagram of the state transitions of the secondary state machine provided by the present invention. Referring to Fig. 6, in the present invention, when an IO task request enters the asynchronous IO stage of the primary state (i.e., the IO state), the secondary state machine is set to the UNINIT state and a request is initiated to acquire the metadata-related information of the data management unit where the target data is located. In the second polling process, when it is polled that the IO task request has successfully acquired the metadata-related information, the state is updated to the GETED_ENTRY state, and the request is further processed according to the back-end storage system information and the IO task request type.
On the basis of the above embodiment, the method further includes:
when the target task request is determined to be an input/output management command type request, after the input/output request subtask is initiated, updating the input/output request subtask from the entering execution state to the synchronous cache completion state, and initiating a request task for creating the corresponding logical unit number metadata.
In the present invention, when an IO task request enters the IO state of the primary state, the states within the IO stage are further subdivided by the secondary state machine. Specifically, different state transition procedures are adopted for input/output management command type requests and input/output read/write command type requests. When the IO task request is an input/output management command type request of the query class, the acquired information such as metadata is recorded into the buffer corresponding to the IO task request and the state is updated to the DONE state; if the IO task request is an input/output management command type request other than the query class (such as create, delete or modify), the state is updated to the SYNC_CACHE_DONE state and a disk IO request task for creating the metadata of the logical unit number (Logical Unit Number, LUN for short) is initiated.
In the present invention, when the IO task request is an input/output read/write command type request, if the hardware type of the back-end storage system corresponding to the IO task request is the full flash type (i.e., the storage system contains only SSDs), the read/write disk IO task request is initiated and the state is updated to the SYNC_CACHE_DONE state. If the hardware type is the mixed flash type (the storage system contains both HDDs and SSDs) and the request data corresponding to the input/output request subtask hits the system cache (CACHE), the state is likewise updated directly to the SYNC_CACHE_DONE state. If the hardware type is the mixed flash type and the request data misses the cache, a request to read the data disk (i.e., a data disk read request) is initiated; the state does not need to be updated at this point and remains in the GETED_ENTRY state. When the second polling process polls that the data disk read of this IO task request has completed, the state is updated to the SYNC_CACHE state and a request to write the data read from the data disk into the cache disk is initiated; when it is polled that the data has been written into the cache disk, the SYNC_CACHE state is updated to the SYNC_CACHE_DONE state and the target-data read/write disk IO task request is initiated. Finally, after the target-data read/write disk IO task request is polled as completed, the state is updated to the PROCESS state and the result is checked. When the check of the IO task request result is polled as completed but the processing result is abnormal, the state is updated to the RETRY state; when the check is completed and the processing result is normal, the state is updated to the DONE state and all asynchronous IO tasks are finished. By adopting the two-level state machine transition, the present invention realizes asynchronous IO processing capability in high-concurrency scenarios and can effectively reduce IO request processing latency.
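For illustration only, the hardware-type-dependent secondary state transitions for a read/write subtask described above may be sketched as follows; all helper names (metadata_ready, cache_hit, issue_data_disk_read and so on) are assumptions for the example and not the disclosed interfaces.

```python
def advance_secondary(sub):
    """Secondary state transitions for a read/write subtask, depending on the
    back-end hardware type (full flash vs. mixed flash) and on a cache hit."""
    s = SecondaryState
    if sub.state is s.UNINIT and sub.metadata_ready():
        sub.state = s.GETED_ENTRY
    elif sub.state is s.GETED_ENTRY:
        if sub.backend == "full_flash" or sub.cache_hit():
            sub.issue_target_disk_io()          # read/write the target data directly
            sub.state = s.SYNC_CACHE_DONE
        elif not sub.data_disk_read_issued:     # mixed flash, cache miss
            sub.issue_data_disk_read()          # state stays GETED_ENTRY meanwhile
        elif sub.data_disk_read_done():
            sub.issue_cache_write()             # write the data just read into the cache disk
            sub.state = s.SYNC_CACHE
    elif sub.state is s.SYNC_CACHE and sub.cache_write_done():
        sub.issue_target_disk_io()
        sub.state = s.SYNC_CACHE_DONE
    elif sub.state is s.SYNC_CACHE_DONE and sub.target_io_done():
        sub.state = s.PROCESS
        sub.check_result()
    elif sub.state is s.PROCESS and sub.check_done():
        sub.state = s.DONE if sub.result_ok() else s.RETRY
```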
Further, the input/output request processing method is described as a whole. In the method, the first threshold is repeatedly adjusted in an early stage and several preliminary performance tests are carried out, and the first preset threshold that best matches the server environment and gives the best performance is selected according to resources such as system interrupt resources, CPU computing resources, disk performance and the number of disks. Meanwhile, the minimum data management unit size of the back-end storage system is determined; in the present invention, the minimum data management unit is set to 4 MB and used as the second preset threshold.
Further, after the back-end storage system is initialized, the event receiving mode of passively receiving event notifications is adopted by default, and the flow monitor is in the closed state by default. Meanwhile, a sock_poller thread is started to receive the issued new tasks and IO task request processing results; in each big polling round it first polls whether a new task has arrived and then polls whether an IO request processing result has arrived.
Further, when the client sends a login task request to the back-end storage system, the back-end storage system receives the login task event by passively receiving the event notification, and the flow monitor is started. Once started, the flow monitor monitors the data volume and data rate of the command requests in real time. During polling, when a new task is polled, i.e., the login task event arrives, the task is handed directly to the front-end task processor for processing and the current session state is recorded. After the task is completed, the processing result is returned to the client through the corresponding kernel communication module, so that the client receives the processing result of the login task request and a session is successfully established between the client and the back-end storage system.
Further, the client sends a command request for creating a logical unit number (create lun) to the back-end storage system, which receives the create-logical-unit-number event by passively receiving the event notification. When this new task event is polled, the task is judged to be an IO-related task, a new task is allocated and put into the task list, and an identification ID is assigned: 0x11111111. According to the corresponding field information, the task is determined to be a management IO task request, so no IO request slicing is needed and the IO request is processed directly. The polling then continues to check whether a returned IO processing result exists; if not, the next round of big polling is entered directly.
In the present invention, when an IO task request is received, the primary state machine is set to the INIT state and the related resources are initialized; after the related resources of the IO task request have been initialized, the state is updated to the IO state, the asynchronous IO task is issued, and the management flow of the secondary state machine is entered: the secondary state machine is set to the UNINIT state and a request is initiated to acquire the metadata-related information of the data management unit where the target data is located. When it is polled that the IO task request has not obtained the metadata-related information (because the corresponding logical unit number does not yet exist in the back-end storage system, the metadata and other related information have not yet been updated), the state is still updated to the GETED_ENTRY state. The request is then judged to be a management IO task request, the state is updated to the SYNC_CACHE_DONE state, and a disk IO request for creating the metadata of the logical unit number is initiated. After the disk IO request for creating the logical-unit-number metadata is polled as completed, the state is updated to the PROCESS state and the result is checked.
In the present invention, when the check of the IO request result is polled as completed and the processing result is normal, the state is updated to the DONE state, all asynchronous IO tasks are finished, the management flow of the secondary state machine is exited, and the flow returns to the management of the primary state machine. When all asynchronous IO task operations of the IO task request are completed, the state is updated to the FINISH state, the processing result is returned, and the IO task request is deleted from the request linked list.
In the present invention, when a returned processing result is polled and it is identified that no IO request slicing was performed for the task (ID: 0x11111111), the task is considered completed, its processing result is returned, the task is deleted from the task list, and the next round of polling continues.
When the client continues to issue "data read/write" command requests to the back-end storage system with high concurrency, the flow monitor finds that the data volume and concurrency of the current command requests have increased significantly and exceed the first preset threshold. At this point the event receiving mode is still the passive notification mode, which has become a bottleneck affecting the performance of the storage system, so the event receiving mode needs to be adjusted to the active polling event mode.
Further, taking the example in which the client sends a "read 40MB data" command request to the back-end storage system, the whole IO flow of this command request is described in detail. In the active polling event mode, the back-end storage system continuously polls the kernel-mode/user-mode data communication interface with the poller thread. When the new task request "read 40MB data" is polled, the task is judged to be an IO-related task, a new task is allocated and put into the task list, and an identification ID is assigned: 0x12345678. Further, the task is judged to be a read/write IO task request whose target data length of 40 MB is greater than the second preset threshold of 4 MB, so the IO task request is sliced according to the second preset threshold into a plurality of sub IO request tasks.
Fig. 7 is a schematic diagram of slicing an input/output request task provided by the present invention. Referring to Fig. 7, the uppermost request data is the 40 MB data buffer to be read, and the lowermost disk area shows the position of the target data on the corresponding disk, where each 4 MB block is a minimum management unit of the back-end storage system. Because part of area 1 of the disk already holds data corresponding to a previous task request, the target data starts immediately after that data within area 1, so when the target data is split according to the second preset threshold it yields 11 IO read requests ("read data"). Each sliced IO read request is assigned a unique sequence code (seq); the sequence codes of the 11 IO read requests are, in order, 0x00000001, 0x00000002, 0x00000003, …, 0x0000000b.
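For illustration only, the arithmetic behind the 11 slices in this example can be reproduced as follows; the 1 MB start offset is an assumption standing in for the unaligned start within area 1 of the disk.

```python
MB = 1024 * 1024
UNIT = 4 * MB                              # minimum data management unit / second threshold
offset = 1 * MB                            # assumed unaligned start inside disk area 1
length = 40 * MB                           # "read 40MB data"

first_unit = offset // UNIT
last_unit = (offset + length - 1) // UNIT
n_slices = last_unit - first_unit + 1
print(n_slices)                            # 11 subrequests: seq 0x00000001 .. 0x0000000b
```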
Further, after the slicing of the IO task request is completed, the 11 "read data" IO read requests are received through one or more polls and added to the tail of the request linked list; meanwhile, the primary state machine is set to the INIT state and the related resources are initialized.
Further, take the IO request subtask with task identification ID 0x12345678 and sequence code seq 0x00000003 as an example. When it is polled that the related resource initialization of this IO request has been completed, the state is updated to the IO state, the asynchronous IO task is issued, and the management flow of the secondary state machine is entered: the secondary state machine is set to the UNINIT state and a request is initiated to acquire the metadata-related information of the data management unit where the target data is located. When it is polled that the IO request has acquired the metadata-related information, the state is updated to the GETED_ENTRY state; the request is judged to be a read/write IO and the back-end storage system to be of the full flash type, so an asynchronous disk IO read request is initiated and the state is updated to the SYNC_CACHE_DONE state. When the asynchronous disk IO request is polled as completed, the state is updated to the PROCESS state and the result is checked. When the check of the IO request result is polled as completed and the processing result is normal, the state is updated to the DONE state, all asynchronous IO tasks are finished, the management flow of the secondary state machine is exited, and the flow returns to the management of the primary state machine. When all asynchronous IO task operations of this IO request are completed, the state is updated to the FINISH state, the processing result is returned to the task processing module, and the IO request is deleted from the request linked list.
In the second polling process, when this IO processing result is polled, it is identified that the target data length of the parent task (ID: 0x12345678) of the subtask (seq: 0x00000003) is greater than the second preset threshold, i.e., the IO request was sliced. Since the sliced IO request subtasks have not all been completed, the processing result of this subtask is combined with the processing results of the other completed subtasks, and the next polling continues while waiting for the remaining subtask results. After all remaining IO request subtasks are completed, the final processing result is obtained by combination.
Finally, when the client sends a task cancellation request to the back-end storage system, the back-end storage system receives the cancellation task event and the flow monitor is closed. Meanwhile, it is judged whether the current event receiving mode is the active polling event mode; if so, the active polling event mode is switched back to the passive event notification mode.
The following describes the input/output request processing system provided by the present invention, and the input/output request processing system described below and the input/output request processing method described above may be referred to correspondingly.
Fig. 8 is a schematic structural diagram of an input/output request processing system provided by the present invention, and as shown in fig. 8, the present invention provides an input/output request processing system, which includes a task processing module 801 and an input/output processing module 802, where the task processing module 801 is configured to determine a data length of target data corresponding to a target task request when determining that the target task request received currently is an input/output request, and if it is determined that the data length of the target data meets a preset condition, perform slicing processing on the target task request, and generate a plurality of corresponding input/output request subtasks according to a slicing processing result; the input/output processing module 802 is configured to obtain subtask processing results corresponding to the input/output request subtasks based on a preset polling condition, and combine all the subtask processing results after determining that all the input/output request subtasks corresponding to the target data have been processed completely, so as to obtain a target task processing result of the target task request.
In the present invention, corresponding modules (such as a task processing module 801 and an input/output processing module 802) are constructed in the back-end storage system, so as to form an input/output request processing system with corresponding functions in the back-end storage system. Specifically, in the present invention, the input-output request processing system needs to determine whether the currently received target task request is an input-output request. In an embodiment, the task processing module 801 may monitor, through the traffic monitor, the interface where the target task request is received in real time, so as to obtain the data volume of the target task request, and further determine that the target task request is an input/output request according to a comparison result between the data volume monitored in real time and a preset threshold (i.e., a first preset threshold).
Further, the task processing module 801 determines, for a target task request determined as an input/output request, a data length of target data corresponding to the target task request, and once the data length of the target data is known, the target task request can be compared with a preset condition. In the present invention, the preset condition may be set to a preset data length (i.e., a second preset threshold) for determining whether the slicing process is required for the target task request. If the judging result shows that the data length of the target data meets the preset condition (the data length is larger than the second preset threshold), the slicing processing can be performed on the target task request.
In the slicing process, a plurality of corresponding input/output request subtasks are generated, each of which represents a data segment in the original target task request and has corresponding input/output request information. Through this process, the task processing module 801 can split a large target task request into multiple smaller input/output request subtasks, so as to more efficiently process and execute in the system, and can improve the parallelism, response speed and resource utilization of the IO processing in the system.
Further, the preset polling conditions in the input/output processing module 802 determine when and how the subtask processing results are acquired, including the polling time interval and the trigger conditions. In the present invention, the polling process comprises a big polling process (i.e., the first polling process) and a small polling process (i.e., the second polling process). A sock_poller thread is started to receive new task requests sent by the client and the IO request processing results corresponding to the IO requests: in each big polling round, the input/output processing module 802 first polls whether a new task request has arrived and then polls, through the small polling process, whether an IO request processing result has arrived. This two-level polling reduces the occupation of operating system interrupt resources and improves the overall performance of the operating system.
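For illustration only, the two-level polling of the sock_poller thread may be sketched as follows; task_source, result_source and the helper functions are assumptions for the example, with poll() assumed to return None when nothing has arrived.

```python
def sock_poller_loop(task_source, result_source, task_list, running):
    """Two-level polling of the sock_poller thread: each big poll first checks
    for a new task request (first polling process) and then for returned IO
    request processing results (second polling process)."""
    while running():
        new_task = task_source.poll()            # first polling process
        if new_task is not None:
            dispatch_new_task(new_task, task_list)
        result = result_source.poll()            # second polling process
        if result is not None:
            finished = handle_io_result(result, task_list)
            if finished is not None:
                return_to_client(finished)       # complete target task result
```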
Further, for each input/output request subtask, the input/output processing module 802 periodically queries or checks its processing status according to the preset polling condition to obtain the processing result. When the input/output processing module 802 obtains the processing result corresponding to an input/output request subtask, the result is stored or cached to await subsequent processing and combination. After determining that all input/output request subtasks corresponding to the target data have been processed, the input/output processing module 802 combines the processing results of all subtasks to obtain the final processing result of the target task request, which can then be reported to the user, stored, transmitted or otherwise further processed, helping to ensure the integrity and accuracy of the task and improving the reliability and efficiency of the system.
On the basis of the above embodiment, the system further includes an event receiving module, where the event receiving module is specifically configured to:
receiving the target task request sent by a client;
judging the task event type of the target task request, and if the task event type of the target task request is a login task event, monitoring the data volume of the target data corresponding to the target task request to obtain a target data real-time monitoring result;
judging whether the real-time monitoring result of the target data is larger than a first preset threshold value, and if so, determining that the target task request is the input/output request;
if the event receiving mode at the current moment is a passive event receiving notification mode, switching the passive event receiving notification mode into an active polling event mode.
Fig. 9 is a schematic diagram of the overall structure of the input/output request processing system provided by the present invention. Referring to Fig. 9, the back-end storage system may be divided, according to its main functions, into an event receiving module, a task processing module and an input/output processing module. The client and the back-end storage system communicate through the kernel communication module; after the kernel communication module receives a command request from the client, it passes the request to the back-end storage system. The back-end storage system provides two message communication mechanisms. In the first, the operating system generates a system interrupt and notifies the event receiving module of the back-end storage system of the new message by way of an event notification; this mechanism uses few CPU resources but consumes more system interrupt resources. In the second, the event receiving module of the back-end storage system provides a poller thread that continuously polls the kernel-mode/user-mode data communication interface to check whether a new message has arrived, takes it out and processes it if so, and otherwise continues polling; this mechanism does not occupy operating system interrupt resources but uses more CPU resources.
In the present invention, the event receiving module of the back-end storage system also contains a flow monitor, which is mainly used to monitor the traffic of command requests at the entrance of the back-end storage system in real time and, based on this, to dynamically adjust the message communication mechanism of the back-end storage system, thereby optimizing system performance. When the event receiving module of the back-end storage system receives a command request, the request is passed to the task processing module for message processing. Specifically, the task manager parses the request command and divides command requests into two major categories that are processed separately. One category consists of command requests unrelated to disk IO, such as login requests, logout requests and communication-related management requests; these are processed directly in the front-end task processor and the processing result is returned immediately. The other category consists of command requests directly related to disk IO, for example Small Computer System Interface (SCSI) request commands; the task manager allocates a task for each such IO task request and puts it into the task list for processing.
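For illustration only, the traffic-driven switching between the two message communication mechanisms may be sketched as follows; the threshold value and the mode names are assumptions for the example (the actual first preset threshold is selected by the tests described above).

```python
FIRST_THRESHOLD = 100 * 1024 * 1024     # assumed traffic threshold, tuned by preliminary tests

def adjust_event_mode(flow_monitor, event_receiver):
    """Switch between passive event notification and active polling according
    to the real-time traffic reported by the flow monitor."""
    rate = flow_monitor.current_data_rate()                  # real-time monitoring result
    if rate > FIRST_THRESHOLD and event_receiver.mode == "passive_notification":
        event_receiver.mode = "active_polling"               # poller thread scans the interface
    elif rate <= FIRST_THRESHOLD and event_receiver.mode == "active_polling":
        event_receiver.mode = "passive_notification"         # rely on OS interrupt notification
```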
In the present invention, the IO processing module performs the core disk IO execution work of the back-end storage system, and the IO concurrency in this link is the highest. After the IO processing module receives an IO task request, it adds the request to the IO request linked list and uses the poll processor to continuously poll all IO requests on the list, recording the execution progress and state of each IO request by means of the state machines. If the execution of an IO request is completed, its processing result is returned; otherwise it continues to be polled and processed.
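For illustration only, the poll processor of the IO processing module may be sketched as follows, reusing the state machine sketches given earlier; the request linked list is modelled here as a plain Python list.

```python
def poll_io_requests(request_list):
    """Poll processor of the IO processing module: walk the IO request linked
    list, advance the state machines sketched above, and return the results of
    requests that have reached the FINISH state."""
    for req in list(request_list):
        advance_primary(req)                     # primary state machine step
        for sub in req.subrequests:
            poll_subtask(sub)                    # secondary state machine step
        if req.state is PrimaryState.FINISH:
            request_list.remove(req)             # deleted from the request linked list
```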
The input/output request processing system provided by the invention judges the data length of the target data corresponding to the received input/output request, if judging that the data length of the target data meets the preset condition, carries out slicing processing on the target task request to obtain a plurality of input/output request subtasks, and based on the preset polling condition, obtains the target task processing result of the target task request after all the input/output request subtasks corresponding to the target data are determined to be processed completely, and improves the overall performance of the storage system while improving the input/output throughput rate of the system.
The system provided by the invention is used for executing the method embodiments, and specific flow and details refer to the embodiments and are not repeated herein.
Fig. 10 is a schematic structural diagram of an electronic device according to the present invention, as shown in fig. 10, the electronic device may include: a Processor (Processor) 1001, a communication interface (Communications Interface) 1002, a Memory (Memory) 1003, and a communication bus 1004, wherein the Processor 1001, the communication interface 1002, and the Memory 1003 perform communication with each other through the communication bus 1004. The processor 1001 may call logic instructions in the memory 1003 to perform an input-output request processing method, the method comprising: when the currently received target task request is determined to be an input/output request, judging the data length of target data corresponding to the target task request, if the data length of the target data is judged to meet the preset condition, slicing the target task request, and generating a plurality of corresponding input/output request subtasks according to slicing results; based on a preset polling condition, acquiring subtask processing results corresponding to the input and output request subtasks, and combining all the subtask processing results after determining that all the input and output request subtasks corresponding to the target data are processed completely, so as to acquire target task processing results of the target task request.
Further, the logic instructions in the memory 1003 described above may be implemented in the form of software functional units and sold or used as a separate product, and may be stored in a computer readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, are capable of performing the method of processing input and output requests provided by the above methods, the method comprising: when the currently received target task request is determined to be an input/output request, judging the data length of target data corresponding to the target task request, if the data length of the target data is judged to meet the preset condition, slicing the target task request, and generating a plurality of corresponding input/output request subtasks according to slicing results; based on a preset polling condition, acquiring subtask processing results corresponding to the input and output request subtasks, and combining all the subtask processing results after determining that all the input and output request subtasks corresponding to the target data are processed completely, so as to acquire target task processing results of the target task request.
In still another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform the input-output request processing method provided by the above embodiments, the method comprising: when the currently received target task request is determined to be an input/output request, judging the data length of target data corresponding to the target task request, if the data length of the target data is judged to meet the preset condition, slicing the target task request, and generating a plurality of corresponding input/output request subtasks according to slicing results; based on a preset polling condition, acquiring subtask processing results corresponding to the input and output request subtasks, and combining all the subtask processing results after determining that all the input and output request subtasks corresponding to the target data are processed completely, so as to acquire target task processing results of the target task request.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (19)
1. An input/output request processing method, comprising:
when the currently received target task request is determined to be an input/output request, judging the data length of target data corresponding to the target task request, if the data length of the target data is judged to meet the preset condition, slicing the target task request, and generating a plurality of corresponding input/output request subtasks according to slicing results;
based on a preset polling condition, acquiring subtask processing results corresponding to the input and output request subtasks, and combining all the subtask processing results after determining that all the input and output request subtasks corresponding to the target data are processed completely, so as to acquire target task processing results of the target task request;
before determining the data length of the target data corresponding to the target task request when the currently received target task request is determined to be the input/output request, the method further comprises:
receiving the target task request sent by a client;
judging the task event type of the target task request, and if the task event type of the target task request is a login task event, monitoring the data volume of the target data corresponding to the target task request to obtain a target data real-time monitoring result;
Judging whether the real-time monitoring result of the target data is larger than a first preset threshold value, and if so, determining that the target task request is the input/output request;
and if the event receiving mode at the current moment is a passive event receiving notification mode, switching the passive event receiving notification mode into an active polling event mode.
2. The method according to claim 1, wherein after said determining whether the real-time monitoring result of the target data is greater than a first preset threshold, the method further comprises:
if the real-time monitoring result of the target data is smaller than or equal to the first preset threshold, judging whether the event receiving mode at the current moment is the active polling event mode, and if so, switching the active polling event mode into the passive event receiving notification mode, wherein the passive event receiving notification mode is used for directly processing the target task request through a task processor and returning the processing result to the client;
and if the event receiving mode is not the active polling event mode, determining that the event receiving mode is the passive event receiving notification mode, and receiving a next task request sent by the client based on the passive event receiving notification mode.
3. The method according to claim 2, wherein after monitoring the data amount of the target data corresponding to the target task request and obtaining a real-time monitoring result of the target data, the method further comprises:
judging the task event type of the next task request sent by the client, and if the task event type of the next task request is a cancellation task event corresponding to the target task request, stopping monitoring the data volume of the target data, and switching the active polling event mode into the passive event receiving notification mode when determining that the event receiving mode at the current moment is the active polling event mode.
4. The method for processing an input/output request according to claim 1, wherein when determining that a currently received target task request is an input/output request, determining a data length of target data corresponding to the target task request includes:
when the target task request is determined to belong to an input/output read/write command type request, judging whether the data length of the target data is larger than a second preset threshold value, and if so, determining that the data length of the target data meets the preset condition.
5. The method according to claim 4, wherein slicing the target task request and generating a plurality of corresponding input/output request subtasks according to slicing results, comprises:
constructing a corresponding target task identification number according to the target task request;
based on the second preset threshold, the target task request is subjected to segmentation processing, and a plurality of input and output sub-requests are obtained;
and distributing corresponding sequence codes to the input/output sub-requests according to the segmentation sequence based on the target task identification numbers, and generating a target task linked list according to the input/output sub-requests distributed with the sequence codes according to the sequence codes, wherein the target task linked list comprises the input/output request sub-tasks corresponding to the input/output sub-requests.
6. The input/output request processing method according to claim 5, wherein the preset polling condition includes a first polling process and a second polling process;
the obtaining, based on a preset polling condition, a subtask processing result corresponding to each input/output request subtask includes:
After determining that the event receiving mode at the current moment is the active polling event mode, judging whether a new task request sent by a client exists or not through the first polling process;
and if the new task request does not exist, judging whether subtask processing results corresponding to the subtask of each input/output request exist through the second polling process, and if so, acquiring the subtask processing results corresponding to the subtask of each input/output request based on a plurality of second polling processes.
7. The input-output request processing method according to claim 6, characterized in that the method further comprises:
if the new task request sent by the client exists, judging the input/output command type of the new task request according to field information in the new task request;
if the new task request is the input/output read/write command type request, judging whether the data length corresponding to the new task request is larger than the second preset threshold value, if so, determining that the data length corresponding to the new task request meets the preset condition, and slicing the data corresponding to the new task request;
And if the new task request is an input/output management command type request, directly carrying out corresponding input/output processing on the new task request, judging whether a new task processing result corresponding to the new task request exists or not through the second polling process, and if so, returning the new task processing result to the client.
8. The method according to claim 6, wherein after determining whether the target task request belongs to an input-output read-write command type request, the method further comprises:
if the data length of the target data is smaller than or equal to the second preset threshold value, directly performing corresponding input and output processing on the target task request, judging whether the target task processing result corresponding to the target task request exists or not through the second polling process, and if so, returning the target task processing result to the client.
9. The method for processing an input/output request according to claim 8, wherein after determining that all of the input/output request subtasks corresponding to the target data have been processed completely, combining all of the subtask processing results to obtain a target task processing result of the target task request, comprises:
Based on the second polling process, judging whether all the input/output request subtasks corresponding to the target data are processed completely according to the processing progress state of the target task request, and if so, combining all the subtask processing results to obtain the target task processing result.
10. The method of claim 9, wherein the determining, based on the second polling procedure, whether all the input/output request subtasks corresponding to the target data are all processed according to the processing progress status of the target task request, and if all the processing is completed, combining all the subtask processing results to obtain the target task processing result includes:
in the second polling process, if the initialization process of the input/output request resource corresponding to the target task request is completed, updating the target task request from an initialization processing progress state to an input/output processing state;
when the target task request is in the input/output processing state, processing the plurality of input/output request subtasks corresponding to the target task request, and acquiring the input/output secondary processing states corresponding to the input/output request subtasks based on the second polling process;
if it is determined, according to the input/output secondary processing states corresponding to the input/output request subtasks, that all the input/output request subtasks have been completely processed, updating the target task request from the input/output processing state to a task completion state, and combining all the subtask processing results to obtain the target task processing result.
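A simplified C++ sketch of the task-level progress states in claims 9 and 10 (initialization, input/output processing, task completion, with subtask results merged at the end) is given below; the state names, the fixed subtask count, and the string-concatenation merge are assumptions chosen only to keep the example runnable:

```cpp
// Hypothetical sketch of the task-level state transitions (claims 9 and 10).
#include <cstddef>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

enum class TaskState { Initializing, IoProcessing, Completed };

struct TargetTask {
    TaskState state = TaskState::Initializing;
    std::vector<std::string> subtask_results;
    std::size_t expected_subtasks = 0;
};

void on_resources_initialized(TargetTask& t) {
    t.state = TaskState::IoProcessing;               // init done -> I/O processing
}

void on_subtask_result(TargetTask& t, std::string result) {
    t.subtask_results.push_back(std::move(result));
    if (t.state == TaskState::IoProcessing &&
        t.subtask_results.size() == t.expected_subtasks) {
        t.state = TaskState::Completed;              // all subtasks done -> merge
        std::string merged;
        for (const auto& r : t.subtask_results) merged += r;
        std::cout << "target task result: " << merged << "\n";
    }
}

int main() {
    TargetTask t;
    t.expected_subtasks = 2;
    on_resources_initialized(t);
    on_subtask_result(t, "part-a ");
    on_subtask_result(t, "part-b");
}
```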
11. The input-output request processing method according to claim 10, characterized in that the method further comprises:
when the target task request is in the input/output processing state, if the input/output secondary processing state corresponding to an input/output request subtask is marked as an abnormal state or an error state in the second polling process, updating the target task request from the input/output processing state to a retry state;
and processing the plurality of input/output request subtasks corresponding to the target task request again based on the retry state.
12. The input-output request processing method according to claim 11, wherein after the updating of the target task request from the input-output processing state to a retry state, the method further comprises:
and locating, according to the input/output secondary processing state marked as the abnormal state or the error state, the input/output request subtask that failed in the input/output processing process.
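The retry path of claims 11 and 12 can be illustrated by the following C++ sketch, which locates subtasks whose secondary state is marked abnormal or error and re-issues them; all types and the re-issue message are assumptions for the example:

```cpp
// Hypothetical sketch of retry handling and failure location (claims 11 and 12).
#include <iostream>
#include <vector>

enum class SubState { Done, Abnormal, Error };

struct Subtask { int id; SubState state; };

std::vector<int> locate_failed(const std::vector<Subtask>& subs) {
    std::vector<int> failed;
    for (const auto& s : subs)
        if (s.state == SubState::Abnormal || s.state == SubState::Error)
            failed.push_back(s.id);                  // pinpoint failing subtasks
    return failed;
}

int main() {
    std::vector<Subtask> subs = {{0, SubState::Done}, {1, SubState::Error}};
    auto failed = locate_failed(subs);
    if (!failed.empty()) {
        std::cout << "task enters retry state, re-issuing subtask(s):";
        for (int id : failed) std::cout << ' ' << id;
        std::cout << '\n';
    }
}
```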
13. The method according to claim 10, wherein the input/output secondary processing state includes an uninitialized state, an entering execution state, a synchronous cache completion state, a processing state, a re-execution state, and a subtask completion state;
the obtaining, based on the second polling process, the input/output secondary processing state corresponding to each of the input/output request subtasks includes:
after the target task request is determined to be in the input/output processing state, executing a corresponding asynchronous processing operation on each input/output request subtask based on the hardware type of the storage system on which the target task request is to perform a read/write operation, and obtaining the input/output secondary processing state corresponding to each input/output request subtask.
14. The method for processing an input/output request according to claim 13, wherein the executing a corresponding asynchronous processing operation on each of the input/output request subtasks based on the hardware type of the storage system on which the target task request is to perform a read/write operation, and obtaining the input/output secondary processing state corresponding to each of the input/output request subtasks, includes:
when the target task request is determined to be an input/output read/write command type request, if the hardware type of the storage system to be subjected to the read/write operation is a full flash type, after the input/output request subtask is initiated, updating the input/output request subtask from the entering execution state to the synchronous cache completion state;
if the hardware type of the storage system to be subjected to the read/write operation is a mixed flash type and the request data corresponding to the input/output request subtask is hit in a system cache, updating the input/output request subtask from the entering execution state to the synchronous cache completion state;
if the hardware type of the storage system to be subjected to the read/write operation is a mixed flash type and the request data corresponding to the input/output request subtask is not hit in the system cache, initiating a data disk read request; updating the entering execution state to the synchronous cache state after it is determined, based on the second polling process, that the data disk read request has been executed, and updating the synchronous cache state to the synchronous cache completion state after the data read by the data disk read request is written into the system cache;
updating the synchronous cache completion state to the processing state when all the input/output request subtasks are determined to be in the synchronous cache completion state, checking the processing results of the input/output request subtasks, and updating the processing state to the subtask completion state after it is determined that the check is complete and the processing results are normal;
and the determining, according to the input/output secondary processing states corresponding to the input/output request subtasks, that all the input/output request subtasks have been completely processed comprises:
and when all the input/output request subtasks in the target task request are in the subtask completion state, determining that all the input/output request subtasks are completely processed.
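A condensed C++ sketch of the per-subtask secondary states in claims 13 and 14 is shown below: a full flash backend, or a mixed flash backend with a cache hit, moves a submitted subtask straight to the synchronous cache completion state, while a mixed flash cache miss first waits in an intermediate cache-synchronization state for the data disk read; the enumerators and helper function are assumptions, not the patented implementation:

```cpp
// Hypothetical sketch of subtask secondary-state selection (claims 13 and 14).
#include <iostream>

enum class SubState {
    Uninitialized, Executing, SyncingCache, SyncCacheComplete,
    Processing, Retrying, SubtaskComplete
};
enum class Backend { FullFlash, MixedFlash };

SubState after_submit(Backend hw, bool cache_hit) {
    if (hw == Backend::FullFlash) return SubState::SyncCacheComplete;
    if (cache_hit)                return SubState::SyncCacheComplete;
    return SubState::SyncingCache;   // disk read in flight, polled by the second process
}

int main() {
    std::cout << int(after_submit(Backend::FullFlash, false)) << '\n';   // 3
    std::cout << int(after_submit(Backend::MixedFlash, true)) << '\n';   // 3
    std::cout << int(after_submit(Backend::MixedFlash, false)) << '\n';  // 2
}
```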
15. The input-output request processing method according to claim 13, characterized in that the method further comprises:
when the target task request is determined to be an input/output management command type request, after the input/output request subtask is initiated, updating the input/output request subtask from the entering execution state to the synchronous cache completion state, and initiating a request task for creating corresponding logical unit number metadata.
16. The method according to claim 5, wherein after the target task linked list is generated according to the input/output sub-requests to which the sequential codes have been allocated, the method further comprises:
and deleting, from the target task linked list, all the input/output sub-requests corresponding to the target task request after the target task request is determined to be completed.
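As a rough illustration of the target task linked list handling in claim 16, the following C++ sketch stores sub-requests with sequence codes in a linked list and removes every entry of a target task once that task completes; the structure and field names are assumptions:

```cpp
// Hypothetical sketch of the target task linked list cleanup (claim 16).
#include <iostream>
#include <list>

struct IoSubRequest { int task_id; int seq; };  // seq = allocated sequential code

int main() {
    std::list<IoSubRequest> task_list = {{42, 0}, {42, 1}, {42, 2}, {43, 0}};
    int completed_task = 42;
    // Remove every sub-request that belongs to the completed target task request.
    task_list.remove_if([&](const IoSubRequest& r) { return r.task_id == completed_task; });
    std::cout << "entries left: " << task_list.size() << '\n';  // 1
}
```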
17. An input-output request processing system, comprising:
the task processing module is used for judging the data length of target data corresponding to the target task request when the currently received target task request is determined to be an input/output request, slicing the target task request if the data length of the target data meets the preset condition, and generating a plurality of corresponding input/output request subtasks according to slicing results;
the input/output processing module is used for acquiring subtask processing results corresponding to the input/output request subtasks based on preset polling conditions, and combining all the subtask processing results after determining that all the input/output request subtasks corresponding to the target data have been completely processed, so as to obtain a target task processing result of the target task request;
the system further comprises an event receiving module, wherein the event receiving module is specifically used for:
receiving the target task request sent by a client;
judging the task event type of the target task request, and if the task event type of the target task request is a login task event, monitoring the data volume of the target data corresponding to the target task request to obtain a target data real-time monitoring result;
judging whether the real-time monitoring result of the target data is larger than a first preset threshold value, and if so, determining that the target task request is the input/output request;
and if the event receiving mode at the current moment is a passive event receiving notification mode, switching the passive event receiving notification mode to the active polling event mode.
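An illustrative C++ sketch of the event receiving module behaviour in claim 17 follows: for a login task event the monitored data volume is compared with the first preset threshold, and when it is exceeded the request is treated as an input/output request and the receiver switches from passive event notification to the active polling event mode; the threshold value and all identifiers are assumptions:

```cpp
// Hypothetical sketch of the event receiving module's mode switch (claim 17).
#include <cstddef>
#include <iostream>

enum class EventMode { PassiveNotification, ActivePolling };
enum class TaskEvent { Login, Other };

constexpr std::size_t kFirstThreshold = 4096;  // assumed bytes

struct EventReceiver {
    EventMode mode = EventMode::PassiveNotification;

    // Returns true if the received request is classified as an I/O request.
    bool receive(TaskEvent ev, std::size_t monitored_bytes) {
        if (ev != TaskEvent::Login) return false;
        if (monitored_bytes <= kFirstThreshold) return false;
        // Classified as an input/output request; ensure active polling mode.
        if (mode == EventMode::PassiveNotification) mode = EventMode::ActivePolling;
        return true;
    }
};

int main() {
    EventReceiver rx;
    bool is_io = rx.receive(TaskEvent::Login, 64 * 1024);
    std::cout << std::boolalpha << is_io << ' '
              << (rx.mode == EventMode::ActivePolling) << '\n';  // true true
}
```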
18. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the input/output request processing method of any one of claims 1 to 16.
19. A non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the input/output request processing method according to any one of claims 1 to 16.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311826299.1A CN117472597B (en) | 2023-12-28 | 2023-12-28 | Input/output request processing method, system, electronic device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311826299.1A CN117472597B (en) | 2023-12-28 | 2023-12-28 | Input/output request processing method, system, electronic device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117472597A (en) | 2024-01-30
CN117472597B (en) | 2024-03-15
Family
ID=89629709
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311826299.1A Active CN117472597B (en) | 2023-12-28 | 2023-12-28 | Input/output request processing method, system, electronic device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117472597B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118502680B (en) * | 2024-07-18 | 2024-10-18 | 济南浪潮数据技术有限公司 | IO scheduling method, electronic device, storage medium and program product |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6789133B1 (en) * | 2001-12-28 | 2004-09-07 | Unisys Corporation | System and method for facilitating use of commodity I/O components in a legacy hardware system |
CN108459826A (en) * | 2018-02-01 | 2018-08-28 | 杭州宏杉科技股份有限公司 | A kind of method and device of processing I/O Request |
CN111796936A (en) * | 2020-06-29 | 2020-10-20 | 平安普惠企业管理有限公司 | Request processing method and device, electronic equipment and medium |
CN111950988A (en) * | 2020-08-18 | 2020-11-17 | 北京字节跳动网络技术有限公司 | Distributed workflow scheduling method and device, storage medium and electronic equipment |
CN115970295A (en) * | 2022-12-09 | 2023-04-18 | 网易(杭州)网络有限公司 | Request processing method and device and electronic equipment |
CN116055429A (en) * | 2023-01-17 | 2023-05-02 | 杭州鸿钧微电子科技有限公司 | Communication data processing method, device, device and storage medium based on PCIE |
Also Published As
Publication number | Publication date |
---|---|
CN117472597A (en) | 2024-01-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10620832B1 (en) | Method and apparatus to abort a command | |
US10261853B1 (en) | Dynamic replication error retry and recovery | |
US7383290B2 (en) | Transaction processing systems and methods utilizing non-disk persistent memory | |
CN112596960B (en) | Distributed storage service switching method and device | |
US11360705B2 (en) | Method and device for queuing and executing operation commands on a hard disk | |
US9026630B2 (en) | Managing resources in a distributed system using dynamic clusters | |
CN101776983B (en) | The synchronous method of information of double controllers in disk array and disc array system | |
US10474496B1 (en) | Dynamic multitasking for distributed storage systems by detecting events for triggering a context switch | |
CN106406981A (en) | Disk data reading/writing method and virtual machine monitor | |
JP2006323826A (en) | System for log writing in database management system | |
US11340806B2 (en) | Meta data processing during startup of storage devices | |
CN114265792A (en) | Flat-based queue configuration for AIPR capable drives | |
CN112039999A (en) | Method and system for accessing distributed block storage system in kernel mode | |
US11429500B2 (en) | Selective utilization of processor cores while rebuilding data previously stored on a failed data storage drive | |
CN117472597B (en) | Input/output request processing method, system, electronic device and storage medium | |
WO2024212783A1 (en) | Data write method and apparatus, and solid-state disk, electronic device and non-volatile readable storage medium | |
US10846094B2 (en) | Method and system for managing data access in storage system | |
WO2024113702A1 (en) | Data storage method and related device | |
CN108733585B (en) | Cache system and related method | |
US12045464B2 (en) | Data read method, data write method, device, and system | |
CN119088724A (en) | Data replay method, data processor, network interface card, device and storage medium | |
US12223171B2 (en) | Metadata processing method to improve data read/write efficiency of a storage device | |
US11662955B2 (en) | Direct memory access data path for RAID storage | |
CN117666931A (en) | A data processing method and related equipment | |
CN114489465A (en) | Method, network device and computer system for data processing using network card |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||