Disclosure of Invention
The invention provides a log processing method, apparatus, system, and chip, as well as an electronic device and a storage medium, which are used to address deficiencies in the related art.
According to a first aspect of the embodiments of the present invention, there is provided a log processing method, applied to a server, including:
receiving a log sent by at least one client, and storing the log into a log storage area, wherein the log storage area is pre-created and has a plurality of storage paths, and logs sent by different clients are stored in different storage paths;
and receiving a query request sent by a host end, and returning state information to the host end according to the log stored in the log storage area.
In combination with any embodiment provided by the present disclosure, the receiving a log sent by at least one client, and storing the log in the log storage area includes:
receiving registration information sent by the client, and creating a service thread corresponding to the client according to the registration information;
and receiving the log sent by the client through the service thread, and storing the log to at least one storage path of the log storage area.
In combination with any one of the embodiments provided by the present disclosure, the receiving, by the service thread, the log sent by the client includes:
and receiving a plurality of log fragments sent by the client through the service thread.
In connection with any embodiment provided by the present disclosure, further comprising:
and receiving report information sent by the client, and closing the service thread corresponding to the client, wherein the report information represents that the running thread corresponding to the client is closed.
In combination with any one of the embodiments provided by the present disclosure, a plurality of the clients concurrently store the log to the corresponding storage path.
In connection with any embodiment provided by the present disclosure, further comprising: and monitoring the log sending request of the client through a socket thread.
In combination with any one of the embodiments provided by the present disclosure, the receiving a query request sent by a host end includes:
receiving a query request sent by the host end through a listening thread;
the listening thread is created in advance and includes at least one of the following: a high-speed serial computer expansion bus standard (PCIe) thread, a PCIe network thread, and an Ethernet thread.
In connection with any embodiment provided by the disclosure, the status information includes at least one of:
memory usage, processor usage, interrupt information, drive information, and log storage usage information.
In connection with any embodiment provided by the present disclosure, further comprising:
acquiring state information of sub-processors of the server;
and sending alarm information to the host end in response to the state information of at least one sub-processor meeting a preset condition.
According to a second aspect of the embodiments of the present invention, there is provided a log processing method, applied to a client, including:
generating a log;
writing the log into a cache queue through a pre-established running thread;
and sending the log to a server in response to the log written into the cache queue meeting a preset condition.
In combination with any one of the embodiments provided by the present disclosure, the sending the log to the server further includes:
sending registration information to the server through the running thread;
the writing of the log into a cache queue through the created running thread comprises:
writing the log segments corresponding to the logs into at least one idle-state first cache path of the cache queue through the running thread, wherein the cache queue is provided with a plurality of first cache paths;
the sending the log to a server in response to the log written into the cache queue meeting the preset condition includes:
and sending the log segments in the first cache path to the server in response to the first cache path being fully written with log segments.
In connection with any embodiment provided by the present disclosure, further comprising:
in response to the first cache path being fully written with log segments, setting the first cache path to a saturated state;
and setting the first cache path to an idle state in response to the log segments in the first cache path being sent to the server.
In connection with any embodiment provided by the present disclosure, further comprising:
and closing the running thread, and sending report information to the server, wherein the report information represents that the running thread corresponding to the client is closed.
In connection with any embodiment provided by the present disclosure, further comprising:
outputting the log to at least one of: hardware terminals and log files;
wherein the log is output to a log file in the following manner:
outputting the log to a second cache path corresponding to the log file;
and in response to the second cache path being fully written with log segments, transferring the log segments in the second cache path to the log file.
According to a third aspect of the embodiments of the present invention, there is provided a log processing method, applied to a host side, including:
sending a query request to a server;
and receiving state information sent by the server, wherein the state information is generated by the server according to the log stored in the log storage area.
In combination with any one of the embodiments provided by the present disclosure, the sending a query request to a server includes:
sending a query request to a listening thread of the server through a channel matched with the listening thread, wherein the listening thread includes at least one of the following: a high-speed serial computer expansion bus standard (PCIe) thread, a PCIe network thread, and an Ethernet thread.
According to a fourth aspect of the embodiments of the present invention, there is provided a log processing server apparatus, including:
the storage module is used for receiving logs sent by at least one client and storing the logs into a log storage area, wherein the log storage area is pre-created and has a plurality of storage paths, and logs sent by different clients are stored in different storage paths;
and the query module is used for receiving a query request sent by the host end and returning state information to the host end according to the log stored in the log storage area.
In combination with any one of the embodiments provided by the present disclosure, the storage module is specifically configured to:
receiving registration information sent by the client, and creating a service thread corresponding to the client according to the registration information;
and receiving the log sent by the client through the service thread, and storing the log to at least one storage path of the log storage area.
In combination with any embodiment provided by the present disclosure, when the storage module is configured to receive, through the service thread, the log sent by the client, the storage module is specifically configured to:
and receiving a plurality of log fragments sent by the client through the service thread.
In combination with any embodiment provided by the present disclosure, further comprising a first shutdown module configured to:
and receiving report information sent by the client, and closing the service thread corresponding to the client, wherein the report information represents that the running thread corresponding to the client is closed.
In combination with any one of the embodiments provided by the present disclosure, a plurality of the clients concurrently store the log to the corresponding storage path.
In conjunction with any embodiment provided by the present disclosure, the system further includes a listening module configured to: and monitoring the log sending request of the client through a socket thread.
In combination with any one of the embodiments provided by the present disclosure, the query module is specifically configured to:
receiving a query request sent by the host end through a listening thread;
the listening thread is created in advance and includes at least one of the following: a high-speed serial computer expansion bus standard (PCIe) thread, a PCIe network thread, and an Ethernet thread.
In connection with any embodiment provided by the disclosure, the status information includes at least one of:
memory usage, processor usage, interrupt information, drive information, and log storage usage information.
In combination with any one of the embodiments provided by the present disclosure, further comprising an alarm module configured to:
acquiring state information of sub-processors of the server;
and sending alarm information to the host end in response to the state information of at least one sub-processor meeting a preset condition.
According to a fifth aspect of an embodiment of the present invention, there is provided a log processing client apparatus including:
the generating module is used for generating a log;
the cache module is used for writing the log into a cache queue through a pre-established running thread;
and the sending module is used for responding to the condition that the log written into the cache queue meets the preset condition and sending the log to the server.
In conjunction with any embodiment provided by the present disclosure, the system further includes a registration module configured to: sending registration information to the server through the running thread;
the cache module is specifically configured to:
writing the log segments corresponding to the logs into at least one idle-state first cache path of the cache queue through the running thread, wherein the cache queue is provided with a plurality of first cache paths;
the sending module is specifically configured to:
and sending the log segments in the first cache path to the server in response to the first cache path being fully written with log segments.
In combination with any one of the embodiments provided by the present disclosure, further comprising a setting module configured to:
in response to the first cache path being fully written with log segments, setting the first cache path to a saturated state;
and setting the first cache path to an idle state in response to the log segments in the first cache path being sent to the server.
In connection with any embodiment provided by the present disclosure, further comprising a second shutdown module configured to:
and closing the running thread, and sending report information to the server, wherein the report information represents that the running thread corresponding to the client is closed.
In combination with any one of the embodiments provided by the present disclosure, further comprising an output module configured to:
outputting the log to at least one of: hardware terminals and log files;
wherein the log is output to a log file in the following manner:
outputting the log to a second cache path corresponding to the log file;
and in response to the second cache path being fully written with log segments, transferring the log segments in the second cache path to the log file.
According to a sixth aspect of the embodiments of the present invention, there is provided a log processing host apparatus, including:
the request module is used for sending a query request to the server;
and the state module is used for receiving the state information sent by the server, wherein the state information is generated by the server according to the log stored in the log storage area.
In combination with any one of the embodiments provided by the present disclosure, the request module is specifically configured to:
sending a query request to a listening thread of the server through a channel matched with the listening thread, wherein the listening thread includes at least one of the following: a high-speed serial computer expansion bus standard (PCIe) thread, a PCIe network thread, and an Ethernet thread.
According to a seventh aspect of an embodiment of the present invention, there is provided a log processing system including:
the client is used for generating a log, writing the log into a cache queue through a pre-established running thread, and sending the log to the server in response to the log written into the cache queue meeting a preset condition;
the server is used for receiving logs sent by at least one client, storing the logs into a log storage area, receiving a query request sent by a host, and returning state information to the host according to the logs stored in the log storage area, wherein the log storage area is pre-created and has a plurality of storage paths, and logs sent by different clients are stored in different storage paths;
and the host side is used for sending a query request to the server side and receiving the state information sent by the server side, wherein the state information is generated by the server side according to the logs stored in the log storage area.
According to an eighth aspect of the embodiments of the present invention, there is provided a chip, including: a plurality of client modules and/or server modules;
the client module is used for generating a log, writing the log into a cache queue through a pre-established running thread, and sending the log to the server module in response to the fact that the log written into the cache queue meets a preset condition;
the server module is used for receiving logs sent by at least one client module, storing the logs into a log storage area, receiving a query request sent by a host, and returning state information to the host according to the logs stored in the log storage area, wherein the log storage area is pre-created and has a plurality of storage paths, and logs sent by different client modules are stored in different storage paths.
According to a ninth aspect of embodiments of the present invention, there is provided an electronic device, the device comprising a memory for storing computer instructions executable on a processor, the processor being configured to implement the method of the first, second or third aspect when executing the computer instructions.
According to a tenth aspect of embodiments of the present invention, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of the first, second or third aspect.
According to the above embodiment, a log sent by at least one client is first received and stored into a pre-created log storage area, with logs sent by different clients stored in different storage paths; then, a query request sent by the host is received, and state information is returned to the host according to the log stored in the log storage area. Because the log storage area has a plurality of storage paths and logs sent by different clients are stored in different storage paths, the processing efficiency of the logs can be improved; in addition, state information can be returned in response to the query request of the host, which further improves the processing efficiency of the logs.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present invention. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
In a first aspect, at least one embodiment of the present invention provides a log processing method applied to a server; please refer to fig. 1, which illustrates a flow of the method including steps S101 and S102.
The method may be performed by an electronic device such as a terminal device or a server, where the terminal device may be a user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling computer-readable instructions stored in a memory. Alternatively, the method may be performed by a server, which may be a local server, a cloud server, or the like.
In step S101, a log sent by at least one client is received, and the log is stored in the log storage area, where the log storage area is created in advance and has a plurality of storage paths, and logs sent by different clients are stored in different storage paths.
For example, the log storage area may be a storage space in a hard disk, in which case each storage path points to a subspace of that storage space. The log storage area may also be a circular queue in a cache, in which case each storage path is a buffer in the circular queue. Creating the log storage area may consist of dividing a storage space from the hard disk or the cache and marking it as the log storage area.
The plurality of storage paths may have a certain order or priority: when data is written into the log storage area, it is written first into the storage path with the earliest order or highest priority (and in an accessible state), and then into the storage path with the next order or lower priority (and in an accessible state). The storage paths may instead be unordered or of equal priority, in which case a storage path in an accessible state is selected at random from the log storage area for writing.
The client can generate corresponding logs while running, so as to record the data processing, information interaction and other processes during its operation, as well as information such as memory occupation, processor usage, interrupt information, drive information and cache information. Clients run on different modules, such as a hardware driver module, a video processing module and a communication module, so the log information of the clients includes the state information of each module.
When the log of a client is being written into a storage path of the log storage area, that storage path is occupied by the client and is in an inaccessible state: other clients cannot access it, that is, cannot perform a log writing operation on it. When writing their logs into the log storage area, other clients select an unoccupied storage path, that is, one in an accessible state.
It should be noted that a plurality of clients may concurrently store their logs to the corresponding storage paths.
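By way of a non-limiting illustration, the following C++ sketch shows one possible way to manage such shared storage paths, with priority-based selection and an accessible/occupied flag protected by a lock; the names StoragePath, LogStorageArea, acquire_path and release_path are assumptions introduced here for illustration only.

```cpp
#include <mutex>
#include <optional>
#include <utility>
#include <vector>

// Illustrative sketch only: one entry per storage path of the log storage area.
struct StoragePath {
    int priority = 0;       // lower value = written first (for the ordered variant)
    bool accessible = true; // false while another client occupies the path
};

class LogStorageArea {
public:
    explicit LogStorageArea(std::vector<StoragePath> paths) : paths_(std::move(paths)) {}

    // Returns the index of an accessible path with the highest priority,
    // or std::nullopt if every path is currently occupied.
    std::optional<std::size_t> acquire_path() {
        std::lock_guard<std::mutex> lock(mu_);
        std::optional<std::size_t> best;
        for (std::size_t i = 0; i < paths_.size(); ++i) {
            if (!paths_[i].accessible) continue;
            if (!best || paths_[i].priority < paths_[*best].priority) best = i;
        }
        if (best) paths_[*best].accessible = false; // mark as occupied by one client
        return best;
    }

    // Returns the path to the accessible state once the client's write is done.
    void release_path(std::size_t index) {
        std::lock_guard<std::mutex> lock(mu_);
        paths_[index].accessible = true;
    }

private:
    std::mutex mu_;
    std::vector<StoragePath> paths_;
};
```

Under this sketch, concurrent clients each acquire a different path, matching the accessible/inaccessible states described above.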
In step S102, a query request sent by the host is received, and status information is returned to the host according to the log stored in the log storage area.
After receiving the query request, the server determines the query items according to the query item information, searches the logs stored in the log storage area for information related to each query item, and obtains a query result for each query item; the state information may include these query results.
The status information may include at least one of: memory usage, processor usage, interrupt information, drive information, and log storage usage information.
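As a rough sketch of how such status information might be assembled from the stored logs, assuming a simple aggregate structure and a keyword-based scan (the StatusInfo fields and the build_status helper are hypothetical names, not taken from the embodiments):

```cpp
#include <string>
#include <vector>

// Hypothetical aggregate of the query results built from the stored logs.
struct StatusInfo {
    double memory_usage = 0.0;      // e.g. fraction of memory in use
    double processor_usage = 0.0;   // e.g. fraction of processor time in use
    std::vector<std::string> interrupt_info;
    std::vector<std::string> drive_info;
    double log_storage_usage = 0.0; // fraction of the log storage area in use
};

// Scans the stored log lines for entries related to the requested query items
// and collects them into the status information (illustrative logic only).
StatusInfo build_status(const std::vector<std::string>& log_lines,
                        const std::vector<std::string>& query_items) {
    StatusInfo status;
    for (const std::string& line : log_lines) {
        for (const std::string& item : query_items) {
            if (line.find(item) == std::string::npos) continue;
            if (item == "interrupt") status.interrupt_info.push_back(line);
            else if (item == "drive") status.drive_info.push_back(line);
            // Numeric items such as memory or processor usage would be parsed
            // from the matching line here.
        }
    }
    return status;
}
```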
After receiving the status information, the host can present it to the user by displaying it, or can send it to an interface that needs the status information.
According to the above embodiment, a log sent by at least one client is first received and stored into a pre-created log storage area, with logs sent by different clients stored in different storage paths; then, a query request sent by the host is received, and state information is returned to the host according to the log stored in the log storage area. Because the log storage area has a plurality of storage paths and logs sent by different clients are stored in different storage paths, the processing efficiency of the logs can be improved; in addition, state information can be returned in response to the query request of the host, which further improves the processing efficiency of the logs.
In some embodiments of the present disclosure, the server may receive a log sent by at least one client, and store the log in the log storage area, according to the following manner: firstly, receiving registration information sent by the client, and creating a service thread corresponding to the client according to the registration information; and then, receiving the log sent by the client through the service thread, and storing the log to at least one storage path of the log storage area.
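A minimal sketch of this registration flow is given below, assuming one service thread per registered client and a stop flag driven by the report information described later; the LogServer, RegistrationInfo and related names are illustrative assumptions rather than part of the embodiments.

```cpp
#include <atomic>
#include <chrono>
#include <memory>
#include <mutex>
#include <string>
#include <thread>
#include <unordered_map>
#include <utility>

// Hypothetical registration payload sent by a client.
struct RegistrationInfo {
    std::string client_id;
};

// Illustrative sketch: one service thread per registered client.
class LogServer {
public:
    // Creates a service thread dedicated to the registering client.
    void handle_registration(const RegistrationInfo& info) {
        auto stop = std::make_shared<std::atomic<bool>>(false);
        std::thread worker([stop] {
            while (!stop->load()) {
                // Receive the next log segment from this client and store it
                // into at least one storage path of the log storage area.
                std::this_thread::sleep_for(std::chrono::milliseconds(10)); // placeholder for a blocking receive
            }
        });
        ClientEntry entry{std::move(worker), stop};
        std::lock_guard<std::mutex> lock(mu_);
        clients_.emplace(info.client_id, std::move(entry));
        // A response confirming that the service thread was created
        // would be returned to the client here.
    }

    // Closes the service thread when the client reports that its
    // running thread has been shut down.
    void handle_report(const std::string& client_id) {
        std::lock_guard<std::mutex> lock(mu_);
        auto it = clients_.find(client_id);
        if (it == clients_.end()) return;
        it->second.stop->store(true);
        if (it->second.worker.joinable()) it->second.worker.join();
        clients_.erase(it);
    }

private:
    struct ClientEntry {
        std::thread worker;
        std::shared_ptr<std::atomic<bool>> stop;
    };
    std::mutex mu_;
    std::unordered_map<std::string, ClientEntry> clients_;
};
```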
In addition, the server can also return response information to the client, wherein the response information represents that the service thread is successfully created.
The service thread corresponding to the client can establish stable communication connection with the client, for example, the service thread can establish communication connection with the running thread of the client, so that log transmission is facilitated, and the efficiency of log transmission is improved.
After the server side creates the service thread corresponding to the client side, response information representing that the service thread is successfully created can be returned to the client side.
When the client sends the log to the service thread, it may send log segments to the service thread successively, so that what the service thread receives is a plurality of log segments sent by the client. The reason is that the log is generated continuously: transmitting it in real time places a high demand on transmission stability, while waiting until a complete log has been formed before transmitting it reduces the validity of the log information. Therefore, a number of nodes are set in the log generation process, and one log segment is transmitted at each node.
Optionally, when the client runs, a cache queue having a plurality of first cache paths may be created, and each first cache path is set to an idle state; then creating a running thread, wherein the running thread sends registration information to the server so that the server creates a corresponding service thread; then the running thread writes the log into at least one first cache path in an idle state of the cache queue, and after the first cache path is completely written by the log segment, the first cache path is set to be in a saturated state; and then the running thread receives response information sent by the server, sends the log fragments in the first cache path in the saturated state to the server, and sets the first cache path to be in an idle state, wherein the response information represents that the server has successfully created the service thread corresponding to the client.
In addition, the log sending request of the client can also be monitored through a socket thread; that is, after a log sending request from the client is detected, the log sent by the client is received.
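For illustration, a socket thread listening for log sending requests might resemble the following POSIX-socket sketch; the port parameter and the connection handler are placeholders, not details taken from the embodiments.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <thread>

// Illustrative accept loop run by the socket thread: each incoming connection
// is treated as a log sending request and handed to a handler.
void socket_listen_loop(int port, void (*handle_connection)(int client_fd)) {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (listen_fd < 0) return;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(static_cast<uint16_t>(port));

    if (bind(listen_fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0 ||
        listen(listen_fd, SOMAXCONN) < 0) {
        close(listen_fd);
        return;
    }

    for (;;) {
        int client_fd = accept(listen_fd, nullptr, nullptr);
        if (client_fd < 0) continue;
        // Each accepted connection carries a log sending request; handle it
        // (e.g. receive log segments) on its own thread.
        std::thread(handle_connection, client_fd).detach();
    }
}
```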
When the client stops running and the running thread of the client is closed, report information can be sent to the server to report that the running thread of the client has been closed. After receiving the report information sent by the client, the server can close the service thread corresponding to that client.
In some embodiments of the present disclosure, the server may receive the query request sent by the host end in the following manner: receiving the query request sent by the host end through a listening thread, where the listening thread is created in advance and includes at least one of the following: a high-speed serial computer expansion bus standard (PCIe) thread, a PCIe network (PCIe net) thread, and an Ethernet thread.
When the host sends the query request to the listening thread, the query request may be sent through a channel matched with the listening thread of the server; for example, the query request may be sent to the PCIe thread through a PCIe channel, to the PCIe network thread through a PCIe network channel, and to the Ethernet thread through an Ethernet channel.
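One possible organization of such per-channel listening threads is sketched below under the assumption of a generic polling callback; the ChannelType values and the start_listening_threads name are illustrative only.

```cpp
#include <functional>
#include <thread>
#include <vector>

// Hypothetical channel types matching the listening threads described above.
enum class ChannelType { PcieStandard, PcieNet, Ethernet };

// Spawns one listening thread per channel; each thread repeatedly waits for a
// query request on its channel and forwards it to the query handler.
std::vector<std::thread> start_listening_threads(
    const std::vector<ChannelType>& channels,
    const std::function<void(ChannelType)>& poll_channel_once) {
    std::vector<std::thread> threads;
    for (ChannelType channel : channels) {
        threads.emplace_back([channel, poll_channel_once] {
            for (;;) {
                // Wait for and handle one query request arriving on this
                // channel (PCIe, PCIe network, or Ethernet).
                poll_channel_once(channel);
            }
        });
    }
    return threads;
}
```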
Receiving the query request through a listening thread achieves both high speed and high reliability.
In some embodiments of the present disclosure, the server provided in the embodiments of the present application may be applied to a chip. The chip may include a plurality of sub-processors, and the server may also monitor the status information of the sub-processors; once an abnormality occurs in a certain sub-processor, the abnormality may be reported to the host. The log processing method can monitor the state of the sub-processors of the server in the following way: first, acquiring the state information of the sub-processors of the server; and then, sending alarm information to the host in response to the state information of at least one sub-processor meeting a preset condition.
By monitoring the state information of each sub-processor in real time, the host side can be informed when a sub-processor fails, so that the failure can be handled in a timely manner.
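A simple sketch of such sub-processor monitoring is given below, assuming a periodic polling loop and an illustrative preset condition (unresponsiveness or over-temperature), neither of which is specified by the embodiments.

```cpp
#include <chrono>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Hypothetical per-sub-processor status snapshot.
struct SubProcessorStatus {
    std::size_t id = 0;
    double temperature_c = 0.0;
    bool responsive = true;
};

// Periodically polls the sub-processors and sends alarm information to the
// host side when any status meets the (illustrative) preset condition.
void monitor_sub_processors(
    const std::function<std::vector<SubProcessorStatus>()>& read_statuses,
    const std::function<void(const SubProcessorStatus&)>& send_alarm_to_host) {
    for (;;) {
        for (const SubProcessorStatus& status : read_statuses()) {
            // Preset condition chosen only for illustration.
            if (!status.responsive || status.temperature_c > 95.0) {
                send_alarm_to_host(status);
            }
        }
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}
```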
In a second aspect, at least one embodiment of the present invention provides a log processing method applied to a client, please refer to fig. 2, which shows a flow of the method, including steps S201 to S203.
The method may be performed by an electronic device such as a terminal device or a server, where the terminal device may be a user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling computer-readable instructions stored in a memory. Alternatively, the method may be performed by a server, which may be a local server, a cloud server, or the like.
In step S201, a log is generated.
The client can generate corresponding logs while running, so as to record the data processing, information interaction and other processes during its operation, as well as information such as memory occupation, processor usage, interrupt information, drive information and cache information. Clients run on different modules, such as a hardware driver module, a video processing module and a communication module, so the log information of the clients includes the state information of each module.
In step S202, the log is written into the cache queue by the pre-created running thread.
The log segments corresponding to the logs can be written into at least one idle-state first cache path of the cache queue through the running thread, wherein the cache queue has a plurality of first cache paths. The cache queue is created in advance, and after the cache queue is created, registration information can be sent to the server through the running thread, so that the server receives the registration information and then creates a corresponding service thread, and further log transmission with the client is realized.
The running thread may be, for example, a socket thread.
In step S203, in response to that the log written into the cache queue meets a preset condition, the log is sent to a server.
The transmission of the log can be completed by sending the log segments to the server one by one. After receiving the log, the server stores it into a corresponding storage path for querying by the host.
For example, the log segments in the first cache path may be sent to the server in response to the first cache path being fully written with log segments. After the first cache path has been fully written with log segments, it can be set to a saturated state; when the running thread detects a first cache path in the saturated state, it sends the log segments in that path to the server. After the log segments in the saturated first cache path have been sent to the server, the path can be set back to the idle state, that is, it returns to a state in which log segments can be written into it. Through this alternation between the idle state and the saturated state, the first cache paths are reused, and the transmission between the running thread and the service thread is both efficient and reliable.
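The idle/saturated switching of the first cache paths can be pictured with the following sketch, in which the capacity, state names and method names are assumptions made for illustration only.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Illustrative sketch of the first cache paths and their idle/saturated states.
enum class PathState { Idle, Saturated };

struct FirstCachePath {
    std::vector<std::string> segments; // log segments buffered in this path
    std::size_t capacity = 64;         // segments that fit before the path is "full"
    PathState state = PathState::Idle;
};

class ClientCacheQueue {
public:
    explicit ClientCacheQueue(std::size_t num_paths) : paths_(num_paths) {}

    // Running thread: write a log segment into some idle path; mark the path
    // saturated once it has been fully written.
    void write_segment(const std::string& segment) {
        for (FirstCachePath& path : paths_) {
            if (path.state != PathState::Idle) continue;
            path.segments.push_back(segment);
            if (path.segments.size() >= path.capacity) path.state = PathState::Saturated;
            return;
        }
        // All paths saturated: in a fuller implementation the caller would
        // wait until a path returns to the idle state.
    }

    // Running thread: send the segments of every saturated path to the server
    // and return the path to the idle state so it can be reused.
    template <typename SendFn>
    void flush_saturated(SendFn send_to_server) {
        for (FirstCachePath& path : paths_) {
            if (path.state != PathState::Saturated) continue;
            send_to_server(path.segments);
            path.segments.clear();
            path.state = PathState::Idle;
        }
    }

private:
    std::vector<FirstCachePath> paths_;
};
```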
In one embodiment, the client may send the log to the server in the following manner: firstly, creating a cache queue with a plurality of first cache paths, and setting each first cache path to be in an idle state; then, creating a running thread, wherein the running thread sends registration information to the server; next, the running thread writes the log into at least one idle first cache path of the cache queue, and after the first cache path is completely written by the log segment, the first cache path is set to be in a saturated state; and finally, the running thread receives response information sent by the server, sends the log fragments in the first cache path in the saturated state to the server, and sets the first cache path to be in an idle state, wherein the response information represents that the server has successfully created the service thread corresponding to the client.
After receiving the registration information sent by the running thread, the server can create a service thread corresponding to the client according to the registration information and return response information to the client.
When the client stops running and the running thread of the client is closed, report information can be sent to the server to report that the running thread of the client has been closed. After receiving the report information sent by the client, the server can close the service thread corresponding to that client.
In some embodiments of the present disclosure, after the log is generated, the log may be further output to at least one of the following locations: hardware terminals and log files.
The hardware terminal may be video processing hardware such as a camera or a display; sending the log to such hardware enables it to execute corresponding tasks according to the operation of the client.
The log file is a formatted file for storing the log and has a corresponding cache path (which may be referred to as a second cache path, used to denote the cache path corresponding to the log file). On this basis, the log can be output to the log file in the following way: the log is first output to the second cache path, and then, in response to the second cache path being fully written with log segments, the log segments in the second cache path are transferred to the log file. In other words, the log is written into the second cache path, and the second cache path in turn transfers the log to the log file, thereby completing the output of the log to the log file.
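A minimal sketch of this second cache path is shown below, assuming a fixed segment capacity and an append-mode file write (both illustrative choices, not specified by the embodiments).

```cpp
#include <cstddef>
#include <fstream>
#include <string>
#include <utility>
#include <vector>

// Illustrative second cache path: log segments are buffered here and appended
// to the log file only when the buffer has been fully written.
class SecondCachePath {
public:
    SecondCachePath(std::string log_file, std::size_t capacity)
        : log_file_(std::move(log_file)), capacity_(capacity) {}

    // Output one log segment to the second cache path.
    void output(const std::string& segment) {
        buffer_.push_back(segment);
        if (buffer_.size() >= capacity_) flush_to_file();
    }

private:
    // Transfers the buffered log segments into the log file and empties the buffer.
    void flush_to_file() {
        std::ofstream out(log_file_, std::ios::app);
        for (const std::string& segment : buffer_) out << segment << '\n';
        buffer_.clear();
    }

    std::string log_file_;
    std::size_t capacity_;
    std::vector<std::string> buffer_;
};
```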
In a third aspect, at least one embodiment of the present invention provides a log processing method applied to a host, and referring to fig. 3, a flow of the method is shown, which includes steps S301 to S302.
The method may be performed by an electronic device such as a terminal device or a server, where the terminal device may be a user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling computer-readable instructions stored in a memory. Alternatively, the method may be performed by a server, which may be a local server, a cloud server, or the like.
In step S301, an inquiry request is sent to the server.
The host side can send a query request to the server side in the following manner: sending the query request to a listening thread of the server through a channel matched with the listening thread, wherein the listening thread includes at least one of the following: a high-speed serial computer expansion bus standard (PCIe) thread, a PCIe network thread, and an Ethernet thread.
In step S302, the status information sent by the server is received.
The state information is generated by the server according to the log stored in the log storage area.
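From the host side the interaction reduces to sending a query and receiving the status information; the following sketch assumes a generic transport callback standing in for the PCIe, PCIe network or Ethernet channel, and the QueryRequest and StatusReply types are hypothetical.

```cpp
#include <functional>
#include <string>
#include <vector>

// Hypothetical query/response types used by the host side.
struct QueryRequest {
    std::vector<std::string> items; // e.g. "memory", "processor", "interrupt"
};

struct StatusReply {
    std::string text; // status information rendered by the server
};

// The host side sends the query over a channel matched with one of the
// server's listening threads and waits for the status information.
StatusReply query_server(
    const QueryRequest& request,
    const std::function<StatusReply(const QueryRequest&)>& send_over_channel) {
    // send_over_channel is an assumed transport (PCIe, PCIe network, or Ethernet).
    return send_over_channel(request);
}
```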
Referring to fig. 4, consider the following chip application scenario: various hardware driver modules run on the chip, such as a video processing module and a communication module, and these modules generate logs during operation. The chip can provide a log processing service, through which the status information of each hardware driver module is collected, and the host where the chip is located can query the logs of the various modules running on the chip through the log processing service of the chip. In such a scenario, the log processing method provided by the present application may treat the chip as the server, treat each hardware driver module as one of a plurality of clients, and treat the host where the chip is located as the host side.
A complete flow of the log processing method, covering the client, the server and the host, is described in detail below with reference to fig. 4.
First, in S401, the client generates a log. In S402, the client outputs the log to at least one of the following: a hardware terminal and a log file. In S403, the client creates a cache queue having a plurality of first cache paths and sets each first cache path to an idle state. In S404, the client creates a running thread, and the running thread sends registration information to the server.
Next, S405-1 and S405-2 are executed. Since S405-1 is executed by the server and S405-2 by the client, there is no fixed order between them: either may be executed first, or both may be executed simultaneously. In S405-1, the server receives the registration information sent by the client, creates a service thread corresponding to the client according to the registration information, and returns response information to the client. In S405-2, the running thread writes the log into at least one first cache path of the cache queue that is in the idle state, and the first cache path is set to a saturated state after it has been fully written with log segments.
In S406, the running thread receives the response information sent by the server, sends the log segments in the saturated first cache path to the server, and sets that first cache path back to the idle state. In S407, the service thread receives the log sent by the client and stores it to at least one storage path of the log storage area. In S408, the client closes the running thread and sends report information to the server, the report information indicating that the running thread corresponding to the client has been closed. In S409, the server receives the report information sent by the client and closes the service thread corresponding to the client.
In S410, the server creates a listening thread, the listening thread including at least one of the following: a high-speed serial computer expansion bus standard (PCIe) thread, a PCIe network thread, and an Ethernet thread. In S411, the host sends a query request to the listening thread through a channel matched with the listening thread of the server. In S412, the server receives the query request sent by the host and returns state information to the host according to the logs stored in the log storage area. In S413, the host receives the state information sent by the server. In S414, the server acquires the state information of its sub-processors. In S415, the server sends alarm information to the host in response to the state information of at least one sub-processor meeting a preset condition.
According to a fourth aspect of the embodiments of the present invention, there is provided a log processing server apparatus, please refer to fig. 5, which shows a schematic structural diagram of the apparatus, including:
the storage module 501 is configured to receive a log sent by at least one client, and store the log into the log storage area, where the log storage area is pre-created and has multiple storage paths, and logs sent by different clients are stored in different storage paths;
the query module 502 is configured to receive a query request sent by a host, and return status information to the host according to the log stored in the log storage area.
In combination with any one of the embodiments provided by the present disclosure, the storage module is specifically configured to:
receiving registration information sent by the client, and creating a service thread corresponding to the client according to the registration information;
and receiving the log sent by the client through the service thread, and storing the log to at least one storage path of the log storage area.
In combination with any embodiment provided by the present disclosure, when the storage module is configured to receive, through the service thread, the log sent by the client, the storage module is specifically configured to:
and receiving a plurality of log fragments sent by the client through the service thread.
In combination with any embodiment provided by the present disclosure, further comprising a first shutdown module configured to:
and receiving report information sent by the client, and closing the service thread corresponding to the client, wherein the report information represents that the running thread corresponding to the client is closed.
In combination with any one of the embodiments provided by the present disclosure, a plurality of the clients concurrently store the log to the corresponding storage path.
In conjunction with any embodiment provided by the present disclosure, the system further includes a listening module configured to: and monitoring the log sending request of the client through a socket thread.
In combination with any one of the embodiments provided by the present disclosure, the query module is specifically configured to:
receiving a query request sent by the host end through a listening thread;
the listening thread is created in advance and includes at least one of the following: a high-speed serial computer expansion bus standard (PCIe) thread, a PCIe network thread, and an Ethernet thread.
In connection with any embodiment provided by the disclosure, the status information includes at least one of:
memory usage, processor usage, interrupt information, drive information, and log storage usage information.
In combination with any one of the embodiments provided by the present disclosure, further comprising an alarm module configured to:
acquiring state information of sub-processors of the server;
and sending alarm information to the host end in response to the state information of at least one sub-processor meeting a preset condition.
According to a fifth aspect of the embodiments of the present invention, there is provided a log processing client apparatus, please refer to fig. 6, which shows a schematic structural diagram of the apparatus, including:
a generating module 601, configured to generate a log;
the cache module 602 is configured to write the log into a cache queue through the created running thread;
and the sending module is used for responding to the condition that the log written into the cache queue meets the preset condition and sending the log to the server.
In conjunction with any embodiment provided by the present disclosure, the system further includes a registration module configured to: sending registration information to the server through the running thread;
the cache module is specifically configured to:
writing the log segments corresponding to the logs into at least one idle-state first cache path of the cache queue through the running thread, wherein the cache queue is provided with a plurality of first cache paths;
the sending module is specifically configured to:
and sending the log segments in the first cache path to the server in response to the first cache path being fully written with log segments.
In combination with any one of the embodiments provided by the present disclosure, further comprising a setting module configured to:
in response to the first cache path being fully written with log segments, setting the first cache path to a saturated state;
and setting the first cache path to an idle state in response to the log segments in the first cache path being sent to the server.
In connection with any embodiment provided by the present disclosure, further comprising a second shutdown module configured to:
and closing the running thread, and sending report information to the server, wherein the report information represents that the running thread corresponding to the client is closed.
In combination with any one of the embodiments provided by the present disclosure, further comprising an output module configured to:
outputting the log to at least one of: hardware terminals and log files;
wherein the log is output to a log file in the following manner:
outputting the log to a second cache path corresponding to the log file;
and in response to the second cache path being fully written with log segments, transferring the log segments in the second cache path to the log file.
According to a sixth aspect of the embodiments of the present invention, there is provided a log processing host apparatus, please refer to fig. 7, which shows a schematic structural diagram of the apparatus, including:
a request module 701, configured to send a query request to a server;
a status module 702, configured to receive status information sent by the server, where the status information is generated by the server according to a log stored in a log storage area.
In combination with any one of the embodiments provided by the present disclosure, the request module is specifically configured to:
sending a query request to a listening thread of the server through a channel matched with the listening thread, wherein the listening thread includes at least one of the following: a high-speed serial computer expansion bus standard (PCIe) thread, a PCIe network thread, and an Ethernet thread.
According to a seventh aspect of the embodiments of the present invention, there is provided a log processing system, please refer to fig. 8, which shows a schematic structural diagram of the system, including:
the client 801 is used for generating a log, writing the log into a cache queue through the created running thread, and sending the log to the server in response to the log written into the cache queue meeting a preset condition;
the server 802 is configured to receive logs sent by at least one client, store the logs into a log storage area, receive a query request sent by the host, and return status information to the host according to the logs stored in the log storage area, where the log storage area is pre-created and has multiple storage paths, and logs sent by different clients are stored in different storage paths;
the host 803 is configured to send a query request to a server and receive state information sent by the server, where the state information is generated by the server according to a log stored in a log storage area.
According to an eighth aspect of the embodiments of the present invention, there is provided a chip, please refer to fig. 9, which shows a schematic structural diagram of the chip, including: a plurality of client modules 901 and/or server modules 902;
the client module 901 is configured to generate a log, write the log into a cache queue through the created running thread, and send the log to the server module in response to that the log written into the cache queue meets a preset condition;
the server module 902 is configured to receive a log sent by at least one client module, store the log in a log storage area, receive a query request sent by a host, and return status information to the host according to the log stored in the log storage area, where the log storage area is pre-created and has multiple storage paths, and logs sent by different client modules are stored in different storage paths.
The chip provided by the embodiment of the application can comprise an AI chip.
In a ninth aspect, at least one embodiment of the present invention provides an electronic device, please refer to fig. 10, which shows a structure of the device, where the device includes a memory for storing computer instructions executable on a processor, and the processor is configured to process a log based on the method of the first aspect, the second aspect, or the third aspect when executing the computer instructions.
In a tenth aspect, at least one embodiment of the invention provides a computer readable storage medium having a computer program stored thereon, which when executed by a processor, performs the method of the first, second or third aspect.
In the present invention, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The term "plurality" means two or more unless expressly limited otherwise.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.