CN111723727A - Cloud monitoring method and device based on edge computing, electronic equipment and storage medium - Google Patents
- Publication number
- CN111723727A (application number CN202010557709.7A)
- Authority
- CN
- China
- Prior art keywords
- media frame
- model
- monitoring
- edge
- cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
      - G06V20/00—Scenes; Scene-specific elements
        - G06V20/40—Scenes; Scene-specific elements in video content
          - G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
          - G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N7/00—Television systems
        - H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Abstract
The invention relates to cloud monitoring technology and discloses a cloud monitoring method based on edge computing, which comprises the following steps: receiving a media stream data packet sent by a video recording device; parsing the media stream data packet in sequence to obtain a media frame queue of a preset type and adding the media frame queue to a preset cache space; reading media frames to be processed from the media frame queue; analyzing the media frames to be processed with a pre-established analysis model to obtain characteristic information corresponding to each media frame to be processed; searching the characteristic information for entries that meet preset conditions and taking the media frames to be processed that correspond to the found characteristic information as target media frames; and finally generating a monitoring analysis result based on the target media frames and sending it to a cloud device. The invention also relates to blockchain technology: the monitoring analysis result can be stored in a blockchain. The invention improves monitoring and analysis efficiency.
Description
Technical Field
The present invention relates to cloud monitoring technologies, and in particular, to a cloud monitoring method and apparatus based on edge computing, an electronic device, and a computer-readable storage medium.
Background
A monitoring system is the physical foundation for real-time monitoring of specific targets (such as particular places, people and articles) in various industries. Through a monitoring system, management departments can obtain effective data, image, video or sound information and promptly monitor and record the course of sudden abnormal events. However, conventional monitoring systems still require considerable manpower to identify and analyze the monitoring data, which results in low efficiency in obtaining monitoring analysis results.
Therefore, how to improve monitoring and analysis efficiency has become an urgent technical problem to be solved.
Disclosure of Invention
The invention mainly aims to provide a cloud monitoring method and apparatus based on edge computing, an electronic device and a computer-readable storage medium, so as to improve monitoring and analysis efficiency.
In order to achieve the above object, the present invention provides a cloud monitoring method based on edge computing, which is applied to an edge device, and includes:
receiving a media stream data packet sent by a video recording device;
analyzing the media stream data packets according to the receiving sequence to obtain a media frame queue of a preset type, and adding the media frame queue to a preset buffer space;
reading a media frame to be processed from the media frame queue;
analyzing each to-be-processed media frame by utilizing a pre-established analysis model to obtain characteristic information corresponding to each to-be-processed media frame, searching the characteristic information meeting preset conditions in the characteristic information, and taking the to-be-processed media frame corresponding to the searched characteristic information as a target media frame;
and generating a monitoring analysis result based on the target media frame, and sending the monitoring analysis result to the cloud end equipment.
Optionally, before the receiving of the media stream data packet sent by the video recording device, the method includes:
sending a registration request carrying equipment identification information to the cloud end equipment, so that the cloud end equipment executes registration operation based on the equipment identification information;
receiving equipment operation parameters, model reasoning parameters and analysis models which are sent by the cloud equipment after the registration operation is finished;
configuring operation parameters based on the equipment operation parameters;
and saving the analysis model, and generating and saving the preset condition for searching the characteristic information based on the model reasoning parameter.
Optionally, when the target media frame is a video frame, the generating a monitoring analysis result based on the target media frame includes:
storing the target media frame as a screenshot, and generating the monitoring analysis result based on the screenshot;
or intercepting a media frame sub-queue containing the target media frame from the media frame queue, and generating the monitoring analysis result based on the media frame sub-queue.
Optionally, the pre-established analysis model includes one or more of a face recognition model, a voiceprint recognition model, a character recognition model, and an object recognition model.
In order to solve the above problem, the present invention further provides a cloud monitoring method based on edge computing, which is applied to a cloud device and includes:
receiving and storing a monitoring analysis result sent by the edge device, wherein the monitoring analysis result is obtained by analyzing a media stream data packet sent by the video recording device by the edge device;
responding to a query request carrying query conditions sent by a user terminal, and querying monitoring analysis results meeting the query conditions from the stored monitoring analysis results to obtain query results;
and sending the query result to the user terminal.
Optionally, before the receiving and storing the monitoring analysis result sent by the edge device, the method includes:
receiving a registration request sent by the edge device, wherein the registration request comprises device identification information;
performing a registration operation based on the device identification information;
and after the registration operation is finished, sending the predetermined equipment operation parameters, the model reasoning parameters and the analysis model to the edge equipment.
Optionally, the method further comprises:
receiving a model training request sent by a user terminal, wherein the model training request comprises a first preset training data set;
and obtaining an analysis model trained in advance on a second preset training data set, and training the analysis model with the first preset training data set to obtain a new analysis model.
In order to solve the above problem, the present invention further provides an edge computing-based cloud monitoring apparatus, including:
the receiving module is used for receiving the media stream data packet sent by the shooting and recording equipment;
the buffer module is used for analyzing the media stream data packets according to the receiving sequence to obtain a media frame queue of a preset type and adding the media frame queue to a preset buffer space;
the extraction module is used for reading the media frame to be processed from the media frame queue;
the analysis module is used for analyzing and processing each to-be-processed media frame by utilizing a pre-established analysis model to obtain characteristic information corresponding to each to-be-processed media frame, searching the characteristic information meeting preset conditions in the characteristic information, and taking the to-be-processed media frame corresponding to the searched characteristic information as a target media frame;
and the sending module is used for generating a monitoring analysis result based on the target media frame and sending the monitoring analysis result to the cloud end equipment.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one instruction; and
a processor executing instructions stored in the memory to implement any one of the above cloud monitoring methods based on edge computing.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, where at least one instruction is stored, and the at least one instruction is executed by a processor in an electronic device to implement any one of the above cloud monitoring methods based on edge computing.
In the embodiment of the invention, an edge device receives a media stream data packet sent by a video recording device, parses the media stream data packet according to the receiving order to obtain a media frame queue of a preset type, and adds the media frame queue to a preset cache space. The edge device then reads to-be-processed media frames from the media frame queue, analyzes each to-be-processed media frame with a pre-established analysis model to obtain corresponding characteristic information, searches the obtained characteristic information for entries that meet preset conditions, and takes the to-be-processed media frames corresponding to the found characteristic information as target media frames. Finally, the edge device generates a monitoring analysis result based on the target media frames and sends it to a cloud device. Compared with the prior art, this embodiment analyzes the media stream data packet on the edge device to obtain the monitoring analysis result, which improves monitoring analysis efficiency. In addition, because the edge device generates the monitoring analysis result only from the target media frames and sends only that result to the cloud device for storage, the data volume uploaded to the cloud device is small, which increases the upload speed of monitoring analysis results and reduces bandwidth and storage-space consumption.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a cloud monitoring method based on edge computing according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a cloud monitoring method based on edge computing according to another embodiment of the present invention;
fig. 3 is a schematic block diagram of a cloud monitoring apparatus based on edge computing according to an embodiment of the present invention;
fig. 4 is a schematic block diagram of a cloud monitoring apparatus based on edge computing according to another embodiment of the present invention;
fig. 5 is a schematic internal structural diagram of an electronic device implementing a cloud monitoring method based on edge computing according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
The invention provides a cloud monitoring method based on edge computing. Fig. 1 is a schematic flow chart of a cloud monitoring method based on edge computing according to an embodiment of the present invention. The method can be applied to an edge device that is communicatively connected to a video recording device and to a cloud device, respectively. It should be emphasized that the edge device may be any device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, for example an edge computing server, a desktop computer, a notebook computer, a palmtop computer, a mobile phone or a smart watch; the invention is not limited in this respect. Furthermore, the method may be performed by one device or by multiple devices, and the devices may be implemented by software and/or hardware.
In this embodiment, the cloud monitoring method based on edge computing includes:
and step S11, receiving the media stream data packet sent by the video recording device.
In detail, the video recording device performs media data acquisition in real time or at regular time to obtain a plurality of media frames, and the media frames are rendered, encoded and packaged to generate corresponding media stream data packets. The media stream, also called streaming media, refers to a media format played on the internet by streaming, and includes an audio stream, a video stream, a text stream, an image stream, a motion picture stream, and the like.
The video recording device sends the media stream data packet to the edge device, and the edge device receives the media stream data packet sent by the video recording device.
Step S12, parsing the media stream packets according to the receiving order to obtain a media frame queue of a preset type, and adding the media frame queue to a preset buffer space.
The preset type of media frame queue comprises a video frame queue and/or an audio frame queue, among others. For example, the media stream data packet is parsed according to its packaging format; when the media stream is a video stream that also carries audio, the packet can be parsed into an audio frame queue and a video frame queue, and if only the video frame queue needs to be processed, the video frame queue is added to the preset buffer space as the preset type of media frame queue.
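The parsing step above can be sketched as follows. This is a minimal Python illustration, not the patent's implementation: the packet layout (`type`/`frame` dicts), the queue names, and the buffer capacity are all assumptions made for the example.

```python
from collections import deque

BUFFER_CAPACITY = 1024  # illustrative size of the preset cache space

def demux_packets(packets, wanted_types=("video",)):
    """Parse packets in receiving order into per-type frame queues,
    buffering only the preset (wanted) frame types.

    Each packet is assumed to be a dict like {"type": "video", "frame": ...}.
    """
    queues = {t: deque(maxlen=BUFFER_CAPACITY) for t in wanted_types}
    for pkt in packets:  # receiving order is preserved
        if pkt["type"] in queues:
            queues[pkt["type"]].append(pkt["frame"])
    return queues

# A video stream carrying audio: only the video frame queue is kept.
packets = [
    {"type": "audio", "frame": "a0"},
    {"type": "video", "frame": "v0"},
    {"type": "video", "frame": "v1"},
]
queues = demux_packets(packets, wanted_types=("video",))
```

Here the audio frames are parsed but discarded because only the video queue was requested, mirroring the example in the text.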
Step S13, reading the media frame to be processed from the media frame queue.
In this embodiment, according to the specific application scenario requirement, the media frame may be read from the media frame queue as the media frame to be processed according to the preset sampling rule. For example, a media frame may be read as a pending media frame in the media frame queue every first predetermined number of media frames (e.g., twenty media frames) or every first predetermined duration (e.g., 10 seconds).
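The "every Nth frame" sampling rule described above can be sketched in a few lines. This is an illustrative fragment only; the function name and default stride are assumptions, not taken from the patent.

```python
def sample_frames(frame_queue, every_n=20):
    """Read one to-be-processed media frame every `every_n` frames,
    per the preset sampling rule (e.g. one frame per twenty frames)."""
    return frame_queue[::every_n]

frames = [f"frame{i}" for i in range(100)]
pending = sample_frames(frames, every_n=20)  # frames 0, 20, 40, 60, 80
```

A time-based rule (one frame per first preset duration, e.g. 10 seconds) would work the same way, with the stride computed from the video frame rate.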
Step S14, analyzing each to-be-processed media frame by using a pre-established analysis model to obtain feature information corresponding to each to-be-processed media frame, searching for feature information meeting a preset condition in the feature information, and taking the to-be-processed media frame corresponding to the found feature information as a target media frame.
In detail, the edge device performs analysis processing on each to-be-processed media frame by using the analysis model to obtain feature information corresponding to each to-be-processed media frame. The analysis model comprises one or more of a face recognition model, a voiceprint recognition model, a character recognition model and an object recognition model. The corresponding analysis model type may be selected according to the specific application scenario. For example, if the number of people in the monitored area is to be identified, the face recognition model may be selected, if the number of vehicles in the monitored area is to be identified, the object recognition model may be selected, and if both the number of people in the monitored area and the number of vehicles are to be identified, the face recognition model and the object recognition model may be selected.
In this embodiment, the analysis model may be sent to the edge device by the cloud device. For example, the cloud device receives a setting request sent by a user terminal, the setting request includes application service type information, the cloud device determines an analysis model corresponding to the application service type information according to a mapping relation between the application service type information and the analysis model which is established in advance, the analysis model is sent to the edge device, and the edge device receives and stores the analysis model. It is emphasized that in other embodiments, the analytical model may be imported from any other suitable device to the edge device, and the invention is not limited thereto.
The following takes the object recognition model as an example to describe how the analysis model analyzes a to-be-processed media frame to obtain characteristic information. If the number of a certain target object is to be identified, the type of the acquired to-be-processed media frame is a video frame (that is, a frame image). First, the to-be-processed media frame is segmented to obtain a plurality of characteristic images. Segmentation methods include edge-detection-based segmentation, region-growing-based segmentation, neural-network-based segmentation and threshold-based segmentation, among others; the segmentation method may be selected as needed, and the invention is not limited in this respect. Then, similarity analysis is performed between each characteristic image and a plurality of preset sample images, where a preset sample image is a sample image of the target object; the similarity between a characteristic image and a preset sample image can be computed from parameters such as contour, color and texture. Whenever the similarity between a characteristic image and any preset sample image is greater than or equal to a preset similarity threshold, the characteristic image is identified as the target object. Finally, the number of characteristic images identified as the target object is counted, and the count is output as the characteristic information.
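The counting logic just described (match each segmented characteristic image against sample images, threshold the similarity, count the matches) can be sketched as below. The similarity function here is a toy stand-in over (contour, colour, texture) descriptor tuples; a real model would compute similarity from image features. All names and the threshold value are assumptions for illustration.

```python
def count_target_objects(feature_images, sample_images, similarity,
                         threshold=0.8):
    """Count segmented feature images whose similarity to ANY preset
    sample image of the target object meets the preset threshold."""
    count = 0
    for img in feature_images:
        if any(similarity(img, s) >= threshold for s in sample_images):
            count += 1
    return count

# Toy similarity: fraction of matching (contour, colour, texture) entries.
def similarity(a, b):
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

segments = [("round", "red", "smooth"),
            ("square", "blue", "rough"),
            ("round", "red", "rough")]
samples = [("round", "red", "smooth")]
n = count_target_objects(segments, samples, similarity, threshold=0.6)
```

With the toy descriptors above, the first and third segments match the sample at or above the 0.6 threshold, so the output characteristic information is a count of 2.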
For the feature information, when the specific application scenarios are different, the feature information may also be different. For example, if the number of persons and the number of vehicles in the monitored area are to be identified, the output characteristic information includes the number of persons and the number of vehicles. If it is desired to identify whether or not there is a target person in the monitored area, the output characteristic information includes the identified person information (e.g., name).
After the edge device obtains the feature information, it searches the obtained feature information for entries that meet the preset condition. For example, if the number of people and the number of vehicles in the monitored area are to be identified and an early warning is required when the number of people is greater than 4 or the number of vehicles equals 5, the preset condition may be set as: the number of people is greater than 4, or the number of vehicles is equal to 5.
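The example preset condition in the text maps directly to a predicate over the feature information. This sketch hardcodes the thresholds from the example (people > 4 or vehicles == 5); the field names are assumptions.

```python
def meets_preset_condition(feature_info, max_people=4, alert_vehicles=5):
    """Preset condition from the text's example: trigger when the number
    of people exceeds 4 or the number of vehicles equals 5."""
    return (feature_info["people"] > max_people
            or feature_info["vehicles"] == alert_vehicles)

meets_preset_condition({"people": 5, "vehicles": 2})  # people > 4
meets_preset_condition({"people": 3, "vehicles": 5})  # vehicles == 5
meets_preset_condition({"people": 2, "vehicles": 1})  # neither
```

In practice such a predicate would be generated from the model inference parameters sent by the cloud device, as described later in the registration flow.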
When the edge device finds the characteristic information meeting the preset condition, the edge device takes the media frame to be processed corresponding to the found characteristic information as a target media frame.
Step S15, generating a monitoring analysis result based on the target media frame, and sending the monitoring analysis result to a cloud device.
In this embodiment, the above-mentioned manner of generating the monitoring analysis result based on the target media frame may be various, for example, when the target media frame is a video frame, the monitoring analysis result may be generated in the following manner.
In the first mode, the target media frame is saved as a screenshot, and the monitoring analysis result is generated based on the screenshot. As for the first mode, in some application scenarios, the target media frame also carries time information (e.g., a timestamp), and the monitoring analysis result may be generated based on the screenshot and the time information.
In a second mode, a media frame sub-queue containing the target media frame is intercepted from the media frame queue, and the monitoring analysis result is generated based on the media frame sub-queue. For example, a second preset number of video frames arranged before the target media frame and/or a second preset number of video frames arranged after it are obtained from the media frame queue, and the obtained video frames together with the target media frame form the media frame sub-queue. The monitoring analysis result is then generated based on the media frame sub-queue. In some application scenarios, a video file of a second preset duration (e.g., 20 seconds) may be generated from the media frame sub-queue and used as the monitoring analysis result.
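The sub-queue interception in the second mode amounts to a bounded slice around the target frame's position. A minimal sketch, with assumed names and with the edge cases at the start and end of the queue clamped:

```python
def clip_subqueue(frame_queue, target_index, n_before, n_after):
    """Intercept a media frame sub-queue: up to `n_before` frames before
    the target media frame, the target frame itself, and up to `n_after`
    frames after it (clamped at the queue boundaries)."""
    start = max(0, target_index - n_before)
    end = min(len(frame_queue), target_index + n_after + 1)
    return frame_queue[start:end]

frames = list(range(10))          # stand-ins for buffered video frames
clip = clip_subqueue(frames, target_index=5, n_before=2, n_after=2)
```

The resulting sub-queue could then be encoded into a short video file of the second preset duration and uploaded as the monitoring analysis result.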
In this embodiment, the edge device receives a media stream data packet sent by the video recording device, parses it according to the receiving order to obtain a media frame queue of a preset type, and adds the queue to a preset cache space. It then reads to-be-processed media frames from the queue, analyzes each one with a pre-established analysis model to obtain corresponding characteristic information, searches the obtained characteristic information for entries that meet preset conditions, and, when such information is found, takes the corresponding to-be-processed media frames as target media frames. Finally, it generates a monitoring analysis result based on the target media frames and sends the result to the cloud device. Compared with the prior art, this embodiment analyzes the media stream data packet on the edge device to obtain the monitoring analysis result, which improves monitoring analysis efficiency. In addition, because the edge device generates the monitoring analysis result only from the target media frames and uploads only that result to the cloud device for storage, the data volume uploaded is small, which increases the upload speed of monitoring analysis results and reduces bandwidth and storage-space consumption.
Further, before the step S11, the method further includes:
sending a registration request carrying equipment identification information to the cloud end equipment, so that the cloud end equipment executes registration operation based on the equipment identification information; receiving equipment operation parameters, model reasoning parameters and analysis models which are sent by the cloud equipment after the registration operation is finished; configuring operation parameters based on the equipment operation parameters; and saving the analysis model, and generating and saving the preset condition for searching the characteristic information based on the model reasoning parameter.
The device identification information includes a product model, a device name, and a device password.
The device operation parameters include a video streaming address, a video frame rate, a video scanning interval time (corresponding to the first preset time), a monitoring analysis result reporting type (e.g., screenshot, short video), a short video capture time (corresponding to the second preset time), and the like. According to a specific application scenario, corresponding device operation parameters may be set, which is not limited in the present invention.
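The device operation parameters listed above can be represented as a simple configuration mapping that the edge device merges with defaults on receipt. The parameter keys, default values, and URL below are all illustrative assumptions, not values from the patent.

```python
# Hypothetical operation parameters an edge device might receive after
# registration (names and values are illustrative only).
DEFAULT_OPERATION_PARAMS = {
    "stream_url": "rtsp://recorder.local/stream",  # video streaming address
    "frame_rate": 25,            # video frame rate (fps)
    "scan_interval_s": 10,       # video scanning interval (first preset duration)
    "report_type": "screenshot", # monitoring result type: screenshot / short video
    "clip_duration_s": 20,       # short-video capture time (second preset duration)
}

def configure(device, params):
    """Apply operation parameters from the cloud device over defaults."""
    merged = {**DEFAULT_OPERATION_PARAMS, **params}
    device.update(merged)
    return device

device = configure({}, {"report_type": "short_video"})
```

A user-supplied setting (here the report type) overrides the default, while unspecified parameters fall back to the defaults.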
In this embodiment, a user can send a setting request to the cloud device through the user terminal, so as to set the device operation parameters and the model inference parameters, thereby meeting diversified requirements of the user.
Fig. 2 is a schematic flow chart of a cloud monitoring method based on edge computing according to another embodiment of the present invention. The method can be applied to cloud equipment which can be in communication connection with the edge equipment and the user terminal respectively. It is emphasized that the method may be performed by one apparatus or by a plurality of apparatuses, and that the apparatuses may be implemented by software and/or hardware.
In this embodiment, the cloud monitoring method based on edge computing includes:
and step S21, receiving and storing the monitoring analysis result sent by the edge device.
And the monitoring analysis result is obtained by analyzing the media stream data packet sent by the video recording equipment by the edge equipment. For example, the edge device receives a media stream data packet sent by the video recording device, analyzes the media stream data packet according to a receiving sequence to obtain a media frame queue of a preset type, adds the media frame queue to a preset buffer space, reads a to-be-processed media frame from the media frame queue, analyzes each to-be-processed media frame by using a pre-established analysis model to obtain characteristic information corresponding to each to-be-processed media frame, searches for characteristic information meeting a preset condition in the obtained characteristic information, takes the to-be-processed media frame corresponding to the searched characteristic information as a target media frame when the characteristic information is found, and finally generates a monitoring analysis result based on the target media frame. For the specific description of the analysis method, reference may be made to the contents of the previous embodiment, which is not described herein again.
Step S22, responding to the query request carrying query conditions sent by the user terminal, and querying the monitoring analysis results meeting the query conditions from the stored monitoring analysis results to obtain query results.
In detail, the user may select a query condition as needed, for example, select to view the monitoring analysis result in a certain time interval, and set the query condition to a specific time interval. When the cloud device receives a query request carrying query conditions and sent by a user terminal, querying the monitoring analysis results meeting the query conditions from the stored monitoring analysis results to obtain query results.
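Querying by time interval, as in the example above, reduces to filtering the stored results on their timestamps. A minimal sketch with assumed record fields:

```python
def query_results(stored_results, start, end):
    """Return the stored monitoring analysis results whose timestamp
    falls within the requested time interval [start, end]."""
    return [r for r in stored_results if start <= r["timestamp"] <= end]

stored = [{"id": 1, "timestamp": 100},
          {"id": 2, "timestamp": 250},
          {"id": 3, "timestamp": 400}]
hits = query_results(stored, start=200, end=450)  # results 2 and 3
```

Other query conditions (e.g. by edge device or by result type) would filter on the corresponding fields in the same way.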
And step S23, sending the query result to the user terminal.
The embodiment receives and stores the monitoring analysis result sent by the edge device, responds to the query request carrying the query condition sent by the user terminal, queries the monitoring analysis result meeting the query condition from the stored monitoring analysis result to obtain the query result, and finally sends the query result to the user terminal. Compared with the prior art, the embodiment analyzes the media stream data packet through the edge device to obtain the monitoring analysis result, so that the monitoring analysis efficiency is improved.
Further, before step S21, the method further includes:
receiving a registration request sent by the edge device, wherein the registration request comprises device identification information; performing a registration operation based on the device identification information; and after the registration operation is finished, sending the predetermined equipment operation parameters, the model reasoning parameters and the analysis model to the edge equipment.
In detail, the cloud device receives and responds to a registration request sent by the edge device, wherein the registration request includes device identification information, and the device identification information is used to uniquely identify the edge device and includes a product model, a device name and a device password. The cloud device searches a pre-established device information base (which contains the device identification information of all legitimate devices) for the received device identification information. If the information is found, the edge device is considered a legitimate device, a registration-success message is returned to the edge device, and the state of the edge device is marked as activated; if it is not found, the edge device is considered an illegitimate device and a registration-failure message is returned to the edge device.
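The look-up of device identification information in the device information base may be sketched as follows; the identification tuple and the example entries are illustrative assumptions, not part of the invention:

```python
# Hypothetical device information base of all legitimate devices,
# keyed by (product model, device name, device password).
DEVICE_INFO_BASE = {
    ("EDGE-100", "gate-cam-01", "s3cret"),
    ("EDGE-100", "lobby-cam-02", "pa55"),
}

def register(product_model, device_name, device_password, states):
    """Search the device information base for the received identification
    information; on a hit, mark the edge device as activated and return a
    registration-success message, otherwise return a failure message."""
    ident = (product_model, device_name, device_password)
    if ident in DEVICE_INFO_BASE:
        states[device_name] = "activated"
        return {"registered": True, "message": "registration successful"}
    return {"registered": False, "message": "registration failed"}

states = {}
ok = register("EDGE-100", "gate-cam-01", "s3cret", states)
bad = register("EDGE-100", "unknown-cam", "wrong", states)
```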
After the registration operation is finished, the user can set equipment operation parameters and model reasoning parameters through the user terminal and select an analysis model. For example, the cloud device receives a setting request sent by a user terminal, the setting request includes application service type information, the cloud device determines an analysis model corresponding to the application service type information according to a mapping relation between the application service type information and the analysis model which is established in advance, the analysis model is sent to the edge device, and the edge device receives and stores the analysis model. For another example, the cloud device receives a setting request sent by the user terminal, where the setting request includes device operation parameters and/or model inference parameters, and stores the device operation parameters and/or the model inference parameters in a preset storage space. After the setting is completed, the cloud device sends the device operation parameters, the model reasoning parameters and the analysis model to the edge device. And the edge equipment configures the operation parameters based on the equipment operation parameters, stores the analysis model, and generates and stores the preset conditions for searching the characteristic information based on the model reasoning parameters.
The device operation parameters include a video stream pulling address, a video stream pushing address, a video frame rate, a video scanning interval time (corresponding to the first preset time), a monitoring analysis result reporting type (e.g., screenshot, short video), a short video capture time (corresponding to the second preset time), and the like. According to a specific application scenario, corresponding device operation parameters may be set, which is not limited in the present invention.
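For illustration, the device operation parameters listed above might be represented as a configuration mapping such as the following; the key names and example values are assumptions made for this sketch, not fixed by the embodiment:

```python
# Illustrative device operation parameters (keys/values are assumptions).
device_operation_params = {
    "pull_url": "rtsp://recorder.local/stream1",  # video stream pulling address
    "push_url": "rtmp://edge.local/live",         # video stream pushing address
    "frame_rate": 25,                             # video frame rate (fps)
    "scan_interval_s": 10,                        # first preset time
    "report_type": "short_video",                 # screenshot | short_video
    "capture_duration_s": 20,                     # second preset time
}

def configure(params):
    """Validate the reporting type before applying the configuration on
    the edge device; return the applied parameter set."""
    if params["report_type"] not in ("screenshot", "short_video"):
        raise ValueError("unknown monitoring analysis result reporting type")
    return dict(params)

cfg = configure(device_operation_params)
```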
The analysis model comprises one or more of a face recognition model, a voiceprint recognition model, a character recognition model and an object recognition model. The corresponding analysis model type may be selected according to the specific application scenario. For example, if the number of people in the monitored area is to be identified, the face recognition model may be selected, if the number of vehicles in the monitored area is to be identified, the object recognition model may be selected, and if both the number of people in the monitored area and the number of vehicles are to be identified, the face recognition model and the object recognition model may be selected.
In this embodiment, a user can send a setting request to the cloud device through the user terminal, so as to set the device operation parameters and the model inference parameters and select the analysis model, thereby meeting diversified requirements of the user.
Further, in this embodiment, the method further includes:
receiving a model training request sent by a user terminal, wherein the model training request comprises a first preset training data set;
and obtaining an analysis model obtained by training based on a second preset training data set in advance, and training the analysis model by using the first preset training set to obtain a new analysis model.
For example, if a user wants to train an object recognition model, a sample set can be prepared through the user terminal, the sample set comprising a plurality of sample pictures; each sample picture is labeled to obtain labeled data, and the sample set and the labeled data are then sent to the cloud device as the first preset training data set together with a request to train the model. The cloud device obtains, from a preset storage space, an analysis model trained in advance based on a second preset training data set (for example, a basic object recognition model), and performs secondary training on the analysis model by using the first preset training data set to generate a new analysis model (namely, a customized analysis model).
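The secondary-training idea (a base model trained on the second preset training data set, refined with the user's first preset training data set) can be illustrated with a deliberately tiny stand-in model. A real implementation would fine-tune a neural network; this sketch instead blends two 1-D threshold classifiers, purely to show the two-stage flow:

```python
def train_threshold(samples):
    """Fit a toy 1-D threshold classifier (predict 1 when x >= threshold):
    the threshold is the midpoint between the two class means."""
    pos = [x for x, y in samples if y == 1]
    neg = [x for x, y in samples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def fine_tune(base_threshold, samples, weight=0.5):
    """Secondary training: blend the base model with a threshold fitted
    on the user's labelled sample set."""
    return (1 - weight) * base_threshold + weight * train_threshold(samples)

# Second preset training data set -> base analysis model.
base = train_threshold([(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)])
# First preset training data set (user-labelled) -> customized model.
custom = fine_tune(base, [(2.0, 0), (3.0, 0), (9.0, 1), (10.0, 1)])
```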
According to the embodiment, the user-defined analysis model training can be realized according to the user needs, the diversified requirements of the user can be met, the model development cost is saved for the user, and the model training efficiency is improved.
Further, in this embodiment, after obtaining the new analysis model, the method further includes:
and responding to a model updating request sent by a user terminal, generating model updating data based on the new analysis model, and sending the model updating data to the edge equipment so that the edge equipment can execute analysis model updating operation according to the model updating data.
In this embodiment, batch analysis model updating may be performed on the plurality of edge devices, for example, the model updating request includes device identification information of the plurality of edge devices, and the cloud device generates model updating data based on the new analysis model and sends the model updating data to the edge devices corresponding to the device identification information.
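The batch update fan-out described here can be sketched as follows; the transport callback is a stand-in for the actual cloud-to-edge communication:

```python
def batch_update(model_update_data, device_ids, send):
    """Send the same model update data to every edge device named in the
    model update request; collect the per-device delivery status."""
    return {dev: send(dev, model_update_data) for dev in device_ids}

sent = []
status = batch_update(
    {"model": "object-recognition", "version": 2},
    ["edge-01", "edge-02", "edge-03"],
    # Stand-in transport: record the delivery and report success.
    lambda dev, data: sent.append(dev) or True,
)
```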
In this embodiment, remote updating of the analytical model of the edge device can be achieved.
Fig. 3 is a schematic block diagram of a cloud monitoring apparatus based on edge computing according to an embodiment of the present invention.
The cloud monitoring apparatus 100 based on edge computing according to the present invention may be installed in an electronic device. According to the implemented functions, the cloud monitoring apparatus based on edge computing may include a receiving module 101, a caching module 102, an extracting module 103, an analyzing module 104, and a sending module 105. A module according to the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the receiving module 101 is configured to receive a media stream data packet sent by a video recording device.
In detail, the video recording device performs media data acquisition in real time or at regular time to obtain a plurality of media frames, and the media frames are rendered, encoded and packaged to generate corresponding media stream data packets. The media stream, also called streaming media, refers to a media format played on the internet by streaming, and includes an audio stream, a video stream, a text stream, an image stream, a motion picture stream, and the like.
The video recording device sends the media stream data packet to the receiving module 101, and the receiving module 101 receives the media stream data packet sent by the video recording device.
The buffer module 102 is configured to analyze the media stream data packet according to a receiving sequence to obtain a media frame queue of a preset type, and add the media frame queue to a preset buffer space.
The preset type of media frame queue includes a video frame queue and/or an audio frame queue. For example, the media stream data packet is parsed according to its packaging format; when the media stream is a video stream that contains audio, parsing yields both an audio frame queue and a video frame queue, and if only the video frame queue needs to be processed, the video frame queue alone is added to the preset buffer space as the preset type of media frame queue.
An extracting module 103, configured to read a to-be-processed media frame from the media frame queue.
In this embodiment, according to the specific application scenario requirement, the media frame may be read from the media frame queue as the media frame to be processed according to the preset sampling rule. For example, a media frame may be read as a pending media frame in the media frame queue every first predetermined number of media frames (e.g., twenty media frames) or every first predetermined duration (e.g., 10 seconds).
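The two sampling rules mentioned above (one frame every first preset number of frames, or one frame every first preset duration) can be sketched as:

```python
def sample_frames(queue, every_n):
    """Read one to-be-processed media frame for every `every_n` frames
    in the media frame queue (the first preset number)."""
    return queue[::every_n]

def sample_by_interval(timestamps, interval_s):
    """Alternative rule: pick the index of the first frame at or after
    each elapse of the first preset duration."""
    picked, next_t = [], 0.0
    for i, t in enumerate(timestamps):
        if t >= next_t:
            picked.append(i)
            next_t = t + interval_s
    return picked

frames = list(range(100))            # stand-in media frame queue
pending = sample_frames(frames, 20)  # one frame in every twenty
picks = sample_by_interval([0.0, 4.0, 8.0, 12.0, 16.0], 10.0)
```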
The analysis module 104 is configured to perform analysis processing on each to-be-processed media frame by using a pre-established analysis model to obtain feature information corresponding to each to-be-processed media frame, search feature information meeting a preset condition in the feature information, and use the to-be-processed media frame corresponding to the searched feature information as a target media frame.
In detail, the analysis module 104 performs analysis processing on each to-be-processed media frame by using the analysis model to obtain feature information corresponding to each to-be-processed media frame. The analysis model comprises one or more of a face recognition model, a voiceprint recognition model, a character recognition model and an object recognition model. The corresponding analysis model type may be selected according to the specific application scenario. For example, if the number of people in the monitored area is to be identified, the face recognition model may be selected, if the number of vehicles in the monitored area is to be identified, the object recognition model may be selected, and if both the number of people in the monitored area and the number of vehicles are to be identified, the face recognition model and the object recognition model may be selected. In this embodiment, the analysis model may be sent to the analysis module 104 by the cloud device. For example, the cloud device receives a setting request sent by a user terminal, where the setting request includes application service type information, determines an analysis model corresponding to the application service type information according to a mapping relationship between the application service type information and the analysis model, which is established in advance, and sends the analysis model to the analysis module 104, and the analysis module 104 receives and stores the analysis model. It is emphasized that in other application examples, the analysis model may be imported to the analysis module 104 by any other suitable device, and the invention is not limited thereto.
The following takes the object recognition model as an example to describe how the analysis model analyzes a to-be-processed media frame to obtain the feature information. If the number of a certain target object is to be identified, the type of the acquired to-be-processed media frame is a video frame (i.e., a frame image). First, the to-be-processed media frame is segmented to obtain a plurality of feature images. Segmentation methods include edge-detection-based segmentation, region-growing-based segmentation, neural-network-based segmentation, threshold-based segmentation, and the like; the segmentation method may be selected as needed, which is not limited in the present invention. Then, each feature image is compared for similarity against a plurality of preset sample images, where the similarity between a feature image and a preset sample image can be calculated from parameters such as contour, color and texture, and a preset sample image is a sample image of the target object. Whenever the similarity between a feature image and any preset sample image is greater than or equal to a preset similarity threshold, that feature image is identified as the target object. Finally, the number of feature images identified as the target object is counted, and the count is output as the feature information.
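A toy version of the segmentation-and-similarity counting described above, using small colour histograms as a stand-in for the contour, colour and texture cues the patent mentions (the histograms and the threshold value are illustrative assumptions):

```python
def similarity(hist_a, hist_b):
    """Toy similarity in [0, 1] between two colour histograms:
    1 minus the normalized L1 distance."""
    diff = sum(abs(a - b) for a, b in zip(hist_a, hist_b))
    return 1.0 - diff / (sum(hist_a) + sum(hist_b))

def count_targets(feature_images, sample_images, threshold=0.8):
    """Count the segmented feature images whose similarity to ANY preset
    sample image reaches the preset similarity threshold; the count is
    output as the feature information."""
    return sum(
        1 for feat in feature_images
        if any(similarity(feat, s) >= threshold for s in sample_images)
    )

samples = [(10, 0, 0), (9, 1, 0)]               # sample images of the target
features = [(10, 0, 0), (0, 0, 10), (9, 1, 0)]  # segmented regions of a frame
count = count_targets(features, samples)
```

Here two of the three segmented regions match a sample image, so the feature information output is 2.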
For the feature information, when the specific application scenarios are different, the feature information may also be different. For example, if the number of persons and the number of vehicles in the monitored area are to be identified, the output characteristic information includes the number of persons and the number of vehicles. If it is desired to identify whether or not there is a target person in the monitored area, the output characteristic information includes the identified person information (e.g., name).
After the feature information is obtained, the analysis module 104 searches the obtained feature information for feature information meeting the preset condition. For example, if the number of people and the number of vehicles in the monitored area are to be identified, and an early warning is required when the number of people is greater than 4 or the number of vehicles is equal to 5, the preset condition may be set to: the number of people is greater than 4, or the number of vehicles is equal to 5.
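The preset condition from this example (more than 4 people, or exactly 5 vehicles) can be expressed as a simple predicate over the feature information:

```python
def meets_preset_condition(feature_info):
    """Preset early-warning condition from the example: more than 4
    people, or exactly 5 vehicles, in the monitored area."""
    return feature_info["people"] > 4 or feature_info["vehicles"] == 5

# Feature information for three to-be-processed media frames.
alerts = [f for f in (
    {"people": 3, "vehicles": 2},
    {"people": 6, "vehicles": 0},
    {"people": 2, "vehicles": 5},
) if meets_preset_condition(f)]
```

The frames whose feature information appears in `alerts` become target media frames.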
When the feature information meeting the preset condition is found, the analysis module 104 takes the to-be-processed media frame corresponding to the found feature information as a target media frame.
A sending module 105, configured to generate a monitoring analysis result based on the target media frame, and send the monitoring analysis result to a cloud device.
In this embodiment, the above-mentioned manner of generating the monitoring analysis result based on the target media frame may be various, for example, when the target media frame is a video frame, the monitoring analysis result may be generated in the following manner.
In the first mode, the target media frame is saved as a screenshot, and the monitoring analysis result is generated based on the screenshot. As for the first mode, in some application scenarios, the target media frame also carries time information (e.g., a timestamp), and the monitoring analysis result may be generated based on the screenshot and the time information.
In a second mode, a media frame sub-queue containing the target media frame is intercepted from the media frame queue, and the monitoring analysis result is generated based on the media frame sub-queue. For example, a second preset number of video frames preceding the target media frame and/or a second preset number of video frames following it are obtained from the media frame queue, and the obtained video frames together with the target media frame form the media frame sub-queue. The monitoring analysis result is then generated based on the media frame sub-queue. In some application scenarios, a video file of a second preset duration (e.g., 20 seconds) may be generated based on the media frame sub-queue and used as the monitoring analysis result.
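The interception of a media frame sub-queue around the target media frame can be sketched as:

```python
def clip_subqueue(queue, target_index, n_before, n_after):
    """Intercept a media frame sub-queue containing the target frame:
    up to `n_before` frames before it and `n_after` frames after it
    (the second preset number), clamped at the queue boundaries."""
    start = max(0, target_index - n_before)
    return queue[start:target_index + n_after + 1]

frames = list(range(100))   # stand-in media frame queue
clip = clip_subqueue(frames, target_index=50, n_before=5, n_after=5)
```

A short-video monitoring analysis result would then be encoded from `clip`.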
This embodiment receives a media stream data packet sent by a video recording device, parses the media stream data packet according to the receiving sequence to obtain a media frame queue of a preset type, and adds the media frame queue to a preset buffer space. It then reads to-be-processed media frames from the media frame queue, analyzes each to-be-processed media frame by using a pre-established analysis model to obtain the feature information corresponding to each to-be-processed media frame, searches the obtained feature information for feature information meeting a preset condition, takes the to-be-processed media frame corresponding to the found feature information as a target media frame, and finally generates a monitoring analysis result based on the target media frame and sends it to a cloud device. Compared with the prior art, this embodiment analyzes the media stream data packet to obtain the monitoring analysis result, which improves monitoring analysis efficiency; in addition, since the monitoring analysis result is generated only from the target media frame before being sent to the cloud device for storage, the amount of data uploaded to the cloud device is small, which increases the upload speed of the monitoring analysis result and reduces bandwidth and storage-space consumption.
Further, the apparatus further comprises: a registration module, a setting module (not shown in the figure), wherein:
the registration module is used for sending a registration request carrying equipment identification information to the cloud end equipment so that the cloud end equipment can execute registration operation based on the equipment identification information;
the setting module is used for receiving the equipment operation parameters, the model reasoning parameters and the analysis model which are sent by the cloud equipment after the registration operation is finished; configuring operation parameters based on the equipment operation parameters; and saving the analysis model, and generating and saving the preset condition for searching the characteristic information based on the model reasoning parameter.
The device identification information includes a product model, a device name, and a device password.
The device operation parameters include a video streaming address, a video frame rate, a video scanning interval time (corresponding to the first preset time), a monitoring analysis result reporting type (e.g., screenshot, short video), a short video capture time (corresponding to the second preset time), and the like. According to a specific application scenario, corresponding device operation parameters may be set, which is not limited in the present invention.
In this embodiment, a user can send a setting request to the cloud device through the user terminal, so as to set the device operation parameters and the model inference parameters, thereby meeting diversified requirements of the user.
Fig. 4 is a schematic block diagram of a cloud monitoring apparatus based on edge computing according to another embodiment of the present invention.
The cloud monitoring apparatus 200 based on edge computing according to the present invention may be installed in an electronic device. According to the implemented functions, the cloud monitoring device based on edge computing may include a storage module 201, a query module 202, and a feedback module 203. A module according to the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the storage module 201 is configured to receive and store a monitoring analysis result sent by the edge device.
The monitoring analysis result is obtained by the edge device by analyzing the media stream data packet sent by the video recording device. For example, the edge device receives a media stream data packet sent by the video recording device, parses the media stream data packet according to the receiving sequence to obtain a media frame queue of a preset type, adds the media frame queue to a preset buffer space, reads to-be-processed media frames from the media frame queue, analyzes each to-be-processed media frame by using a pre-established analysis model to obtain the feature information corresponding to each to-be-processed media frame, searches the obtained feature information for feature information meeting a preset condition, takes the to-be-processed media frame corresponding to the found feature information as a target media frame, and finally generates the monitoring analysis result based on the target media frame. For a specific description of the analysis method, reference may be made to the above embodiments, which is not repeated here. In addition, it should be emphasized that, to further ensure the privacy and security of the monitoring analysis result, the monitoring analysis result may also be stored in a node of a blockchain.
The query module 202 is configured to respond to a query request carrying a query condition sent by a user terminal, and query a monitoring analysis result meeting the query condition from the stored monitoring analysis results to obtain a query result.
In detail, the user may select a query condition as needed, for example, choosing to view the monitoring analysis results within a certain time interval and setting the query condition to that time interval. When the query module 202 receives a query request carrying the query condition from a user terminal, it queries the stored monitoring analysis results for those meeting the query condition to obtain the query result.
And the feedback module 203 is configured to send the query result to the user terminal.
This embodiment receives and stores the monitoring analysis results sent by the edge device, responds to a query request carrying a query condition sent by the user terminal, queries the stored monitoring analysis results for those meeting the query condition to obtain the query result, and finally sends the query result to the user terminal. Compared with the prior art, this embodiment analyzes the media stream data packet on the edge device to obtain the monitoring analysis result, which improves monitoring analysis efficiency.
Further, the apparatus comprises an activation module (not shown in the figures) for:
receiving a registration request sent by the edge device, wherein the registration request comprises device identification information; performing a registration operation based on the device identification information; and after the registration operation is finished, sending the predetermined equipment operation parameters, the model reasoning parameters and the analysis model to the edge equipment.
In detail, the activation module receives and responds to a registration request sent by the edge device, where the registration request includes device identification information, and the device identification information is used to uniquely identify the edge device and includes a product model, a device name and a device password. The activation module searches a pre-established device information base (which contains the device identification information of all legitimate devices) for the received device identification information. If the information is found, the edge device is considered a legitimate device, a registration-success message is returned to the edge device, and the state of the edge device is marked as activated; if it is not found, the edge device is considered an illegitimate device and a registration-failure message is returned to the edge device.
After the registration operation is finished, the user can set equipment operation parameters and model reasoning parameters through the user terminal and select an analysis model. For example, the activation module receives a setting request sent by a user terminal, the setting request includes application service type information, the cloud device determines an analysis model corresponding to the application service type information according to a mapping relation between the application service type information and the analysis model which is established in advance, the analysis model is sent to the edge device, and the edge device receives and stores the analysis model. For another example, the activation module receives a setting request sent by the user terminal, where the setting request includes the device operation parameters and/or the model inference parameters, and stores the device operation parameters and/or the model inference parameters in a preset storage space. And after the setting is finished, the activation module sends the equipment operation parameters, the model reasoning parameters and the analysis model to the edge equipment. And the edge equipment configures the operation parameters based on the equipment operation parameters, stores the analysis model, and generates and stores the preset conditions for searching the characteristic information based on the model reasoning parameters.
The device operation parameters include a video stream pulling address, a video stream pushing address, a video frame rate, a video scanning interval time (corresponding to the first preset time), a monitoring analysis result reporting type (e.g., screenshot, short video), a short video capture time (corresponding to the second preset time), and the like. According to a specific application scenario, corresponding device operation parameters may be set, which is not limited in the present invention.
The analysis model comprises one or more of a face recognition model, a voiceprint recognition model, a character recognition model and an object recognition model. The corresponding analysis model type may be selected according to the specific application scenario. For example, if the number of people in the monitored area is to be identified, the face recognition model may be selected, if the number of vehicles in the monitored area is to be identified, the object recognition model may be selected, and if both the number of people in the monitored area and the number of vehicles are to be identified, the face recognition model and the object recognition model may be selected.
In this embodiment, a user can send a setting request to the cloud device through the user terminal, so as to set the device operation parameters and the model inference parameters and select the analysis model, thereby meeting diversified requirements of the user.
Further, in this embodiment, the apparatus further includes a model training module (not shown in the figure) for:
receiving a model training request sent by a user terminal, wherein the model training request comprises a first preset training data set;
and obtaining an analysis model obtained by training based on a second preset training data set in advance, and training the analysis model by using the first preset training set to obtain a new analysis model.
For example, if a user wants to train an object recognition model, a sample set can be prepared through the user terminal, the sample set comprising a plurality of sample pictures; each sample picture is labeled to obtain labeled data, and the sample set and the labeled data are then sent to the model training module as the first preset training data set together with a request to train the model. The model training module obtains, from a preset storage space, an analysis model trained in advance based on a second preset training data set (for example, a basic object recognition model), and performs secondary training on the analysis model by using the first preset training data set to generate a new analysis model (namely, a customized analysis model).
According to the embodiment, the user-defined analysis model training can be realized according to the user needs, the diversified requirements of the user can be met, the model development cost is saved for the user, and the model training efficiency is improved.
Further, in this embodiment, the apparatus further includes an updating module (not shown in the figure) configured to:
and responding to a model updating request sent by a user terminal, generating model updating data based on the new analysis model, and sending the model updating data to the edge equipment so that the edge equipment can execute analysis model updating operation according to the model updating data.
In this embodiment, a batch analysis model update may be performed on the plurality of edge devices, for example, the model update request includes device identification information of the plurality of edge devices, and the update module generates model update data based on the new analysis model and sends the model update data to the edge devices corresponding to the respective device identification information.
In this embodiment, remote updating of the analytical model of the edge device can be achieved.
Fig. 4 is a schematic diagram of an internal structure of an electronic device implementing a cloud monitoring method based on edge computing according to an embodiment of the present invention.
The electronic device 1 may include a processor 10, a memory 11 and a bus, and may further include a computer program, such as a cloud monitoring program 12 based on edge computing, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc.
The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1. The memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, or a Flash memory Card (Flash Card) provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as the code of the cloud monitoring program based on edge computing, but also to temporarily store data that has been output or is to be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, and combinations of various control chips. The processor 10 is the control unit of the electronic device; it connects the various components of the electronic device by using various interfaces and lines, and executes the various functions of the electronic device 1 and processes its data by running or executing the programs or modules stored in the memory 11 (e.g., the cloud monitoring program based on edge computing) and calling the data stored in the memory 11.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
Fig. 4 only shows an electronic device with certain components; those skilled in the art will understand that the structure shown in Fig. 4 does not constitute a limitation of the electronic device 1, which may comprise fewer or more components than those shown, combine some components, or arrange the components differently.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The cloud monitoring program 12 based on edge computing stored in the memory 11 of the electronic device 1 is a combination of a plurality of instructions which, when executed by the processor 10, can implement:
receiving a media stream data packet sent by a video recording device;
analyzing the media stream data packets according to the receiving sequence to obtain a media frame queue of a preset type, and adding the media frame queue to a preset buffer space;
reading a media frame to be processed from the media frame queue;
analyzing each to-be-processed media frame by utilizing a pre-established analysis model to obtain characteristic information corresponding to each to-be-processed media frame, searching the characteristic information meeting preset conditions in the characteristic information, and taking the to-be-processed media frame corresponding to the searched characteristic information as a target media frame;
and generating a monitoring analysis result based on the target media frame, and sending the monitoring analysis result to the cloud end equipment.
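The edge-side steps above (parse packets in receiving order into a media frame queue, read each to-be-processed frame, analyze it, and keep frames whose feature information meets a preset condition as target frames) can be sketched as follows. This is a minimal illustration, not the patented implementation: the frame format, the `analyze_frame` stub, and the score threshold are all illustrative assumptions.

```python
import queue

def analyze_frame(frame):
    """Hypothetical stand-in for the pre-established analysis model:
    returns feature information (here, a confidence score) for one frame."""
    return {"score": frame["score"]}

def edge_pipeline(packets, threshold=0.8):
    # Parse packets in receiving order into a media frame queue (buffer space).
    frame_queue = queue.Queue()
    for pkt in packets:
        frame_queue.put(pkt)  # each packet would be parsed into a media frame here

    results = []
    while not frame_queue.empty():
        frame = frame_queue.get()          # read a to-be-processed media frame
        features = analyze_frame(frame)    # feature information per frame
        if features["score"] >= threshold:  # preset condition on feature info
            # This frame is a target media frame; build a monitoring analysis result.
            results.append({"frame_id": frame["id"], "features": features})
    return results  # these results would then be sent to the cloud device

frames = [{"id": 1, "score": 0.5}, {"id": 2, "score": 0.9}]
print(edge_pipeline(frames))  # only frame 2 meets the preset condition
```

In practice the preset condition is generated from the model inference parameters delivered by the cloud device (see claim 2), rather than hard-coded as here.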
Alternatively, the cloud monitoring program 12 based on edge computing stored in the memory 11 of the electronic device 1 is a combination of a plurality of instructions which, when executed by the processor 10, can implement:
receiving and storing a monitoring analysis result sent by the edge device, wherein the monitoring analysis result is obtained by analyzing a media stream data packet sent by the video recording device by the edge device;
responding to a query request carrying query conditions sent by a user terminal, and querying monitoring analysis results meeting the query conditions from the stored monitoring analysis results to obtain query results;
and sending the query result to the user terminal.
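The cloud-side query step can be illustrated with a minimal filter over stored monitoring analysis results. The key/value equality condition and the record fields below are hypothetical stand-ins for the query conditions described above.

```python
def query_results(stored_results, condition):
    """Return the stored monitoring analysis results that satisfy the query
    conditions (modeled here as simple key/value equality)."""
    return [r for r in stored_results
            if all(r.get(k) == v for k, v in condition.items())]

stored = [
    {"device": "edge-1", "event": "face_match", "time": "2020-06-17T10:00"},
    {"device": "edge-2", "event": "no_match",   "time": "2020-06-17T10:05"},
]
# The query result would be sent back to the requesting user terminal.
print(query_results(stored, {"event": "face_match"}))
```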
Specifically, the specific implementation method of the processor 10 for the instruction may refer to the description of the relevant steps in the embodiment corresponding to fig. 1, which is not described herein again. It is emphasized that, in order to further ensure the privacy and security of the monitoring analysis results, the monitoring analysis results may also be stored in a node of a block chain.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. The computer-readable medium may include any entity or device capable of carrying said computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
A blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks linked by cryptographic methods, where each data block contains information about a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. Terms such as first and second are used to denote names and do not indicate any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.
Claims (10)
1. A cloud monitoring method based on edge computing is applied to edge equipment and is characterized by comprising the following steps:
receiving a media stream data packet sent by a video recording device;
analyzing the media stream data packets according to the receiving sequence to obtain a media frame queue of a preset type, and adding the media frame queue to a preset buffer space;
reading a media frame to be processed from the media frame queue;
analyzing each to-be-processed media frame by utilizing a pre-established analysis model to obtain characteristic information corresponding to each to-be-processed media frame, searching the characteristic information meeting preset conditions in the characteristic information, and taking the to-be-processed media frame corresponding to the searched characteristic information as a target media frame;
and generating a monitoring analysis result based on the target media frame, and sending the monitoring analysis result to the cloud end equipment.
2. The edge-computing-based cloud monitoring method of claim 1, wherein before said receiving a media stream data packet sent by a video recording device, the method further comprises:
sending a registration request carrying equipment identification information to the cloud end equipment, so that the cloud end equipment executes registration operation based on the equipment identification information;
receiving equipment operation parameters, model reasoning parameters and analysis models which are sent by the cloud equipment after the registration operation is finished;
configuring operation parameters based on the equipment operation parameters;
and saving the analysis model, and generating and saving the preset condition for searching the characteristic information based on the model reasoning parameter.
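As a rough sketch of the registration handshake described in claim 2 (edge side) and mirrored in claim 6 (cloud side): the edge device registers with its identification information, and the cloud returns device operating parameters, model inference parameters, and the analysis model. All names and parameter values below are illustrative assumptions.

```python
def register_edge(cloud_registry, device_id):
    """Cloud side: perform the registration operation for the given device
    identification, then return the predetermined configuration."""
    cloud_registry[device_id] = "registered"
    return {
        "run_params": {"fps": 25},                # device operating parameters
        "inference_params": {"threshold": 0.8},   # used to build preset conditions
        "model": "face_recognition_v1",           # reference to the analysis model
    }

# Edge side: send the registration request, then configure itself from the reply.
registry = {}
config = register_edge(registry, "edge-001")
preset_condition = config["inference_params"]["threshold"]  # saved for later lookups
```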
3. The edge-computing-based cloud monitoring method according to claim 1, wherein when the target media frame is a video frame, the generating a monitoring analysis result based on the target media frame includes:
storing the target media frame as a screenshot, and generating the monitoring analysis result based on the screenshot;
or intercepting a media frame sub-queue containing the target media frame from the media frame queue, and generating the monitoring analysis result based on the media frame sub-queue.
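The two alternatives in claim 3, a single screenshot of the target frame or a sub-queue of frames intercepted around it, can be sketched as follows; the window size and the string frame representation are assumptions for illustration only.

```python
def build_result(frame_queue, target_index, mode="screenshot", window=1):
    """mode='screenshot': monitoring analysis result from the single target frame;
    mode='clip': result from a media frame sub-queue containing the target frame."""
    if mode == "screenshot":
        return {"type": "screenshot", "frames": [frame_queue[target_index]]}
    lo = max(0, target_index - window)
    hi = min(len(frame_queue), target_index + window + 1)
    return {"type": "clip", "frames": frame_queue[lo:hi]}

q = ["f0", "f1", "f2", "f3"]
print(build_result(q, 2))                         # screenshot of the target frame
print(build_result(q, 2, mode="clip", window=1))  # sub-queue around the target frame
```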
4. The edge-computing-based cloud monitoring method according to any one of claims 1 to 3, wherein the pre-established analysis model comprises one or more of a face recognition model, a voiceprint recognition model, a text recognition model, and an object recognition model.
5. A cloud monitoring method based on edge computing is applied to cloud equipment and is characterized by comprising the following steps:
receiving and storing a monitoring analysis result sent by the edge device, wherein the monitoring analysis result is obtained by analyzing a media stream data packet sent by the video recording device by the edge device;
responding to a query request carrying query conditions sent by a user terminal, and querying monitoring analysis results meeting the query conditions from the stored monitoring analysis results to obtain query results;
and sending the query result to the user terminal.
6. The edge-computing-based cloud monitoring method according to claim 5, wherein before the receiving and storing the monitoring analysis results sent by the edge device, the method comprises:
receiving a registration request sent by the edge device, wherein the registration request comprises device identification information;
performing a registration operation based on the device identification information;
and after the registration operation is finished, sending the predetermined equipment operation parameters, the model reasoning parameters and the analysis model to the edge equipment.
7. The edge-computing-based cloud monitoring method of claim 5, wherein said method further comprises:
receiving a model training request sent by a user terminal, wherein the model training request comprises a first preset training data set;
and obtaining an analysis model trained in advance on a second preset training data set, and training the analysis model with the first preset training data set to obtain a new analysis model.
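Claim 7's retraining step, starting from a model pre-trained on a second data set and continuing training on the user-supplied first data set, can be illustrated with a deliberately tiny gradient-descent example. The scalar model, squared-error loss, and learning rate are purely illustrative assumptions, not the patented training procedure.

```python
def fine_tune(model_weight, new_dataset, lr=0.1):
    """Hypothetical fine-tuning: nudge a scalar weight toward the targets
    in the user-supplied first preset training data set."""
    w = model_weight
    for x, y in new_dataset:
        pred = w * x
        w -= lr * (pred - y) * x  # one gradient step of squared error
    return w

base_model = 1.0  # stands in for the model trained on the second data set
new_model = fine_tune(base_model, [(1.0, 2.0), (2.0, 4.0)])  # first data set
```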
8. An edge computing-based cloud monitoring apparatus, the apparatus comprising:
the receiving module is used for receiving the media stream data packet sent by the video recording device;
the buffer module is used for analyzing the media stream data packets according to the receiving sequence to obtain a media frame queue of a preset type and adding the media frame queue to a preset buffer space;
the extraction module is used for reading the media frame to be processed from the media frame queue;
the analysis module is used for analyzing and processing each to-be-processed media frame by utilizing a pre-established analysis model to obtain characteristic information corresponding to each to-be-processed media frame, searching the characteristic information meeting preset conditions in the characteristic information, and taking the to-be-processed media frame corresponding to the searched characteristic information as a target media frame;
and the sending module is used for generating a monitoring analysis result based on the target media frame and sending the monitoring analysis result to the cloud end equipment.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the edge computing-based cloud monitoring method of any of claims 1 to 8.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the edge-computing-based cloud monitoring method according to any one of claims 1 to 8.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010557709.7A CN111723727B (en) | 2020-06-17 | 2020-06-17 | Cloud monitoring method and device based on edge computing, electronic equipment and storage medium |
PCT/CN2020/099092 WO2021151279A1 (en) | 2020-06-17 | 2020-06-30 | Method and apparatus for cloud monitoring based on edge computing, electronic device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010557709.7A CN111723727B (en) | 2020-06-17 | 2020-06-17 | Cloud monitoring method and device based on edge computing, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111723727A true CN111723727A (en) | 2020-09-29 |
CN111723727B CN111723727B (en) | 2024-07-16 |
Family
ID=72567434
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010557709.7A Active CN111723727B (en) | 2020-06-17 | 2020-06-17 | Cloud monitoring method and device based on edge computing, electronic equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111723727B (en) |
WO (1) | WO2021151279A1 (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112307120A (en) * | 2020-10-29 | 2021-02-02 | 腾讯科技(深圳)有限公司 | Information management server, information management method, and information management system |
CN112601106A (en) * | 2020-11-16 | 2021-04-02 | 北京都是科技有限公司 | Video image processing method and device and storage medium |
CN113435368A (en) * | 2021-06-30 | 2021-09-24 | 青岛海尔科技有限公司 | Monitoring data identification method and device, storage medium and electronic device |
CN113572997A (en) * | 2021-07-22 | 2021-10-29 | 中科曙光国际信息产业有限公司 | Video stream data analysis method, device, equipment and storage medium |
CN113938747A (en) * | 2021-10-15 | 2022-01-14 | 深圳市智此一游科技服务有限公司 | Video generation method, device and server |
CN113949925A (en) * | 2021-10-15 | 2022-01-18 | 深圳市智此一游科技服务有限公司 | Video generation method and device and server |
CN114143445A (en) * | 2021-10-15 | 2022-03-04 | 深圳市智此一游科技服务有限公司 | A video generation method |
CN114155570A (en) * | 2021-10-15 | 2022-03-08 | 深圳市智此一游科技服务有限公司 | Video generation method |
CN114173087A (en) * | 2021-11-02 | 2022-03-11 | 上海三旺奇通信息科技有限公司 | Video data acquisition and processing method, edge gateway and storage medium |
CN114245077A (en) * | 2021-12-17 | 2022-03-25 | 广州西麦科技股份有限公司 | Intelligent alarm method based on opencv |
CN114245078A (en) * | 2021-12-17 | 2022-03-25 | 广州西麦科技股份有限公司 | Method and device for controlling field operation safety by applying various Al identification algorithms |
WO2022096946A1 (en) * | 2021-06-16 | 2022-05-12 | Sensetime International Pte. Ltd. | Game state detection and configuration updating method and apparatus, device and storage medium |
CN114663424A (en) * | 2022-04-14 | 2022-06-24 | 联通(广东)产业互联网有限公司 | Endoscope video auxiliary diagnosis method, system, equipment and medium based on edge cloud cooperation |
CN115291967A (en) * | 2022-08-01 | 2022-11-04 | 中国人民解放军32039部队 | Space data analysis method and device and electronic equipment |
WO2022243735A1 (en) * | 2021-05-21 | 2022-11-24 | Sensetime International Pte. Ltd. | Edge computing-based control method and apparatus, edge device and storage medium |
CN116055078A (en) * | 2021-10-28 | 2023-05-02 | 北京金山云网络技术有限公司 | Media stream encryption method, device, storage medium and electronic equipment |
CN116594846A (en) * | 2023-07-14 | 2023-08-15 | 支付宝(杭州)信息技术有限公司 | Inference service monitoring method and device |
EP4250684A1 (en) * | 2022-03-22 | 2023-09-27 | Beijing Baidu Netcom Science And Technology Co. Ltd. | Method and apparatus for processing streaming media service, electronic device, and storage medium |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113610632B (en) * | 2021-08-11 | 2024-05-28 | 中国银行股份有限公司 | Bank outlet face recognition method and device based on blockchain |
CN113704063B (en) * | 2021-08-26 | 2024-05-14 | 北京百度网讯科技有限公司 | Performance monitoring method, device, equipment and storage medium of cloud mobile phone |
CN114461344A (en) * | 2021-09-17 | 2022-05-10 | 支付宝(杭州)信息技术有限公司 | Terminal keep-alive management method and device based on cloud edge terminal architecture |
CN115866162B (en) * | 2021-09-26 | 2024-07-16 | 中移雄安信息通信科技有限公司 | Video stream generation method and device, electronic equipment and storage medium |
CN114285837A (en) * | 2021-11-17 | 2022-04-05 | 杭州玖欣物联科技有限公司 | Edge-end multi-dimensional disordered data analysis method |
CN114363579B (en) * | 2022-01-21 | 2024-03-19 | 中国铁塔股份有限公司 | A monitoring video sharing method, device and electronic equipment |
CN114500536B (en) * | 2022-01-27 | 2024-03-01 | 京东方科技集团股份有限公司 | Cloud edge cooperation method, cloud edge cooperation system, cloud device, cloud platform equipment and cloud medium |
CN114860437B (en) * | 2022-04-29 | 2025-06-10 | 杭州义益钛迪信息技术有限公司 | Data acquisition method, edge computing host and computer readable storage medium |
CN115022660B (en) * | 2022-06-01 | 2024-03-19 | 上海哔哩哔哩科技有限公司 | Parameter configuration method and system for content distribution network |
CN115378888B (en) * | 2022-08-17 | 2023-08-08 | 深圳星云智联科技有限公司 | Data processing method, device, equipment and storage medium |
CN115361032B (en) * | 2022-08-17 | 2023-04-18 | 佛山市朗盛通讯设备有限公司 | Antenna unit for 5G communication |
CN115578815A (en) * | 2022-09-20 | 2023-01-06 | 京东方科技集团股份有限公司 | Access control management method, verification server, access control system and readable storage medium |
CN115798221B (en) * | 2022-11-11 | 2023-09-19 | 浙江特锐讯智能科技有限公司 | License plate rapid identification and analysis method and system based on edge calculation |
CN115934318B (en) * | 2022-11-16 | 2023-09-19 | 鹏橙网络技术(深圳)有限公司 | Staff file management method, system and device |
CN118075512A (en) * | 2022-11-22 | 2024-05-24 | 荣耀终端有限公司 | Method for calling algorithm, electronic equipment and readable storage medium |
CN115904719B (en) * | 2022-12-02 | 2023-12-08 | 杭州义益钛迪信息技术有限公司 | Data acquisition method and device, electronic equipment and storage medium |
CN116055686A (en) * | 2023-01-16 | 2023-05-02 | 北京智芯微电子科技有限公司 | Intelligent monitoring system and control method for transmission line scene |
CN116610834B (en) * | 2023-05-15 | 2024-04-12 | 三峡科技有限责任公司 | Monitoring video storage and quick query method based on AI analysis |
CN117097682B (en) * | 2023-10-19 | 2024-02-06 | 杭州义益钛迪信息技术有限公司 | Equipment access method, device, equipment and storage medium |
CN118784626B (en) * | 2024-06-11 | 2025-01-28 | 北京积加科技有限公司 | Security monitoring video transmission method, device, equipment and computer readable medium |
CN119052401B (en) * | 2024-08-07 | 2025-02-28 | 物链芯工程技术研究院(北京)股份有限公司 | A VR calling method and system based on blockchain technology |
CN118689654B (en) * | 2024-08-23 | 2024-11-15 | 北京云庐科技有限公司 | A method, system and medium for processing edge data |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160372156A1 (en) * | 2015-06-18 | 2016-12-22 | Apple Inc. | Image fetching for timeline scrubbing of digital media |
CN110390246A (en) * | 2019-04-16 | 2019-10-29 | 江苏慧中数据科技有限公司 | A kind of video analysis method in side cloud environment |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018131561A1 (en) * | 2017-01-10 | 2018-07-19 | 日本電気株式会社 | Mode determining device, method, network system, and program |
CN107491728A (en) * | 2017-07-11 | 2017-12-19 | 安徽大学 | A kind of human face detection method and device based on edge calculations model |
CN108877915A (en) * | 2018-06-07 | 2018-11-23 | 合肥工业大学 | The intelligent edge calculations system of minimally invasive video processing |
CN109361902A (en) * | 2018-11-19 | 2019-02-19 | 河海大学 | A smart helmet wearing monitoring system based on edge computing |
CN110782639A (en) * | 2019-10-28 | 2020-02-11 | 深圳奇迹智慧网络有限公司 | Abnormal behavior warning method, device, system and storage medium |
CN111241975B (en) * | 2020-01-07 | 2023-03-31 | 华南理工大学 | Face recognition detection method and system based on mobile terminal edge calculation |
2020
- 2020-06-17 CN CN202010557709.7A patent/CN111723727B/en active Active
- 2020-06-30 WO PCT/CN2020/099092 patent/WO2021151279A1/en active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160372156A1 (en) * | 2015-06-18 | 2016-12-22 | Apple Inc. | Image fetching for timeline scrubbing of digital media |
CN110390246A (en) * | 2019-04-16 | 2019-10-29 | 江苏慧中数据科技有限公司 | A kind of video analysis method in side cloud environment |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112307120A (en) * | 2020-10-29 | 2021-02-02 | 腾讯科技(深圳)有限公司 | Information management server, information management method, and information management system |
CN112601106A (en) * | 2020-11-16 | 2021-04-02 | 北京都是科技有限公司 | Video image processing method and device and storage medium |
CN112601106B (en) * | 2020-11-16 | 2022-11-15 | 北京都是科技有限公司 | Video image processing method and device and storage medium |
WO2022243735A1 (en) * | 2021-05-21 | 2022-11-24 | Sensetime International Pte. Ltd. | Edge computing-based control method and apparatus, edge device and storage medium |
WO2022096946A1 (en) * | 2021-06-16 | 2022-05-12 | Sensetime International Pte. Ltd. | Game state detection and configuration updating method and apparatus, device and storage medium |
CN113435368A (en) * | 2021-06-30 | 2021-09-24 | 青岛海尔科技有限公司 | Monitoring data identification method and device, storage medium and electronic device |
CN113435368B (en) * | 2021-06-30 | 2024-03-22 | 青岛海尔科技有限公司 | Monitoring data identification method and device, storage medium and electronic device |
CN113572997A (en) * | 2021-07-22 | 2021-10-29 | 中科曙光国际信息产业有限公司 | Video stream data analysis method, device, equipment and storage medium |
CN113949925A (en) * | 2021-10-15 | 2022-01-18 | 深圳市智此一游科技服务有限公司 | Video generation method and device and server |
CN114155570A (en) * | 2021-10-15 | 2022-03-08 | 深圳市智此一游科技服务有限公司 | Video generation method |
CN114143445A (en) * | 2021-10-15 | 2022-03-04 | 深圳市智此一游科技服务有限公司 | A video generation method |
CN113938747A (en) * | 2021-10-15 | 2022-01-14 | 深圳市智此一游科技服务有限公司 | Video generation method, device and server |
CN116055078A (en) * | 2021-10-28 | 2023-05-02 | 北京金山云网络技术有限公司 | Media stream encryption method, device, storage medium and electronic equipment |
CN114173087A (en) * | 2021-11-02 | 2022-03-11 | 上海三旺奇通信息科技有限公司 | Video data acquisition and processing method, edge gateway and storage medium |
CN114245077A (en) * | 2021-12-17 | 2022-03-25 | 广州西麦科技股份有限公司 | Intelligent alarm method based on opencv |
CN114245078A (en) * | 2021-12-17 | 2022-03-25 | 广州西麦科技股份有限公司 | Method and device for controlling field operation safety by applying various Al identification algorithms |
CN114245078B (en) * | 2021-12-17 | 2024-04-09 | 广州西麦科技股份有限公司 | Method and device for safely controlling field operation by using multiple Al recognition algorithms |
US20230319124A1 (en) * | 2022-03-22 | 2023-10-05 | Beijing Baidu Netcom Science Technology Co., Ltd. | Method and apparatus for processing streaming media service, electronic device, and storage medium |
US11902346B2 (en) * | 2022-03-22 | 2024-02-13 | Beijing Baidu Netcom Science Technology Co., Ltd. | Method and apparatus for processing streaming media service, electronic device, and storage medium |
EP4250684A1 (en) * | 2022-03-22 | 2023-09-27 | Beijing Baidu Netcom Science And Technology Co. Ltd. | Method and apparatus for processing streaming media service, electronic device, and storage medium |
CN114663424A (en) * | 2022-04-14 | 2022-06-24 | 联通(广东)产业互联网有限公司 | Endoscope video auxiliary diagnosis method, system, equipment and medium based on edge cloud cooperation |
CN114663424B (en) * | 2022-04-14 | 2025-01-24 | 联通(广东)产业互联网有限公司 | Endoscopic video-assisted diagnosis method, system, device and medium based on edge-cloud collaboration |
CN115291967B (en) * | 2022-08-01 | 2023-05-23 | 中国人民解放军32039部队 | Aerospace data analysis method and device and electronic equipment |
CN115291967A (en) * | 2022-08-01 | 2022-11-04 | 中国人民解放军32039部队 | Space data analysis method and device and electronic equipment |
CN116594846A (en) * | 2023-07-14 | 2023-08-15 | 支付宝(杭州)信息技术有限公司 | Inference service monitoring method and device |
Also Published As
Publication number | Publication date |
---|---|
WO2021151279A1 (en) | 2021-08-05 |
CN111723727B (en) | 2024-07-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111723727A (en) | Cloud monitoring method and device based on edge computing, electronic equipment and storage medium | |
CN114550076B (en) | Regional abnormal behavior monitoring method, device, equipment and storage medium | |
CN111950621B (en) | Target data detection method, device, equipment and medium based on artificial intelligence | |
CN112137591A (en) | Target object position detection method, device, equipment and medium based on video stream | |
CN114022841A (en) | Personnel monitoring and identifying method and device, electronic equipment and readable storage medium | |
CN113190703A (en) | Intelligent retrieval method and device for video image, electronic equipment and storage medium | |
CN113360803A (en) | Data caching method, device and equipment based on user behavior and storage medium | |
CN113868528A (en) | Information recommendation method, device, electronic device and readable storage medium | |
CN110807050B (en) | Performance analysis method, device, computer equipment and storage medium | |
CN115081538A (en) | Machine learning-based customer relationship identification method, device, equipment and medium | |
CN112631806A (en) | Asynchronous message arranging and scheduling method and device, electronic equipment and storage medium | |
CN111858604B (en) | Data storage method and device, electronic equipment and storage medium | |
CN111985545A (en) | Target data detection method, device, equipment and medium based on artificial intelligence | |
CN114677650B (en) | Intelligent analysis method and device for pedestrian illegal behaviors of subway passengers | |
CN113920582A (en) | Human body action scoring method, device, equipment and storage medium | |
CN112101191A (en) | Expression recognition method, device, equipment and medium based on frame attention network | |
CN115409041B (en) | Unstructured data extraction method, device, equipment and storage medium | |
CN113449037A (en) | AI-based SQL engine calling method, device, equipment and medium | |
CN112633170A (en) | Communication optimization method, device, equipment and medium | |
CN112528265A (en) | Identity recognition method, device, equipment and medium based on online conference | |
CN114172856B (en) | Message automatic replying method, device, equipment and storage medium | |
CN114723400B (en) | Service authorization management method, device, equipment and storage medium | |
CN113704405B (en) | Quality inspection scoring method, device, equipment and storage medium based on recorded content | |
CN112905817B (en) | Image retrieval method and device based on sorting algorithm and related equipment | |
CN114996386A (en) | Business role identification method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |