
CN114071078B - Video data processing method, electronic device and storage medium - Google Patents


Info

Publication number
CN114071078B
Authority
CN
China
Prior art keywords
camera
target
video data
information
edge computing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111238322.6A
Other languages
Chinese (zh)
Other versions
CN114071078A (en)
Inventor
杨术
吴晓峰
常晓磊
周立新
吴振洲
夏修理
蒋俊锋
杨华胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Research Institute Tsinghua University
China Resources Digital Technology Co Ltd
Original Assignee
Shenzhen Research Institute Tsinghua University
China Resources Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Research Institute Tsinghua University and China Resources Digital Technology Co Ltd
Priority to CN202111238322.6A
Publication of CN114071078A
Application granted
Publication of CN114071078B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/21805Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2385Channel allocation; Bandwidth allocation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26208Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists the scheduling operation being performed under constraints
    • H04N21/26216Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists the scheduling operation being performed under constraints involving the channel capacity, e.g. network bandwidth

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a video data processing method, an electronic device, and a storage medium, belonging to the technical field of artificial intelligence. The method includes: determining the data source size of each camera in a plurality of cameras; determining a target mapping relation according to the resource calculation overhead information and bandwidth overhead information of each edge computing node in a plurality of edge computing nodes and the data source size of each camera, where the target mapping relation indicates the mapping between edge computing nodes and cameras; and scheduling the video data of each camera according to the target mapping relation, so that each camera sends its video data to the edge computing node it is mapped to for processing. Because the video data is scheduled based on resource calculation cost, bandwidth cost, and data source size, the method saves bandwidth compared with having every camera send its video data to the cloud for processing, and avoids the processing delay caused by an excessive data volume.

Description

Video data processing method, electronic device and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to a method for processing video data, an electronic device, and a storage medium.
Background
With the rapid development of the information age, video data is increasingly used. For example, in some monitoring scenarios, video data is acquired by multiple cameras, after which the acquired video data may be processed to facilitate target detection and monitoring.
In the related art, video data is generally processed by having each camera send its video data to the cloud, where a computing module first optimizes the video data, for example by compression, then performs processing such as target detection and image recognition on the optimized video data, and finally sends the processing result to the user side.
However, this approach has the following problem: because every camera sends its video data to the cloud for processing, an excessive data volume causes processing delay, and the bandwidth overhead is large.
Disclosure of Invention
The embodiments of the present application provide a video data processing method, an electronic device, and a storage medium, which can alleviate the processing delay and high bandwidth consumption that arise when cameras transmit video to the cloud for processing. The technical solution is as follows:
in a first aspect, a method for processing video data is provided, applied to a scheduling node, and the method includes:
determining the data source size of each camera in a plurality of cameras; determining a target mapping relation according to the resource calculation overhead information and bandwidth overhead information of each edge computing node in a plurality of edge computing nodes and the data source size of each camera, where the target mapping relation indicates the mapping between edge computing nodes and cameras; and scheduling the video data of each camera according to the target mapping relation, so that each camera sends its video data to the edge computing node it is mapped to for processing.
As one example of the present application, the resource calculation overhead information includes a resource calculation single-frame cost and a target conversion rate, where the single-frame cost is the calculation cost required by the image processor to process a single video frame and the target conversion rate is the rate at which redundant video frames are eliminated; the bandwidth overhead information includes a bandwidth cost and a single-frame traffic, where the single-frame traffic is the traffic required to transmit a single video frame. Determining the target mapping relation according to the resource calculation overhead information and bandwidth overhead information of each edge computing node in the plurality of edge computing nodes and the data source size of each camera includes: determining the single-frame total cost of each edge computing node according to its resource calculation single-frame cost, target conversion rate, bandwidth cost, and single-frame traffic; and determining the target mapping relation according to the single-frame total cost of each edge computing node and the data source size of each camera.
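The patent gives no explicit cost formula or assignment procedure, so the sketch below is only one plausible reading: the single-frame total cost combines the GPU cost of the frames that survive redundancy elimination with the per-frame bandwidth cost, and cameras are then assigned greedily to the node whose accumulated cost would be lowest. Every function and parameter name here is an assumption.

```python
# Illustrative sketch only: the cost model and greedy assignment are assumed,
# not specified by the patent.
def single_frame_total_cost(gpu_cost, conversion_rate, bandwidth_cost, frame_traffic):
    # Frames kept after redundancy elimination incur GPU cost; every frame
    # transmitted incurs bandwidth cost proportional to its traffic.
    return conversion_rate * gpu_cost + bandwidth_cost * frame_traffic

def build_target_mapping(nodes, cameras):
    """nodes: {node_id: cost parameters}; cameras: {camera_id: data_source_size}.
    Assigns each camera (largest data source first) to the node whose
    accumulated total cost after the assignment would be lowest."""
    load = {n: 0.0 for n in nodes}
    mapping = {}
    for cam, size in sorted(cameras.items(), key=lambda kv: -kv[1]):
        cost = {n: size * single_frame_total_cost(**nodes[n]) for n in nodes}
        best = min(nodes, key=lambda n: load[n] + cost[n])
        mapping[cam] = best
        load[best] += cost[best]
    return mapping
```

A node with a cheap GPU but expensive link (or vice versa) is thus compared on one combined per-frame figure, which matches the patent's "single frame total cost" wording.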
As an example of the present application, scheduling the video data of each camera according to the target mapping relation includes: sending, based on the target mapping relation, the address information of each edge computing node to the cameras mapped to that node, so that each camera sends its video data to the corresponding edge computing node based on the received address information.
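This dispatch step can be sketched minimally as follows; the message format and the send() transport are assumptions, not part of the patent.

```python
# Hypothetical sketch: push each edge node's address to the cameras mapped to it.
def dispatch_addresses(target_mapping, node_addresses, send):
    """target_mapping: {camera_id: node_id}; node_addresses: {node_id: address};
    send(camera_id, message) delivers a message to a camera."""
    for camera_id, node_id in target_mapping.items():
        send(camera_id, {"type": "edge_node_address",
                         "address": node_addresses[node_id]})
```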
As an example of the present application, at least one camera set is stored in the scheduling node, the at least one camera set being obtained by grouping the plurality of cameras. The method further includes: receiving a configuration set query request sent by a first edge computing node, where the first edge computing node is any one of the plurality of edge computing nodes and the request carries target camera information; querying, from the at least one camera set, the camera set to which the target camera information belongs, to obtain a first camera set; acquiring a target configuration set stored in correspondence with the first camera set, where the target configuration set includes the configuration parameters required for video data processing; and sending a configuration set query response carrying the target configuration set to the first edge computing node.
As an example of the present application, the method further includes: receiving the camera information, camera position information, and target video data sent by each camera, to obtain a plurality of camera information items, camera position information items, and target video data items, where the size of each piece of target video data is a specified threshold; randomly selecting one piece of camera information from the plurality of camera information items; determining the camera information to be grouped, namely the camera information that has not yet been grouped, other than the currently selected camera information; determining, from the camera information to be grouped and according to its corresponding camera position information, all camera information whose distance from the position of the currently selected camera information is smaller than a distance threshold; acquiring the target video data corresponding to the currently selected camera information and to each piece of the determined camera information; determining, from the acquired target video data, the target video data that contain the same target; determining the camera information corresponding to the target video data with the same target as one group, to obtain a camera set; and if the number of ungrouped camera information items is greater than or equal to a number threshold, reselecting one piece of camera information from the ungrouped camera information and returning to the step of determining the camera information to be grouped, until the number of ungrouped camera information items is smaller than the number threshold.
In a second aspect, there is provided a method of processing video data for use in a first edge computing node, the method comprising:
The method includes receiving video data sent by a target camera, where the video data is sent by the target camera according to the scheduling result after the scheduling node schedules a plurality of cameras based on a target mapping relation; the target mapping relation indicates the mapping between edge computing nodes and cameras and is determined based on the resource calculation overhead information and bandwidth overhead information of each edge computing node in the plurality of edge computing nodes and the data source size of each camera in the plurality of cameras.
As an example of the present application, the camera set to which the target camera belongs is determined according to the target camera information of the target camera, obtaining a first camera set. If the video data corresponding to the camera information included in the first camera set has not yet been processed, a target configuration set is determined according to the first specified number of video frames of the video data sent by the target camera, where the target configuration set includes the configuration parameters required for video data processing. The video data sent by the target camera is then processed according to the target configuration set.
As an example of the present application, if the video data corresponding to the camera information included in the first camera set has already been processed, a configuration set query request carrying the target camera information is sent to the scheduling node, and a configuration set query response carrying the target configuration set is received from the scheduling node.
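The edge node's choice between deriving a configuration set locally and querying the scheduling node can be sketched as follows. The class, the method names, and the processed-set cache are assumptions, and derive_config stands in for the derivation from the first K frames, which the patent leaves unspecified.

```python
def derive_config(frames):
    # Placeholder: the patent does not specify how configuration parameters
    # are derived from the first K video frames of a camera's stream.
    return {"frames_analyzed": len(frames)}

class EdgeNode:
    def __init__(self, scheduler):
        self.scheduler = scheduler       # client for the scheduling node (assumed API)
        self.processed_sets = set()      # camera sets whose video has been processed

    def get_config_set(self, camera_info, camera_sets, first_frames):
        # Find the camera set (group) the target camera belongs to.
        group = next(s for s in camera_sets if camera_info in s)
        if group not in self.processed_sets:
            # No video from this group processed yet: derive the target
            # configuration set from the first frames and remember the group.
            self.processed_sets.add(group)
            return derive_config(first_frames)
        # Video from this group was already processed: ask the scheduling
        # node for the stored target configuration set.
        return self.scheduler.query_config_set(camera_info)
```

The point of the split is that only the first camera in a group pays the cost of deriving a configuration; later cameras in the same group reuse it via the scheduling node.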
In a third aspect, there is provided a processing apparatus for video data, configured to a scheduling node, the apparatus comprising:
a determining module, configured to determine the data source size of each camera in the plurality of cameras, and to determine a target mapping relation according to the resource calculation overhead information and bandwidth overhead information of each edge computing node in the plurality of edge computing nodes and the data source size of each camera, where the target mapping relation indicates the mapping between edge computing nodes and cameras;
and a scheduling module, configured to schedule the video data of each camera according to the target mapping relation, so that each camera sends its video data to the edge computing node it is mapped to for processing.
As one example of the present application, the resource calculation overhead information includes a resource calculation single-frame cost and a target conversion rate, where the single-frame cost is the calculation cost required by an image processor to process a single video frame and the target conversion rate is the rate at which redundant video frames are eliminated; the bandwidth overhead information includes a bandwidth cost and a single-frame traffic, where the single-frame traffic is the traffic required to transmit a single video frame;
the determining module is used for:
According to the resource calculation single-frame cost, the target conversion rate, the bandwidth cost and the single-frame flow of each edge calculation node, determining the single-frame total cost corresponding to each edge calculation node;
and determining the target mapping relation according to the single frame total cost corresponding to each edge computing node and the data source size of each camera.
As an example of the present application, the scheduling module is configured to:
send, based on the target mapping relation, the address information of each edge computing node to the cameras mapped to that node, so that each camera sends its video data to the corresponding edge computing node based on the received address information.
As an example of the present application, the scheduling node stores therein at least one camera set obtained by grouping the plurality of cameras;
the determining module is further configured to:
receive a configuration set query request sent by a first edge computing node, where the first edge computing node is any one of the plurality of edge computing nodes and the request carries target camera information;
query, from the at least one camera set, the camera set to which the target camera information belongs, to obtain a first camera set;
acquire a target configuration set stored in correspondence with the first camera set, where the target configuration set includes the configuration parameters required for video data processing;
and send a configuration set query response carrying the target configuration set to the first edge computing node.
As an example of the present application, the determining module is further configured to:
receive the camera information, camera position information, and target video data sent by each camera, to obtain a plurality of camera information items, camera position information items, and target video data items, where the size of each piece of target video data is a specified threshold;
randomly select one piece of camera information from the plurality of camera information items;
determine the camera information to be grouped, namely the camera information that has not yet been grouped, other than the currently selected camera information;
determine, from the camera information to be grouped and according to its corresponding camera position information, all camera information whose distance from the position of the currently selected camera information is smaller than a distance threshold;
acquire the target video data corresponding to the currently selected camera information and to each piece of the determined camera information;
determine, from the acquired target video data, the target video data that contain the same target;
determine the camera information corresponding to the target video data with the same target as one group, to obtain a camera set;
and if the number of ungrouped camera information items is greater than or equal to a number threshold, reselect one piece of camera information from the ungrouped camera information and return to the step of determining the camera information to be grouped, until the number of ungrouped camera information items is smaller than the number threshold.
In a fourth aspect, there is provided a processing apparatus for video data, configured at a first edge computing node, the apparatus comprising:
a receiving module, configured to receive video data sent by a target camera, where the video data is sent by the target camera according to the scheduling result after the scheduling node schedules a plurality of cameras based on a target mapping relation; the target mapping relation indicates the mapping between edge computing nodes and cameras and is determined based on the resource calculation overhead information and bandwidth overhead information of each edge computing node in the plurality of edge computing nodes and the data source size of each camera in the plurality of cameras.
As an example of the present application, the receiving module is further configured to:
determine the camera set to which the target camera belongs according to the target camera information of the target camera, to obtain a first camera set;
if the video data corresponding to the camera information included in the first camera set has not yet been processed, determine a target configuration set according to the first specified number of video frames of the video data sent by the target camera, where the target configuration set includes the configuration parameters required for video data processing;
and process the video data sent by the target camera according to the target configuration set.
As an example of the present application, the receiving module is further configured to:
if the video data corresponding to the camera information included in the first camera set has already been processed, send a configuration set query request carrying the target camera information to the scheduling node;
and receive a configuration set query response carrying the target configuration set from the scheduling node.
In a fifth aspect, there is provided an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method according to any of the first aspects or the second aspects when executing the computer program.
In a sixth aspect, there is provided a computer-readable storage medium having instructions stored thereon which, when executed by a processor, implement the method of any one of the first aspects above or the method of any one of the second aspects above.
In a seventh aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any one of the first aspects above or the method of any one of the second aspects above.
The technical scheme provided by the embodiment of the application has the beneficial effects that:
The video data processing method provided by the application determines the data source size of each camera in a plurality of cameras and determines the target mapping relation between cameras and edge computing nodes according to the data source size and the resource calculation overhead information and bandwidth overhead information of each edge computing node in a plurality of edge computing nodes. The video data of each camera is then distributed to the corresponding edge computing node for processing according to the target mapping relation. Because the resource calculation cost, the bandwidth cost, and the data source size are all taken into account when scheduling the video data, the method saves bandwidth compared with having every camera send its video data to the cloud for processing, and avoids the processing delay caused by an excessive data volume.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a system framework diagram of a video data processing method according to an exemplary embodiment;
FIG. 2 is a flowchart of a video data processing method according to an exemplary embodiment;
FIG. 3 is a diagram of a grouping result according to an exemplary embodiment;
FIG. 4 is a schematic framework diagram of a video data processing method according to an exemplary embodiment;
FIG. 5 is a flowchart of another video data processing method according to an exemplary embodiment;
FIG. 6 is a system block diagram of a first edge computing node according to an exemplary embodiment;
FIG. 7 is a flowchart of another video data processing method according to an exemplary embodiment;
FIG. 8 is a schematic structural diagram of a video data processing apparatus according to an exemplary embodiment;
FIG. 9 is a schematic structural diagram of another video data processing apparatus according to an exemplary embodiment;
FIG. 10 is a schematic structural diagram of an electronic device according to an exemplary embodiment.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
It should be understood that references to "a plurality" in this disclosure mean two or more. In the description of the present application, "/" means "or" unless otherwise indicated; for example, A/B may represent A or B. "And/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B both exist, or B exists alone. In addition, to describe the technical solution of the present application clearly, the words "first", "second", and the like are used to distinguish between identical or similar items having substantially the same function. Those skilled in the art will appreciate that the words "first", "second", and the like do not limit quantity or order of execution, and that items described as "first" and "second" are not necessarily different.
Before describing the video data processing method provided by the embodiments of the present application in detail, the execution bodies involved in the embodiments are briefly introduced.
Referring to fig. 1, the processing system of the present application includes a plurality of cameras 01, a scheduling node 02, and a plurality of edge computing nodes 03. Each camera 01 of the plurality of cameras 01 is in communication connection with the scheduling node 02, the scheduling node 02 is also in communication connection with each edge computing node 03 of the plurality of edge computing nodes 03, and in addition, the communication connection between each camera 01 of the plurality of cameras 01 and each edge computing node 03 can be also established.
Each camera 01 may be used to collect video data, and may be, for example, an infrared camera, a gray-scale camera, an RGB camera, or the like, which is not limited in this embodiment.
The scheduling node 02 is configured to schedule video data of the plurality of cameras 01, so as to schedule the video data of the plurality of cameras 01 to different edge computing nodes 03 for processing. As an example, the scheduling node 02 may be a server or a server cluster having computing, storage, and communication functions, which is not limited in this embodiment.
Each edge computing node 03 may be a computer or other electronic device with video data processing capabilities, which is not limited in this embodiment.
Having introduced the execution bodies involved in the embodiments of the present application, the video data processing method provided by the embodiments is described in detail below with reference to the accompanying drawings.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for processing video data according to an exemplary embodiment, where the method may be applied to the processing system, and the method may include the following implementation steps:
step 201: the scheduling node receives camera information, camera position information and target video data sent by each camera, and the size of each target video data is a specified threshold.
The camera information is used to uniquely indicate one camera, and may be, for example, a camera ID, a camera name, or the like.
The camera position information may be used to indicate where the corresponding camera is located.
The target video data may refer to the first K frames of video data of the corresponding camera, K being an integer greater than 1. As one example of the application, the target video data may be used for subsequent camera grouping and for determining the video source size.
The specified threshold may be set according to actual requirements, which is not limited in the embodiment of the present application.
In one example, after each camera establishes a communication connection with the scheduling node, the scheduling node can determine the position of each camera through an automatic discovery function, thereby obtaining each camera's position information. Each camera then sends its camera information, camera position information, and target video data to the scheduling node. In this way, the scheduling node obtains a plurality of camera information items, camera position information items, and target video data items.
Step 202: the scheduling node performs packet processing on the plurality of camera information.
As an example of the present application, a specific implementation of packet processing may include the following sub-steps:
1. One piece of camera information is randomly selected from the plurality of camera information.
Illustratively, assume that the plurality of camera information includes: Camera1, Camera2, Camera3, Camera4, Camera5, Camera6, Camera7, and Camera8. One piece of camera information is randomly selected from them; assume the selected piece is Camera3.
2. And determining camera information to be grouped, wherein the camera information to be grouped refers to camera information which is not grouped except for the currently selected camera information in the plurality of camera information.
Illustratively, the camera information to be grouped includes: camera1, camera2, camera4, camera5, camera6, camera7, and Camera8.
3. And determining all the camera information of which the distance of the camera position information corresponding to the currently selected camera information is smaller than a distance threshold value from the camera information to be grouped according to the camera position information corresponding to the camera information to be grouped.
The distance threshold value can be set according to actual requirements.
As described above, since each camera transmits the respective camera position information to the scheduling node, the scheduling node may determine, according to the camera position information corresponding to the camera information to be grouped, the camera information having a distance smaller than the distance threshold value from the camera position information corresponding to the currently selected camera information. The camera position information corresponding to the currently selected camera information may be pre-classified into a group since their distance is less than a distance threshold, and their distance may be determined to be relatively close.
Illustratively, assuming that the distance between the camera position information of each of Camera1, Camera2, and Camera4 and the camera position information of Camera3 is calculated to be smaller than the distance threshold, Camera1, Camera2, Camera3, and Camera4 may be pre-divided into a group, as shown by 31 in fig. 3.
As an example of the present application, when grouping according to camera position information, a maximum threshold may be set, that is, the number of camera information in a group is less than or equal to the maximum threshold, so as to avoid, as far as possible, placing all the camera information into a single group. The maximum threshold can be set according to actual requirements.
4. Acquire the target video data corresponding to the currently selected camera information and to each piece of the determined camera information.
Illustratively, the target video data of each of Camera1, Camera2, Camera3, and Camera4 is acquired.
5. From the acquired target video data corresponding to each piece of camera information, determine the target video data that contains the same target.
As one example of the present application, the target included in each piece of target video data can be determined by performing target detection on the target video data of each of Camera1, Camera2, Camera3, and Camera4.
6. Determine the camera information corresponding to the target video data containing the same target as one group, obtaining a camera set.
If some target video data includes the same target, the fields of view monitored by the cameras corresponding to the target video data overlap, and since the installation positions of the cameras are generally different, the same target can be understood as being photographed from different angles. To facilitate subsequent accurate recognition of the target, the cameras capturing the same target are divided into one group. For example, assuming that the same target is present in the target video data of Camera1, Camera2, and Camera3 but not in that of Camera4, Camera1, Camera2, and Camera3 are determined as one group, and the resulting camera set includes Camera1, Camera2, and Camera3, as shown by 32 in fig. 3.
7. If the number of ungrouped camera information is greater than or equal to a number threshold, reselect one piece of camera information from the ungrouped camera information and return to the step of determining the camera information to be grouped, until the number of ungrouped camera information is smaller than the number threshold.
Wherein the number threshold can be set according to actual requirements.
For example, after the grouping, if the number of ungrouped camera information is greater than or equal to the number threshold, there is still a relatively large amount of ungrouped camera information. In this case, one piece of camera information is randomly selected from the ungrouped camera information Camera4, Camera5, Camera6, Camera7, and Camera8, and the process returns to step 2 to continue grouping according to the above flow. When the number of ungrouped camera information becomes smaller than the number threshold, only a small amount of ungrouped camera information remains; in that case the ungrouped camera information may be ignored, i.e., the grouping ends.
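The grouping sub-steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function and parameter names are assumptions, and the caller-supplied `share_target` predicate stands in for the target-detection comparison of sub-steps 5 and 6.

```python
import math
import random

def group_cameras(cameras, distance_threshold, count_threshold, max_group_size,
                  share_target):
    """Pre-group cameras by position, then confirm each group by shared targets.

    `cameras` maps a camera id to its (x, y) position; `share_target(a, b)`
    reports whether the target video data of cameras `a` and `b` contain the
    same target (in the patent, determined via target detection).
    """
    ungrouped = dict(cameras)
    groups = []
    while len(ungrouped) >= count_threshold:
        # Sub-step 1: randomly select one piece of camera information.
        seed = random.choice(list(ungrouped))
        sx, sy = ungrouped.pop(seed)
        # Sub-step 3: pre-group every ungrouped camera within the distance
        # threshold of the seed, capped by the maximum threshold on group size.
        nearby = [cid for cid, (x, y) in ungrouped.items()
                  if math.hypot(x - sx, y - sy) < distance_threshold]
        nearby = nearby[:max_group_size - 1]
        # Sub-steps 5 and 6: keep only cameras whose video shows the same
        # target; the rest return to the ungrouped pool (like Camera4 above).
        group = [seed] + [cid for cid in nearby if share_target(seed, cid)]
        for cid in group[1:]:
            del ungrouped[cid]
        groups.append(group)
    # Sub-step 7: cameras still in `ungrouped` here are ignored.
    return groups
```

In the running example, a call with the eight cameras' positions and a target-detection-backed `share_target` would yield a set such as {Camera1, Camera2, Camera3}, with the small remainder left ungrouped.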
After the grouping is finished, the scheduling node may schedule the video data of the plurality of cameras so that each camera sends its video data to a corresponding edge computing node for processing; for a specific implementation, see the following steps 203 to 206.
Step 203: the scheduling node determines a data source size for each of the plurality of cameras.
In one embodiment, when each of the plurality of cameras establishes a communication connection with the scheduling node, each camera sends target video data to the scheduling node, and the scheduling node can determine the data source size of each camera according to the target video data sent by that camera.
In another embodiment, each of the plurality of cameras may autonomously report the data source size of the video data it intends to transmit to the scheduling node after establishing a communication connection with the scheduling node.
Step 204: the scheduling node determines a target mapping relationship according to the resource computation overhead information and bandwidth overhead information of each of the plurality of edge computing nodes and the data source size of each camera. The target mapping relationship is used for indicating the mapping relationship between edge computing nodes and cameras.
Since transmitting video data consumes bandwidth and processing video data consumes certain computing resources, the scheduling node determines the mapping relationship between cameras and edge computing nodes according to the resource computation overhead information and bandwidth overhead information of each of the plurality of edge computing nodes and the data source size of each camera.
As one example of the application, the resource computation overhead information of the edge computing node includes a resource calculation single frame cost and a target conversion rate, where the resource calculation single frame cost is the computation cost required for a graphics processing unit (GPU) to process a single video frame and the target conversion rate is the conversion rate for eliminating redundant video frames.
As one example of the application, the bandwidth overhead information of the edge computing node includes bandwidth cost and single frame traffic, which refers to the traffic required to transmit a single video frame.
As an example of the present application, a specific implementation of step 204 may include: determining, according to the resource calculation single frame cost, the target conversion rate, the bandwidth cost, and the single frame traffic of each edge computing node, the single frame total cost corresponding to each edge computing node; and determining the target mapping relationship according to the single frame total cost corresponding to each edge computing node and the data source size of each camera.
As an example of the present application, the single frame total cost corresponding to each edge computing node may be determined according to the following formula (1):
Where t represents the single frame total cost, η represents the single frame traffic, S A represents the bandwidth cost, and S B represents the resource calculation single frame cost; the remaining symbol in formula (1) represents the target conversion rate.
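Formula (1) itself appears only as an image in the original publication. Based solely on the variable definitions above, one plausible reconstruction is the following; this is an assumption, including the choice of ρ as the symbol for the target conversion rate:

```latex
% Hypothetical form of formula (1): a bandwidth term plus a computation term.
% \rho (target conversion rate) is an assumed symbol; the published image of
% the formula may differ from this reconstruction.
t = \eta\, S_A + \rho\, S_B \tag{1}
```

Under this reading, the single frame total cost is the traffic-weighted bandwidth cost plus the conversion-rate-weighted resource calculation cost.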
As an example of the application, the target mapping relation is determined by a greedy algorithm according to the single frame total cost corresponding to each edge computing node and the data source size of each camera.
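The greedy determination of the target mapping relationship can be sketched as follows. The patent does not spell out the greedy criterion, so the rule below, assigning the largest data source to the node whose accumulated cost would grow least, is one plausible reading; all names are illustrative.

```python
def greedy_mapping(cameras, nodes):
    """Greedily map cameras to edge computing nodes.

    `cameras` maps camera id -> data source size (e.g. frames per second);
    `nodes` maps node id -> single frame total cost (formula (1) output).
    """
    load = {node_id: 0.0 for node_id in nodes}  # accumulated cost per node
    mapping = {}
    # Place the largest data sources first, while every node is still
    # lightly loaded; ties keep the input order because sorted() is stable.
    for camera, size in sorted(cameras.items(), key=lambda kv: -kv[1]):
        node_id = min(nodes, key=lambda n: load[n] + size * nodes[n])
        mapping[camera] = node_id
        load[node_id] += size * nodes[node_id]
    return mapping
```

With equal per-node costs this degenerates to balancing load across nodes, which matches the example mapping given below for Device1 to Device3.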
It should be noted that the foregoing describes an example in which the single frame total cost corresponding to each edge computing node is determined by formula (1) according to the resource calculation single frame cost, the target conversion rate, the bandwidth cost, and the single frame traffic of each edge computing node. In another embodiment, the resource computation overhead information of the edge computing node may further include a maximum support capacity, which refers to the maximum number of frames of video data that the edge computing node can support. In this case, the single frame total cost corresponding to each edge computing node is determined through formula (2) according to the resource calculation single frame cost, the target conversion rate, the maximum support capacity, the bandwidth cost, and the single frame traffic of each edge computing node:
Where N represents the maximum supported capacity.
Step 205: the scheduling node schedules the video data of each camera according to the target mapping relation, so that each camera sends the respective video data to an edge computing node with the mapping relation with each camera for processing.
As an example of the present application, a specific implementation of step 205 may include: sending, based on the target mapping relationship, the address information of each edge computing node to the cameras having a mapping relationship with that edge computing node, so that each camera sends its video data to the corresponding edge computing node based on the received address information.
Illustratively, assume that the plurality of cameras includes Camera1, Camera2, Camera3, Camera4, Camera5, Camera6, Camera7, and Camera8, and the edge computing nodes include Device1, Device2, and Device3. In the target mapping relationship, Camera1 and Camera2 have a mapping relationship with Device1, Camera3, Camera4, and Camera5 have a mapping relationship with Device2, and Camera6, Camera7, and Camera8 have a mapping relationship with Device3. The scheduling node transmits the address information of Device1 to Camera1 and Camera2, the address information of Device2 to Camera3, Camera4, and Camera5, and the address information of Device3 to Camera6, Camera7, and Camera8.
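The address distribution in this example can be sketched as follows; the function name and the address strings are assumptions for illustration, while the mapping itself is taken from the example above.

```python
def distribute_addresses(mapping, addresses):
    """Invert the camera -> edge-node mapping so that each camera receives
    the address information of the node it must send its video data to."""
    return {camera: addresses[node] for camera, node in mapping.items()}

# Target mapping relationship from the example above.
target_mapping = {
    "Camera1": "Device1", "Camera2": "Device1",
    "Camera3": "Device2", "Camera4": "Device2", "Camera5": "Device2",
    "Camera6": "Device3", "Camera7": "Device3", "Camera8": "Device3",
}
node_addresses = {  # illustrative address information
    "Device1": "10.0.0.1", "Device2": "10.0.0.2", "Device3": "10.0.0.3",
}
per_camera = distribute_addresses(target_mapping, node_addresses)
```

Each camera then opens a connection to the single address it received, so no camera needs global knowledge of the mapping.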
Step 206: the camera sends the video data to the corresponding edge computing node.
After the scheduling node finishes scheduling, each camera can send its video data, based on the received address information, to the edge computing node with which it has a mapping relationship for processing. For example, Camera1 transmits its video data to Device1 for processing, and Camera3 transmits its video data to Device2 for processing.
Step 207: the first edge computing node receives video data sent by the target camera.
The first edge computing node is any one of a plurality of edge computing nodes, that is, the processing procedure of the video data is described herein by taking any one edge computing node as an example.
The video data is sent by the target camera after the scheduling node schedules the plurality of cameras based on a target mapping relationship, where the target mapping relationship is used for indicating the mapping relationship between edge computing nodes and cameras and is determined based on the resource computation overhead information and bandwidth overhead information of each of the plurality of edge computing nodes and the data source size of each of the plurality of cameras. For a specific implementation, see the foregoing.
Step 208: and the first edge computing node determines a camera set to which the target camera belongs according to the target camera information of the target camera, so as to obtain a first camera set.
The target camera information refers to camera information of a target camera transmitting video data, and the target camera information may be transmitted by the target camera to the first edge computing node. The number of target camera information may be one or more, i.e. the scheduling node may schedule video data of one or more target cameras to the first edge computing node. Next, an example will be described.
In one example, the first edge computing node may query the dispatch node for its belonging camera set based on the target camera information. In another example, the scheduling node may also send at least one camera set to the first edge computing node in advance, where in this case, the first edge computing node may determine, according to the at least one camera set obtained in advance, a camera set to which the target camera information belongs, which is not limited in the embodiment of the present application.
Step 209: and judging whether the first edge computing node processes video data corresponding to camera information included in the first camera set.
As an example of the present application, the judging process may include: judging whether a tag corresponding to the first camera set exists locally, where the tag is used for indicating that video data corresponding to camera information included in the first camera set has been processed. If such a tag exists, it is determined that the video data corresponding to the camera information included in the first camera set has been processed; if not, it is determined that the video data has not been processed.
That is, after the first edge computing node processes the video data corresponding to the camera information included in a certain camera set, the tag corresponding to that camera set may be stored locally to facilitate subsequent judging operations.
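The tag bookkeeping of step 209 can be sketched as follows; all names are illustrative rather than taken from the patent.

```python
class TagStore:
    """Local record of which camera sets an edge node has already handled."""

    def __init__(self):
        self._tags = set()  # one tag per processed camera set

    def has_processed(self, camera_set_id):
        # Step 209: a locally stored tag indicates that video data
        # corresponding to this camera set was processed before.
        return camera_set_id in self._tags

    def mark_processed(self, camera_set_id):
        # Stored after the first processing to speed up later judgments.
        self._tags.add(camera_set_id)
```

A set lookup keeps the judgment O(1) regardless of how many camera sets the node has seen.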
Step 210: and acquiring a target configuration set according to the judging result, wherein the target configuration set comprises configuration parameters required by video data processing.
Illustratively, the target configuration set includes a frame rate, a resolution, and the like.
Specific implementations of step 210 may include the following two cases:
First case: if the first edge computing node has not processed the video data corresponding to the camera information included in the first camera set, the target configuration set is determined according to a specified number of leading video frames of the video data sent by the target camera.
The specified number can be set according to actual requirements. Illustratively, it is the first 10% of the video frames.
As an example of the present application, a plurality of preset configuration sets exist in the first edge computing node. If the first edge computing node has not processed the video data corresponding to the camera information included in the first camera set, the first edge computing node can acquire the first 10% of the video frames of the video data of the target camera and process those video frames based on each of the plurality of configuration sets, obtaining processed video frames corresponding to each configuration set. Then, the resource computing overhead required by the first edge computing node to process the video frames under each configuration set is determined, and the configuration parameters are taken from the configuration set corresponding to the maximum resource computing overhead to form the target configuration set.
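The first case can be sketched as follows; wall-clock time stands in for the GPU resource computing overhead, and the function and parameter names are assumptions.

```python
import time

def choose_target_config(frames, config_sets, process):
    """Run the leading video frames under each preset configuration set,
    measure the resource computing overhead of each run, and keep the
    configuration set with the maximum overhead, as the text describes.
    `process(frame, config)` is a caller-supplied processing routine.
    """
    sample = frames[: max(1, len(frames) // 10)]  # the "first 10%" of frames
    best_config, best_cost = None, -1.0
    for config in config_sets:
        start = time.perf_counter()
        for frame in sample:
            process(frame, config)
        cost = time.perf_counter() - start  # proxy for resource overhead
        if cost > best_cost:
            best_config, best_cost = config, cost
    return best_config
```

Because only the leading sample is profiled, the remaining 90% of the stream is processed once, under the chosen configuration set.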
Second case: if the first edge computing node has processed the video data corresponding to the camera information included in the first camera set, a configuration set query request carrying the target camera information is sent to the scheduling node, and a configuration set query response carrying the target configuration set is received from the scheduling node.
That is, if the first edge computing node has processed video data corresponding to camera information included in the first camera set, it is explained that the target configuration set corresponding to the first camera set has been determined before. In one possible implementation manner, the target configuration set is stored in the scheduling node, so that the first edge computing node sends a configuration set query request to the scheduling node, and carries target camera information in the configuration set query request to instruct the scheduling node to feed back the target configuration set corresponding to the first camera set according to the target camera information.
Correspondingly, the scheduling node receives a configuration set query request sent by the first edge computing node. And inquiring the camera set to which the target camera information belongs from at least one camera set to obtain a first camera set. And acquiring a target configuration set stored corresponding to the first camera set. And sending a configuration set query response to the first edge computing node, wherein the configuration set query response carries the target configuration set.
That is, after receiving the configuration set query request sent by the first edge computing node, the scheduling node determines, according to the target camera information carried in the configuration set query request, the first camera set to which the target camera information belongs, so as to obtain the target configuration set corresponding to the first camera set and send it to the first edge computing node.
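The scheduling node's side of this query can be sketched as follows; names and data shapes are assumptions for illustration.

```python
def handle_config_query(camera_sets, stored_configs, target_camera):
    """Find the camera set the queried camera belongs to, then return the
    target configuration set stored for that set, or None if absent.

    `camera_sets` maps a set id -> the camera ids in the set;
    `stored_configs` maps a set id -> its stored configuration set.
    """
    for set_id, members in camera_sets.items():
        if target_camera in members:
            return stored_configs.get(set_id)
    return None
```

Returning None models the case where no configuration set has been stored yet, in which case the edge node would fall back to the first case and determine one from the leading video frames.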
It should be noted that, the foregoing describes an example in which the configuration set query request carries the target camera information, and in another embodiment, the configuration set query request may also carry the set information (such as the set identifier) of the first camera set, so that the scheduling node may query the target configuration set directly according to the set information of the first camera set.
In addition, the foregoing description takes querying the target configuration set from the scheduling node as an example. In another embodiment, the first edge computing node has storage capability and the target configuration set is stored in the first edge computing node, in which case the first edge computing node may obtain the target configuration set locally.
Step 211: the first edge computing node processes the video data based on the target configuration set.
In the first case of step 210, the first edge computing node processes the subsequent video data based on the target configuration set; in the second case of step 210, the first edge computing node directly processes the video data based on the target configuration set.
As an example of the present application, a specific implementation of step 211 may include: and performing key frame extraction and resolution adjustment on the video data based on the target configuration set to obtain the processed video data.
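Key frame extraction and resolution adjustment can be sketched as follows. A real edge node would use a video library and hardware decoding; the nested-list frame representation and the parameter names here are purely illustrative.

```python
def preprocess(frames, frame_step, scale):
    """Sketch of step 211: key frame extraction (keep every `frame_step`-th
    frame) followed by resolution adjustment (subsample pixel rows and
    columns by `scale`)."""
    key_frames = frames[::frame_step]                 # key frame extraction
    return [[row[::scale] for row in frame[::scale]]  # resolution adjustment
            for frame in key_frames]
```

Both parameters would come from the target configuration set (e.g. frame rate and resolution), so the same routine serves every configuration the node evaluates.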
Further, target recognition may be performed on the obtained processed video data; for example, the processed video data may be input into a target network model, and a target recognition result may be output. The target network model is capable of performing target recognition based on arbitrary video data.
Further, the recognition accuracy of the target recognition result can be judged, and if the recognition accuracy of the target recognition result is greater than or equal to a specified threshold, the video data processing is completed. The video data may be output at this time, and further, the target recognition result may be output, for example, to the cloud end. Wherein the specified threshold can be set according to actual requirements.
If the recognition accuracy of the target recognition result is less than the specified threshold, it indicates that the target configuration set is not suitable, in which case the target configuration set may be re-acquired according to the first case of step 210, and then video data processing may be performed again according to the re-acquired target configuration set.
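The accuracy feedback described above can be sketched as follows; the retry cap and all names are assumptions added for illustration.

```python
def recognize_with_feedback(acquire_config, run_recognition, accuracy_threshold,
                            max_rounds=3):
    """If the recognition accuracy of the target recognition result falls
    below the specified threshold, re-acquire the target configuration set
    and process again. `run_recognition(config)` returns a
    (result, accuracy) pair from the target network model.
    """
    for _ in range(max_rounds):
        config = acquire_config()
        result, accuracy = run_recognition(config)
        if accuracy >= accuracy_threshold:
            return result  # processing of the video data is completed
    return None  # no configuration set reached the required accuracy
```

The cap on rounds is a safeguard not stated in the text, preventing an unsuitable stream from looping indefinitely through configuration re-acquisition.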
In the embodiment of the application, the data source size of each camera in the plurality of cameras is determined, and the target mapping relation between the cameras and the edge computing nodes is determined according to the data source size and the resource computing overhead information and the bandwidth overhead information of each edge computing node in the plurality of edge computing nodes. And then distributing the video data of the camera to the corresponding edge computing nodes for processing according to the target mapping relation. Therefore, when the video data is scheduled, as the resource calculation cost, the bandwidth cost and the data source size are considered, compared with the method that each camera sends the video data to the cloud processing, the method can save the bandwidth and avoid the problem of processing delay caused by overlarge data quantity.
The above description is presented for the embodiment of the present application using multi-party interaction as an example. The single-side implementation procedure is described next. By way of example and not limitation, referring to fig. 4, the scheduling node of the system architecture to which the method is applied is described here; the scheduling node includes a determining module 410, a scheduling module 420, a storage module 430 (e.g., an SQL (Structured Query Language) database), and an information display module 440. For example, referring to fig. 5, the method may include the following steps:
step 501: a data source size for each of the plurality of cameras is determined.
The specific implementation thereof may be seen in step 203 in the embodiment shown in fig. 2 described above.
Step 502: and determining a target mapping relation according to the resource calculation overhead information and the bandwidth overhead information of each edge calculation node in the plurality of edge calculation nodes and the data source size of each camera, wherein the target mapping relation is used for indicating the mapping relation between the edge calculation nodes and the cameras.
A specific implementation thereof may be seen in step 204 in the embodiment shown in fig. 2, supra.
As an example of the present application, steps 501 and 502 may be performed by a determination module.
Step 503: and scheduling the video data of each camera according to the target mapping relation so that each camera sends the respective video data to an edge computing node with the mapping relation with each camera for processing.
A specific implementation thereof may be seen in step 205 in the embodiment shown in fig. 2, supra.
As an example of the present application, a specific implementation of step 503 may be performed by the scheduling module.
The above steps implement the video data processing method provided by the embodiment of the present application on the scheduling node side. As an example of the application, the scheduling node further performs the following steps:
Step 504: and receiving a configuration set query request sent by a first edge computing node, wherein the first edge computing node is any one of a plurality of edge computing nodes, and the configuration set query request carries target camera information.
As described above, the first edge computing node may request the scheduling node to query the target configuration set during the process of processing the video data, and for this purpose, the first edge computing node may send a configuration set query request to the scheduling node, where the configuration set query request carries the target camera information of the target camera to be queried.
Step 505: and inquiring the camera set to which the target camera information belongs from at least one camera set to obtain a first camera set.
Wherein at least one camera set may be obtained in advance by grouping a plurality of cameras. In one example, a specific implementation of grouping the plurality of cameras may be referred to above in steps 201 to 202 of the embodiment shown in fig. 2.
After the grouping is finished, the scheduling node may store the at least one camera set obtained by the grouping in the storage module. In this case, the scheduling node may query, from the storage module, the first camera set to which the target camera information belongs.
Step 506: and acquiring a target configuration set stored corresponding to the first camera set, wherein the target configuration set comprises configuration parameters required by video data processing.
As an example of the present application, the storage module may store a configuration set corresponding to each of some or all camera sets, for example, the configuration set corresponding to each camera set may be stored by the corresponding edge computing node after being determined for the first time and then sent to the scheduling node. Therefore, after determining the first camera set corresponding to the target camera information, the scheduling node can query the target configuration set corresponding to the first camera set from the storage module.
Step 507: and sending a configuration set query response to the first edge computing node, wherein the configuration set query response carries the target configuration set.
In one example, a specific implementation of steps 504 through 507 may be performed by the determination module.
In addition, the scheduling node can display the key data through the information display module so as to be convenient for users to view and configure. The key data may include configured camera conditions, overhead information sent by the respective edge computing nodes, including actual resource computing overhead of the edge computing nodes processing the video data, and actual bandwidth overhead of receiving the video data.
In the embodiment of the application, the data source size of each camera in the plurality of cameras is determined, and the target mapping relation between the cameras and the edge computing nodes is determined according to the data source size and the resource computing overhead information and the bandwidth overhead information of each edge computing node in the plurality of edge computing nodes. And then distributing the video data of the camera to the corresponding edge computing nodes for processing according to the target mapping relation. Therefore, when the video data is scheduled, as the resource calculation cost, the bandwidth cost and the data source size are considered, compared with the method that each camera sends the video data to the cloud processing, the method can save the bandwidth and avoid the problem of processing delay caused by overlarge data quantity.
The above description is presented for the scheduling node side implementation process in the system architecture of the present application. Next, the first edge computing node side implementation procedure is described. By way of example and not limitation, referring to fig. 6, the first edge computing node includes a receiving module 610 and a precision detection module 620. For example, referring to fig. 7, the method may include the following steps:
Step 701: and receiving video data sent by the target camera.
The implementation may be seen in step 207 in the embodiment shown in fig. 2 described above.
Step 702: and determining a camera set to which the target camera belongs according to the target camera information of the target camera, so as to obtain a first camera set.
A specific implementation thereof may be referred to as step 208 in the embodiment shown in fig. 2 and described above.
Step 703: and judging whether video data corresponding to the camera information included in the first camera set is processed or not.
The implementation may be seen in step 209 in the embodiment shown in fig. 2 and described above.
Step 704: and acquiring a target configuration set according to the judging result, wherein the target configuration set comprises configuration parameters required by video data processing.
The specific implementation thereof may be referred to as step 210 in the embodiment shown in fig. 2.
Step 705: the video data is processed based on the target configuration set.
The specific implementation thereof may be seen from step 211 in the embodiment shown in fig. 2 described above.
The above steps may be implemented by a receiving module, for example.
The accuracy detection module may be used for performing accuracy detection on the target recognition result, and may be specifically referred to above.
In the embodiment of the application, the data source size of each camera in the plurality of cameras is determined, and the target mapping relation between the cameras and the edge computing nodes is determined according to the data source size and the resource computing overhead information and the bandwidth overhead information of each edge computing node in the plurality of edge computing nodes. And then distributing the video data of the camera to the corresponding edge computing nodes for processing according to the target mapping relation. Therefore, when the video data is scheduled, as the resource calculation cost, the bandwidth cost and the data source size are considered, compared with the method that each camera sends the video data to the cloud processing, the method can save the bandwidth and avoid the problem of processing delay caused by overlarge data quantity.
All the above optional technical solutions may be combined in any manner to form optional embodiments of the present application, which will not be described in detail herein.
In the embodiments of the present application, it should be understood that the serial numbers of the steps in the above embodiments do not mean the execution sequence, and the execution sequence of each process should be determined by the functions and the internal logic of each process, and should not limit the implementation process of the embodiments of the present application in any way.
Fig. 8 is a schematic structural diagram of a video data processing apparatus according to an exemplary embodiment, which may be implemented by software, hardware, or a combination of both, for example, as part or all of a scheduling node. The apparatus may include:
a determining module 810, configured to determine a data source size of each of a plurality of cameras, and determine a target mapping relationship according to resource computation overhead information and bandwidth overhead information of each of a plurality of edge computing nodes and the data source size of each camera, where the target mapping relationship is used for indicating the mapping relationship between edge computing nodes and cameras;
and the scheduling module 820 is used for scheduling the video data of each camera according to the target mapping relationship, so that each camera sends the respective video data to an edge computing node with the mapping relationship with each camera for processing.
As one example of the present application, the resource computation overhead information includes a resource calculation single frame cost and a target conversion rate, where the resource calculation single frame cost is the computation cost required for a graphics processing unit (GPU) to process a single video frame and the target conversion rate is the conversion rate for eliminating redundant video frames; the bandwidth overhead information includes a bandwidth cost and a single frame traffic, where the single frame traffic refers to the traffic required to transmit a single video frame;
the determining module 810 is configured to:
According to the resource calculation single-frame cost, the target conversion rate, the bandwidth cost and the single-frame flow of each edge calculation node, determining the single-frame total cost corresponding to each edge calculation node;
and determining the target mapping relation according to the single frame total cost corresponding to each edge computing node and the data source size of each camera.
As an example of the present application, the scheduling module 820 is configured to:
And transmitting the address information of each edge computing node to a camera with a mapping relation with each edge computing node based on the target mapping relation, so that each camera transmits the respective video data to the corresponding edge computing node based on the received address information.
As an example of the present application, the scheduling node stores therein at least one camera set obtained by grouping the plurality of cameras;
The determining module 810 is further configured to:
Receiving a configuration set query request sent by a first edge computing node, wherein the first edge computing node is any one of the edge computing nodes, and the configuration set query request carries target camera information;
Inquiring a camera set to which the target camera information belongs from the at least one camera set to obtain a first camera set;
acquiring a target configuration set stored corresponding to the first camera set, wherein the target configuration set comprises configuration parameters required by video data processing;
and sending a configuration set query response to the first edge computing node, wherein the configuration set query response carries the target configuration set.
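A minimal sketch of how the scheduling node could answer such a configuration-set query is given below; the in-memory tables, the request/response dictionaries, and the parameter names are assumptions for illustration.

```python
# Assumed scheduling-node state: camera sets and their stored config sets.
camera_sets = {
    "set-1": {"cam-a", "cam-b"},
}
config_sets = {
    "set-1": {"frame_skip": 2, "detector": "assumed-model", "min_confidence": 0.4},
}

def handle_config_query(request: dict) -> dict:
    """Look up the camera set the target camera belongs to and return
    the configuration set stored in correspondence with that set."""
    target_camera = request["target_camera_info"]
    for set_id, members in camera_sets.items():
        if target_camera in members:
            return {"status": "ok", "config_set": config_sets[set_id]}
    return {"status": "not_found"}
```

The edge node would then apply the returned parameters directly instead of re-deriving them from the video stream.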
As an example of the present application, the determining module 810 is further configured to:
receiving camera information, camera position information and target video data sent by each camera to obtain a plurality of camera information, a plurality of camera position information and a plurality of target video data, wherein the size of each target video data is a specified threshold;
randomly selecting one piece of camera information from the plurality of pieces of camera information;
Determining camera information to be grouped, wherein the camera information to be grouped refers to camera information which is not grouped except for currently selected camera information in the plurality of camera information;
determining, from the camera information to be grouped and according to its corresponding camera position information, all camera information whose corresponding camera position is less than a distance threshold away from the camera position corresponding to the currently selected camera information;
Acquiring currently selected camera information and target video data corresponding to each piece of camera information in the determined camera information;
Determining target video data with the same target from the target video data corresponding to each piece of acquired camera information;
Determining camera information corresponding to target video data with the same target as a group to obtain a camera set;
And if the number of the ungrouped camera information is greater than or equal to the number threshold, reselecting one piece of camera information from the ungrouped camera information, and returning to the step of determining the camera information to be grouped until the number of the ungrouped camera information is smaller than the number threshold.
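The grouping steps above can be sketched as a small routine. The camera record layout, the Euclidean distance test, and the "same target" check via shared target identifiers are simplified assumptions for illustration.

```python
import math
import random

def group_cameras(cameras: dict, dist_threshold: float = 100.0,
                  count_threshold: int = 1) -> list:
    """cameras: camera id -> {"pos": (x, y), "targets": set of target ids}.
    Returns a list of camera-id groups."""
    ungrouped = set(cameras)
    groups = []
    while len(ungrouped) >= count_threshold and ungrouped:
        # Randomly select one piece of camera information.
        selected = random.choice(sorted(ungrouped))
        ungrouped.discard(selected)
        pos = cameras[selected]["pos"]
        # Cameras within the distance threshold of the selected one.
        nearby = [c for c in ungrouped
                  if math.dist(cameras[c]["pos"], pos) < dist_threshold]
        # Keep only those whose video data shows at least one common target.
        shared = [c for c in nearby
                  if cameras[c]["targets"] & cameras[selected]["targets"]]
        ungrouped.difference_update(shared)
        groups.append([selected] + shared)
    return groups
```

With `count_threshold=1` every camera ends up in some group; a larger threshold stops the loop once too few ungrouped cameras remain, as the text describes.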
In the embodiments of the present application, the data source size of each of the plurality of cameras is determined, and the target mapping relationship between the cameras and the edge computing nodes is determined according to the data source sizes and the resource computing overhead information and bandwidth overhead information of each of the plurality of edge computing nodes. The video data of each camera is then distributed, according to the target mapping relationship, to the corresponding edge computing node for processing. Because the resource computing cost, the bandwidth cost, and the data source size are all taken into account when scheduling the video data, this approach saves bandwidth and avoids the processing delay caused by an excessive data volume, compared with having every camera send its video data to the cloud for processing.
Fig. 9 is a schematic structural diagram of a video data processing apparatus according to an exemplary embodiment. The apparatus may be implemented, by software, hardware, or a combination of both, as part or all of the first edge computing node. The apparatus may include:
The receiving module 910 is configured to receive video data sent by a target camera, where the video data is sent by the target camera according to the scheduling result after a scheduling node schedules a plurality of cameras based on a target mapping relationship. The target mapping relationship indicates the mapping between edge computing nodes and cameras, and is determined based on the resource computing overhead information and bandwidth overhead information of each of a plurality of edge computing nodes and the data source size of each of the plurality of cameras.
As an example of the present application, the receiving module 910 is further configured to:
Determining a camera set to which the target camera belongs according to target camera information of the target camera to obtain a first camera set;
if no video data corresponding to the camera information included in the first camera set has been processed, determining a target configuration set according to the first specified number of video frames of the video data sent by the target camera, where the target configuration set includes configuration parameters required for video data processing;
And processing the video data sent by the target camera according to the target configuration set.
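The cold-start branch above — deriving a configuration set from the first N frames when no camera in the set has been processed yet — might look as follows. The frame representation and the brightness-based tuning heuristic are assumptions for illustration only.

```python
def derive_config_set(first_frames: list, n: int = 30) -> dict:
    """Estimate processing parameters from the first n video frames.
    Each frame is assumed to be a dict carrying a 'brightness' value in [0, 1]."""
    sample = first_frames[:n]
    avg_brightness = sum(f["brightness"] for f in sample) / len(sample)
    return {
        "frames_sampled": len(sample),
        # Assumed heuristic: use a lower detection threshold for dark scenes.
        "detection_threshold": 0.3 if avg_brightness < 0.4 else 0.5,
    }
```

Once derived, the configuration set would be reported back to the scheduling node so that later queries from other edge nodes in the same camera set can reuse it.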
As an example of the present application, the receiving module 910 is further configured to:
if the video data corresponding to the camera information included in the first camera set is processed, sending a configuration set query request to the scheduling node, wherein the configuration set query request carries the target camera information;
And receiving a configuration set query response sent by the scheduling node, wherein the configuration set query response carries the target configuration set.
In the embodiments of the present application, the data source size of each of the plurality of cameras is determined, and the target mapping relationship between the cameras and the edge computing nodes is determined according to the data source sizes and the resource computing overhead information and bandwidth overhead information of each of the plurality of edge computing nodes. The video data of each camera is then distributed, according to the target mapping relationship, to the corresponding edge computing node for processing. Because the resource computing cost, the bandwidth cost, and the data source size are all taken into account when scheduling the video data, this approach saves bandwidth and avoids the processing delay caused by an excessive data volume, compared with having every camera send its video data to the cloud for processing.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in Fig. 10, the electronic device 100 of this embodiment includes: at least one processor 1001 (only one is shown in Fig. 10), a memory 1002, and a computer program 1003 stored in the memory 1002 and executable on the at least one processor 1001; the processor 1001 implements the steps in any of the method embodiments described above when executing the computer program 1003.
The electronic device 100 may be a computing device such as a desktop computer, a notebook computer, a palm computer, a cloud server, etc. The electronic device may include, but is not limited to, a processor 1001, a memory 1002. It will be appreciated by those skilled in the art that fig. 10 is merely an example of the electronic device 100 and is not meant to be limiting of the electronic device 100, and may include more or fewer components than shown, or may combine certain components, or different components, such as may also include input-output devices, network access devices, etc.
The processor 1001 may be a CPU (Central Processing Unit); the processor 1001 may also be another general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application-Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 1002 may in some embodiments be an internal storage unit of the electronic device 100, such as a hard disk or a memory of the electronic device 100. In other embodiments, the memory 1002 may also be an external storage device of the electronic device 100, for example, a plug-in hard disk, an SMC (Smart Media Card), an SD (Secure Digital) card, or a flash card provided on the electronic device 100. Further, the memory 1002 may include both an internal storage unit and an external storage device of the electronic device 100. The memory 1002 is used to store an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 1002 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (9)

1. A method of processing video data for use in a scheduling node, the method comprising:
determining a data source size of each of the plurality of cameras;
Determining a target mapping relation according to resource calculation overhead information and bandwidth overhead information of each edge calculation node in a plurality of edge calculation nodes and the data source size of each camera, wherein the target mapping relation is used for indicating the mapping relation between the edge calculation nodes and the cameras, the resource calculation overhead information comprises resource calculation single-frame cost and target conversion rate, the resource calculation single-frame cost is the calculation cost required by an image processor for processing a single video frame, the target conversion rate is the conversion rate for eliminating redundant video frames, the bandwidth overhead information comprises bandwidth cost and single-frame flow, and the single-frame flow is the flow required for transmitting the single video frame;
scheduling the video data of each camera according to the target mapping relation, so that each camera sends the respective video data to an edge computing node with the mapping relation with each camera for processing;
the determining a target mapping relationship according to the resource computing overhead information and the bandwidth overhead information of each edge computing node in the plurality of edge computing nodes and the data source size of each camera includes:
determining, through formula (1), the single-frame total cost corresponding to each edge computing node according to the resource-computing single-frame cost, the target conversion rate, the bandwidth cost, and the single-frame traffic of each edge computing node; formula (1) is:
t = η·S_A + ρ·S_B (1)
where t represents the single-frame total cost, η represents the single-frame traffic, ρ represents the target conversion rate, S_A represents the bandwidth cost, and S_B represents the resource-computing single-frame cost;
and determining the target mapping relation according to the single frame total cost corresponding to each edge computing node and the data source size of each camera.
2. The method of claim 1, wherein the scheduling the video data of each camera according to the target mapping relationship comprises:
And transmitting the address information of each edge computing node to a camera with a mapping relation with each edge computing node based on the target mapping relation, so that each camera transmits the respective video data to the corresponding edge computing node based on the received address information.
3. The method of claim 1, wherein the scheduling node has stored therein at least one set of cameras obtained by grouping the plurality of cameras; the method further comprises the steps of:
Receiving a configuration set query request sent by a first edge computing node, wherein the first edge computing node is any one of the edge computing nodes, and the configuration set query request carries target camera information;
Inquiring a camera set to which the target camera information belongs from the at least one camera set to obtain a first camera set;
acquiring a target configuration set stored corresponding to the first camera set, wherein the target configuration set comprises configuration parameters required by video data processing;
and sending a configuration set query response to the first edge computing node, wherein the configuration set query response carries the target configuration set.
4. A method according to claim 3, characterized in that the method further comprises:
receiving camera information, camera position information and target video data sent by each camera to obtain a plurality of camera information, a plurality of camera position information and a plurality of target video data, wherein the size of each target video data is a specified threshold;
randomly selecting one piece of camera information from the plurality of pieces of camera information;
Determining camera information to be grouped, wherein the camera information to be grouped refers to camera information which is not grouped except for currently selected camera information in the plurality of camera information;
determining, from the camera information to be grouped and according to its corresponding camera position information, all camera information whose corresponding camera position is less than a distance threshold away from the camera position corresponding to the currently selected camera information;
Acquiring currently selected camera information and target video data corresponding to each piece of camera information in the determined camera information;
Determining target video data with the same target from the target video data corresponding to each piece of acquired camera information;
Determining camera information corresponding to target video data with the same target as a group to obtain a camera set;
And if the number of the ungrouped camera information is greater than or equal to the number threshold, reselecting one piece of camera information from the ungrouped camera information, and returning to the step of determining the camera information to be grouped until the number of the ungrouped camera information is smaller than the number threshold.
5. A method of processing video data for application to a first edge computing node, the method comprising:
Receiving video data sent by a target camera, wherein the video data is sent by the target camera according to a scheduling condition after a plurality of cameras are scheduled by a scheduling node based on a target mapping relationship, the target mapping relationship is used for indicating the mapping relationship between an edge computing node and the cameras, and the target mapping relationship is determined based on resource computing overhead information and bandwidth overhead information of each edge computing node in the plurality of edge computing nodes and the data source size of each camera in the plurality of cameras;
The resource calculation overhead information comprises resource calculation single-frame cost and target conversion rate, wherein the resource calculation single-frame cost refers to calculation cost required by an image processor for processing a single video frame, the target conversion rate refers to conversion rate for eliminating redundant video frames, the bandwidth overhead information comprises bandwidth cost and single-frame flow, and the single-frame flow refers to flow required by transmitting the single video frame;
The target mapping relationship is determined according to the single-frame total cost corresponding to each edge computing node and the data source size of each camera; the single-frame total cost corresponding to each edge computing node is determined, through formula (1), according to the resource-computing single-frame cost, the target conversion rate, the bandwidth cost, and the single-frame traffic of that edge computing node; formula (1) is:
t = η·S_A + ρ·S_B (1)
where t represents the single-frame total cost, η represents the single-frame traffic, ρ represents the target conversion rate, S_A represents the bandwidth cost, and S_B represents the resource-computing single-frame cost.
6. The method of claim 5, wherein the method further comprises:
Determining a camera set to which the target camera belongs according to target camera information of the target camera to obtain a first camera set;
If the video data corresponding to the camera information included in the first camera set is not processed, determining a target configuration set according to a front designated number of video frames of the video data sent by the target camera, wherein the target configuration set comprises configuration parameters required by video data processing;
And processing the video data sent by the target camera according to the target configuration set.
7. The method of claim 6, wherein determining, according to the target camera information of the target camera, a camera set to which the target camera belongs, after obtaining the first camera set, further comprises:
if the video data corresponding to the camera information included in the first camera set is processed, sending a configuration set query request to the scheduling node, wherein the configuration set query request carries the target camera information;
And receiving a configuration set query response sent by the scheduling node, wherein the configuration set query response carries the target configuration set.
8. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 4 or the method according to any of claims 5 to 7 when executing the computer program.
9. A computer readable storage medium having instructions stored thereon, which when executed by a processor, implement the method of any one of claims 1 to 4 or the steps of the method of any one of claims 5 to 7.
CN202111238322.6A 2021-10-22 2021-10-22 Video data processing method, electronic device and storage medium Active CN114071078B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111238322.6A CN114071078B (en) 2021-10-22 2021-10-22 Video data processing method, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111238322.6A CN114071078B (en) 2021-10-22 2021-10-22 Video data processing method, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN114071078A CN114071078A (en) 2022-02-18
CN114071078B true CN114071078B (en) 2024-09-10

Family

ID=80235321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111238322.6A Active CN114071078B (en) 2021-10-22 2021-10-22 Video data processing method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN114071078B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119723467A (en) * 2025-02-27 2025-03-28 长园共创安全科技(珠海)有限公司 Video surveillance method, device, equipment and storage medium for hazardous chemicals warehouse in industrial park

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509276A (en) * 2018-03-30 2018-09-07 南京工业大学 Video task dynamic migration method in edge computing environment
CN110928691A (en) * 2019-12-26 2020-03-27 广东工业大学 A device-edge collaborative computing offloading method for traffic data

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111800283B (en) * 2019-04-08 2023-03-14 阿里巴巴集团控股有限公司 Network system, service providing and resource scheduling method, device and storage medium
CN116600017A (en) * 2020-12-02 2023-08-15 武汉联影医疗科技有限公司 Resource scheduling method, system, device, computer equipment and storage medium


Also Published As

Publication number Publication date
CN114071078A (en) 2022-02-18

Similar Documents

Publication Publication Date Title
CN105934928B (en) The dispatching method of user's request, device and system in distributed resource system
CN111163018B (en) Network equipment and method for reducing transmission delay thereof
WO2023093053A1 (en) Inference implementation method, network, electronic device, and storage medium
CN111212264B (en) Image processing method, device and storage medium based on edge computing
CN112000454A (en) Multimedia data processing method and equipment
CN112817753B (en) Task processing method and device, storage medium, and electronic device
CN111538572B (en) Task processing method, device, scheduling server and medium
CN110865877B (en) Task request response method and device
WO2023185825A1 (en) Scheduling method, first computing node, second computing node, and scheduling system
CN114071078B (en) Video data processing method, electronic device and storage medium
CN117234681A (en) Data processing method, apparatus, device, storage medium, and program product
CN109885384B (en) Task parallelism optimization method and device, computer equipment and storage medium
CN112000446A (en) Data transmission method and robot
CN114301907B (en) Service processing method, system and device in cloud computing network and electronic equipment
CN113242149B (en) Long connection configuration method, apparatus, device, storage medium, and program product
CN119690694B (en) An asynchronous-to-synchronous conversion method, apparatus, device, storage medium, and product.
US20220357991A1 (en) Information processing apparatus, computer-readable recording medium storing aggregation control program, and aggregation control method
CN118819748A (en) A task scheduling method, scheduling management system and multi-core processor
CN118708327A (en) A method and corresponding device for data processing
CN115562820A (en) Distributed task processing and asynchronous model training system, method, device and medium
CN115334001A (en) Data resource scheduling method and device based on priority relation
CN110895464B (en) Application deployment method, device and system
CN118827481B (en) A method, device, and storage medium for calculating the online status of a device.
CN116303110B (en) Memory garbage recycling method and electronic equipment
CN112905351B (en) GPU and CPU load scheduling method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 801, building 2, Shenzhen new generation industrial park, 136 Zhongkang Road, Meidu community, Meilin street, Futian District, Shenzhen, Guangdong 518000

Applicant after: China Resources Digital Technology Co.,Ltd.

Applicant after: China Resources Shenzhen Bay Development Co.,Ltd. science and technology research branch

Applicant after: RESEARCH INSTITUTE OF TSINGHUA University IN SHENZHEN

Address before: Room 801, building 2, Shenzhen new generation industrial park, 136 Zhongkang Road, Meidu community, Meilin street, Futian District, Shenzhen, Guangdong 518000

Applicant before: Runlian software system (Shenzhen) Co.,Ltd.

Applicant before: China Resources Shenzhen Bay Development Co.,Ltd. science and technology research branch

Applicant before: RESEARCH INSTITUTE OF TSINGHUA University IN SHENZHEN

TA01 Transfer of patent application right

Effective date of registration: 20231214

Address after: Room 801, building 2, Shenzhen new generation industrial park, 136 Zhongkang Road, Meidu community, Meilin street, Futian District, Shenzhen, Guangdong 518000

Applicant after: China Resources Digital Technology Co.,Ltd.

Applicant after: RESEARCH INSTITUTE OF TSINGHUA University IN SHENZHEN

Address before: Room 801, building 2, Shenzhen new generation industrial park, 136 Zhongkang Road, Meidu community, Meilin street, Futian District, Shenzhen, Guangdong 518000

Applicant before: China Resources Digital Technology Co.,Ltd.

Applicant before: China Resources Shenzhen Bay Development Co.,Ltd. science and technology research branch

Applicant before: RESEARCH INSTITUTE OF TSINGHUA University IN SHENZHEN

GR01 Patent grant