
CN117523458A - Low-altitude unmanned aerial vehicle supervision system and method thereof

Low-altitude unmanned aerial vehicle supervision system and method thereof

Info

Publication number
CN117523458A
CN117523458A (application CN202311624615.7A)
Authority
CN
China
Prior art keywords
feature
branch
perception
unmanned aerial
aerial vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202311624615.7A
Other languages
Chinese (zh)
Inventor
黄煜栋
陈彦佐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Polytechnic
Original Assignee
Hangzhou Polytechnic
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Polytechnic filed Critical Hangzhou Polytechnic
Priority: CN202311624615.7A
Publication: CN117523458A
Legal status: Withdrawn

Classifications

    • G06V20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06N3/045 - Combinations of networks
    • G06N3/0464 - Convolutional networks [CNN, ConvNet]
    • G06N3/08 - Learning methods
    • G06V10/454 - Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V10/764 - Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V10/806 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of extracted features
    • G06V10/82 - Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present application relates to the field of intelligent supervision, and specifically discloses a low-altitude unmanned aerial vehicle supervision system and a method thereof. Using an artificial intelligence technique based on a deep neural network model, the system acquires flight video of a low-altitude unmanned aerial vehicle over a predetermined period of time, extracts key frames, obtains flight state information from multiple angles with a multi-branch perception domain module, calculates differences between adjacent frames, and then extracts and strengthens the key information of the flight state with a residual dual-attention mechanism model to obtain a classification result indicating whether the unmanned aerial vehicle has a fault. In this way, an automated unmanned aerial vehicle supervision system can be realized, reducing manual supervision costs and improving supervision effectiveness and accuracy.

Description

Low-altitude unmanned aerial vehicle supervision system and method thereof
Technical Field
The present application relates to the field of intelligent supervision, and more particularly, to a low-altitude unmanned aerial vehicle supervision system and a method thereof.
Background
A low-altitude unmanned aerial vehicle, also known as a drone or remotely piloted aircraft, is an unmanned aircraft controlled by a radio remote-control device or steered by a pre-programmed flight plan. When a low-altitude unmanned aerial vehicle operates, it may collide with other aircraft, buildings, personnel or vehicles during low-altitude flight, causing casualties and property loss; it may also suffer in-flight faults that lead to loss of control or a crash, potentially injuring people on the ground. The flight activities of unmanned aerial vehicles may also cause airspace confusion and interfere with the normal flight of other aircraft.
However, unmanned aerial vehicle supervision in the prior art generally relies on manual operation or intermittent monitoring and cannot provide real-time monitoring of large-scale unmanned aerial vehicle activity. This may lead to regulatory blind spots in which abnormal behaviour or accidents of the unmanned aerial vehicle cannot be discovered in time. In addition, traditional regulation is often limited to specific areas, such as airport perimeters or designated activity sites, while in other areas, particularly suburban or rural ones, regulatory capability may be inadequate. Furthermore, supervision typically requires manual intervention, including discovering an abnormal situation, raising an alert, and waiting for emergency personnel to arrive on site. This can slow the reaction time and prevent timely action against potential risks.
Thus, an optimized low-altitude drone supervision scheme is desired.
Disclosure of Invention
The present application has been made in order to solve the above technical problems. Embodiments of the present application provide a low-altitude unmanned aerial vehicle supervision system and a method thereof that adopt an artificial intelligence technique based on a deep neural network model: flight video of the low-altitude unmanned aerial vehicle over a predetermined period of time is acquired; after key frames are extracted, flight state information is obtained from multiple angles using a multi-branch perception domain module; and after differences are calculated, the key information of the flight state is extracted and strengthened using a residual dual-attention mechanism model to obtain a classification result indicating whether the unmanned aerial vehicle has a fault. In this way, an automated unmanned aerial vehicle supervision system can be realized, reducing manual supervision costs and improving supervision effectiveness and accuracy.
According to one aspect of the present application, there is provided a low-altitude unmanned aerial vehicle supervision system comprising:
the flight monitoring video acquisition module is used for acquiring the flight video of the low-altitude unmanned aerial vehicle in a preset time period;
the flight key frame extraction module is used for extracting a plurality of flight monitoring key frames from the flight video of the low-altitude unmanned aerial vehicle in the preset time period;
the multi-branch extraction module is used for enabling the plurality of flight monitoring key frames to pass through the multi-branch perception domain module so as to obtain a plurality of flight state feature images;
the difference calculation module is used for calculating the difference between two adjacent time points among the plurality of flight state feature images so as to obtain a plurality of flight state difference feature images;
the residual double-attention feature extraction module is used for enabling the plurality of flight state difference feature images to pass through a residual double-attention mechanism model to obtain a classification feature image;
the optimizing module is used for extracting hidden feature expression of the motion distribution model of the classification feature map relative to the target classification function so as to obtain an optimized classification feature map;
and the unmanned aerial vehicle fault judging module is used for enabling the optimized classification characteristic diagram to pass through a classifier to obtain a classification result, and the classification result is used for indicating whether the unmanned aerial vehicle breaks down or not.
In the low-altitude unmanned aerial vehicle supervision system, the flight key frame extraction module is configured to: and extracting a plurality of flight monitoring key frames from the flight video of the low-altitude unmanned aerial vehicle in the preset time period at a preset sampling frequency.
In the above low-altitude unmanned aerial vehicle supervision system, the multi-branch extraction module includes: a first point convolution unit, used for inputting the flight monitoring key frame into a first point convolution layer of the multi-branch perception domain module to obtain a convolution feature map; a multi-branch perception unit, used for passing the convolution feature map through a first branch perception domain unit, a second branch perception domain unit and a third branch perception domain unit of the multi-branch perception domain module, respectively, to obtain a first branch perception feature map, a second branch perception feature map and a third branch perception feature map, where the first, second and third branch perception domain units are arranged in parallel; a fusion unit, used for cascading the first branch perception feature map, the second branch perception feature map and the third branch perception feature map to obtain a fused perception feature map; a second point convolution unit, used for inputting the fused perception feature map into a second point convolution layer of the multi-branch perception domain module to obtain a channel-corrected fused perception feature map; and a residual cascade unit, used for computing the position-wise sum of the channel-corrected fused perception feature map and the convolution feature map to obtain the flight state feature map.
In the above low-altitude unmanned aerial vehicle supervision system, the residual dual-attention feature extraction module includes: a spatial feature extraction unit, used for passing the flight state difference feature map through a spatial attention module of the residual dual-attention mechanism model to obtain a spatial attention map; a channel feature extraction unit, used for passing the flight state difference feature map through a channel attention module of the residual dual-attention mechanism model to obtain a channel attention map; a fusion unit, used for fusing the spatial attention map and the channel attention map to obtain a weighted feature map; a weighting unit, used for fusing the flight state difference feature map and the weighted feature map to obtain an enhanced flight state difference feature map; and a cascade unit, used for cascading the plurality of enhanced flight state difference feature maps to obtain the classification feature map.
In the above low-altitude unmanned aerial vehicle supervision system, the channel feature extraction unit is configured to: carrying out global averaging on each feature matrix of the flight state difference feature map along the channel dimension to obtain a channel feature vector; the channel characteristic vector is subjected to a Softmax function to obtain a normalized channel characteristic vector; and weighting the feature matrix of the flight state difference feature map along the channel dimension by taking the feature value of each position in the normalized channel feature vector as a weight to obtain a channel attention map.
In the above low-altitude unmanned aerial vehicle supervision system, the unmanned aerial vehicle fault judging module includes: the unfolding unit is used for unfolding the optimized classification characteristic map into classification characteristic vectors; the full-connection coding unit is used for carrying out full-connection coding on the classification characteristic vectors by using a full-connection layer of the classifier so as to obtain coded classification characteristic vectors; and a classification result unit, configured to pass the encoded classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
According to another aspect of the present application, there is provided a low-altitude unmanned aerial vehicle supervision method, comprising:
acquiring a flight video of the low-altitude unmanned aerial vehicle in a preset time period;
extracting a plurality of flight monitoring key frames from the flight video of the low-altitude unmanned aerial vehicle in the preset time period;
the flight monitoring key frames pass through a multi-branch perception domain module to obtain a plurality of flight state feature diagrams;
calculating the difference between two adjacent time points among the plurality of flight state feature images to obtain a plurality of flight state difference feature images;
the plurality of flight state difference feature images are subjected to a residual error double-attention mechanism model to obtain a classification feature image;
extracting hidden feature expression of the motion distribution model of the classification feature map relative to the target classification function to obtain an optimized classification feature map;
and the optimized classification characteristic diagram is passed through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the unmanned aerial vehicle has faults or not.
Compared with the prior art, the low-altitude unmanned aerial vehicle supervision system and method provided by the present application adopt an artificial intelligence technique based on a deep neural network model: the flight video of the low-altitude unmanned aerial vehicle over a predetermined period of time is acquired; after key frames are extracted, flight state information is obtained from multiple angles using a multi-branch perception domain module; and after differences are calculated, the key information of the flight state is extracted and strengthened using a residual dual-attention mechanism model to obtain a classification result indicating whether the unmanned aerial vehicle has a fault. In this way, an automated unmanned aerial vehicle supervision system can be realized, reducing manual supervision costs and improving supervision effectiveness and accuracy.
Drawings
The foregoing and other objects, features and advantages of the present application will become more apparent from the following more particular description of embodiments of the present application, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification, illustrate the application and not constitute a limitation to the application. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 is a block diagram of a low-altitude unmanned aerial vehicle supervision system according to an embodiment of the present application.
Fig. 2 is a schematic architecture diagram of a low-altitude unmanned aerial vehicle supervision system according to an embodiment of the present application.
Fig. 3 is a block diagram of the multi-branch extraction module in a low-altitude unmanned aerial vehicle supervision system according to an embodiment of the present application.
Fig. 4 is a block diagram of a residual dual-attention feature extraction module in a low-altitude unmanned aerial vehicle supervision system according to an embodiment of the present application.
Fig. 5 is a flowchart of a low-altitude unmanned aerial vehicle supervision method according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Exemplary System: fig. 1 is a block diagram of a low-altitude unmanned aerial vehicle supervision system according to an embodiment of the present application. Fig. 2 is a schematic architecture diagram of a low-altitude unmanned aerial vehicle supervision system according to an embodiment of the present application. As shown in fig. 1 and 2, a low-altitude unmanned aerial vehicle supervision system 100 according to an embodiment of the present application includes: a flight monitoring video acquisition module 110, configured to acquire a flight video of the low-altitude unmanned aerial vehicle in a predetermined period of time; a flight key frame extraction module 120, configured to extract a plurality of flight monitoring key frames from a flight video of the low-altitude unmanned aerial vehicle in the predetermined period of time; a multi-branch extraction module 130, configured to pass the plurality of flight monitoring key frames through a multi-branch perception domain module to obtain a plurality of flight status feature graphs; the difference calculating module 140 is configured to calculate differences between two adjacent time points between the plurality of flight status feature maps to obtain a plurality of flight status difference feature maps; the residual dual-attention feature extraction module 150 is configured to pass the plurality of flight state difference feature maps through a residual dual-attention mechanism model to obtain a classification feature map; an optimization module 160, configured to extract a hidden feature expression of the motion distribution model of the classification feature map relative to the objective classification function to obtain an optimized classification feature map; and the unmanned aerial vehicle fault judging module 170 is configured to pass the optimized classification feature map through a classifier to obtain a classification result, where the classification result is used to indicate whether the unmanned aerial vehicle has a fault.
The low-altitude unmanned aerial vehicle refers to an unmanned aerial vehicle capable of flying in a lower altitude range. Typically, the flying height of such unmanned aerial vehicles is between the ground and several hundred meters. Low-altitude unmanned aerial vehicles are commonly used in a variety of applications including, but not limited to, agriculture, environmental monitoring, geological exploration, security monitoring, power line inspection, and the like. They may carry various sensors and devices for collecting data, taking images, or performing other tasks. Low-altitude unmanned aerial vehicles are widely used in many industries due to their flexibility and relatively low operating costs.
When the unmanned aerial vehicle flies at low altitude, it may collide with other aircraft (such as helicopters or light aircraft) or with other unmanned aerial vehicles, leading to collisions or dangerous situations. Moreover, because of the various obstacles present in low-altitude environments, such as buildings, wires and trees, the unmanned aerial vehicle needs to avoid these obstacles during flight to prevent collisions or other accidents. Weather factors such as wind and air flow may also be encountered and, for small unmanned aerial vehicles, can affect stability and control performance.
Therefore, monitoring the unmanned aerial vehicle can ensure the safety of the low-altitude airspace, preventing collisions with other aircraft and reducing the risk of air accidents; it can ensure that the unmanned aerial vehicle obeys privacy regulations during flight and data collection, protecting the privacy rights of individuals and institutions; it can ensure that the data collected by the unmanned aerial vehicle is properly protected and used within a compliance framework; and it can ensure that the environmental impact of the flight is controlled, avoiding adverse effects on the natural ecological environment.
Based on this, in the technical scheme of the present application, the flight video of the low-altitude unmanned aerial vehicle over a predetermined period of time is collected and analysed for features in order to determine whether the unmanned aerial vehicle has a fault during flight, thereby realizing an automated unmanned aerial vehicle supervision system, reducing manual supervision costs, and improving supervision effectiveness and accuracy.
In this embodiment of the present application, the flight monitoring video acquisition module 110 is configured to acquire the flight video of the low-altitude unmanned aerial vehicle over a predetermined period of time. It is considered that a supervision department can review the flight activities of the unmanned aerial vehicle within a specified time period to learn information such as its flight path, altitude and flight behaviour. In addition, the regulatory authorities can identify violations by the drone, such as out-of-range flight, flying at a prohibited altitude or speed, or carrying contraband. The unmanned aerial vehicle monitoring video can also help the supervision department detect whether the unmanned aerial vehicle has a fault or abnormal condition, such as an abnormal flight attitude, low battery power or a sensor fault. Therefore, by acquiring the flight monitoring video of the unmanned aerial vehicle, the supervision department can comprehensively understand the flight state and behaviour of the unmanned aerial vehicle, ensure its safe operation, take timely measures against violations or faults, and maintain airspace order and public safety.
In this embodiment of the present application, the flight keyframe extraction module 120 is configured to extract a plurality of flight monitoring keyframes from a flight video of the low-altitude unmanned aerial vehicle in the predetermined period of time. It is contemplated that by extracting key frames, a summary or preview of the video may be generated to quickly learn about the flight activities and important moments of the drone without having to view the entire video. In particular, the keyframes may help identify and capture key events during the unmanned aerial vehicle flight, such as take-off, landing, turning, hovering, obstacle avoidance, and the like. These key frames can be used for subsequent event analysis and processing. In addition, the extraction of the key frames can help to detect abnormal conditions in the flight process of the unmanned aerial vehicle, such as suddenly changing the flight track, abnormal gesture, abnormal speed and the like. These key frames can be used as inputs to an anomaly detection algorithm to further analyze and determine if a fault or risk exists. Therefore, by extracting the flight monitoring key frames, the unmanned aerial vehicle flight activities can be processed and analyzed more efficiently, important information is extracted, and valuable data is provided for subsequent supervision, analysis and decision.
Specifically, in the embodiment of the present application, the flight key frame extraction module is configured to: and extracting a plurality of flight monitoring key frames from the flight video of the low-altitude unmanned aerial vehicle in the preset time period at a preset sampling frequency.
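As an illustration, a minimal sketch of such fixed-frequency key-frame extraction follows; it assumes OpenCV for video decoding, and the sampling interval and the helper name are illustrative rather than taken from the patent.

```python
# A minimal sketch of fixed-frequency key-frame extraction, assuming OpenCV
# is available; the sampling interval and video path are illustrative.
import cv2

def extract_key_frames(video_path: str, sample_every_n: int = 30) -> list:
    """Return every n-th frame of the flight video as a monitoring key frame."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % sample_every_n == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```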
In this embodiment of the present application, the multi-branch extraction module 130 is configured to pass the plurality of flight monitoring key frames through the multi-branch perception domain module to obtain a plurality of flight status feature maps. The multi-branch perception domain module can analyze the flight monitoring key frames from multiple angles at the same time, so that different perception domains can provide different visual angles and information. For example, one branch may be used to detect the position and attitude of the drone, another branch to detect speed and acceleration, a third branch to detect obstacles to the surrounding environment, and so on. Specifically, through the multi-branch perception domain module, different feature maps can be fused to obtain richer flight state features. Each branch can extract different characteristic representations, such as colors, shapes and the like, and by fusing the characteristic images, more comprehensive and diversified flight state information can be obtained. In addition, the multi-branch perception domain module can be flexibly designed and expanded according to the needs. The method can increase or decrease the perception branches according to specific application scenes and task demands, and adjust the connection modes among the branches so as to adapt to different supervision and analysis demands. Therefore, through the multi-branch perception domain module, rich flight state information can be obtained from multiple angles and feature diagrams, the understanding and analysis capability of unmanned aerial vehicle flight activities are improved, and more comprehensive and accurate flight state features are provided for regulatory departments and related institutions.
It is noted that the Multi-branch perception domain module (Multi-Branch Perception Domain Module) generally refers to a module for processing a plurality of perception domain information in the fields of computer vision and deep learning. Such modules are typically used to process multi-modal input data, such as images, text, sound, etc., as well as to process multiple feature representations. In particular, in deep learning, multi-branch perceptual domain modules are typically used to process complex input data, where each branch exclusively processes information from a different perceptual domain. Each branch may include a different neural network structure to better capture characteristics of a particular perception domain. These branches typically work in parallel and the end result is integrated for subsequent tasks such as classification, regression or other predictive tasks. It should be appreciated that such a module has the advantage of being able to fully exploit the information of the different perception domains, improving the understanding and characterization capabilities of the model on the input data. By integrating information from different perceptual domains, the model can more fully understand the input data, thereby improving the accuracy of prediction or classification. The multi-branch perception domain module has wide application in fields such as multi-mode learning, cross-mode retrieval, video understanding and the like. They provide an efficient way to process complex data from multiple perceptual domains, helping to improve the generalization ability and applicability of the model.
Fig. 3 is a block diagram of the multi-branch extraction module in a low-altitude unmanned aerial vehicle supervision system according to an embodiment of the present application. Specifically, in the embodiment of the present application, as shown in fig. 3, the multi-branch extraction module 130 includes: a first point convolution unit 131, configured to input the flight monitoring key frame into a first point convolution layer of the multi-branch perception domain module to obtain a convolution feature map; a multi-branch perception unit 132, configured to pass the convolution feature map through a first branch perception domain unit, a second branch perception domain unit, and a third branch perception domain unit of the multi-branch perception domain module to obtain a first branch perception feature map, a second branch perception feature map, and a third branch perception feature map, where the first, second and third branch perception domain units have parallel structures; a merging unit 133, configured to concatenate the first branch perception feature map, the second branch perception feature map, and the third branch perception feature map to obtain a fused perception feature map; a second point convolution unit 134, configured to input the fused perception feature map into a second point convolution layer of the multi-branch perception domain module to obtain a channel-corrected fused perception feature map; and a residual cascade unit 135, configured to compute the position-wise sum of the channel-corrected fused perception feature map and the convolution feature map to obtain the flight state feature map.
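As a concrete illustration, the following PyTorch sketch realizes the module structure just described; the branch designs (3x3 convolutions with different dilation rates) and the channel widths are assumptions, since the patent does not fix the internals of the three branch perception domain units.

```python
import torch
import torch.nn as nn

class MultiBranchPerceptionDomain(nn.Module):
    """Sketch of the multi-branch perception domain module described above:
    a first 1x1 (point) convolution, three parallel branch perception units,
    concatenation, a second 1x1 convolution for channel correction, and a
    residual position-wise sum with the convolution feature map."""

    def __init__(self, in_channels: int = 3, channels: int = 64):
        super().__init__()
        self.point_conv1 = nn.Conv2d(in_channels, channels, kernel_size=1)
        # Assumed branch design: 3x3 convolutions with different dilation
        # rates, giving the three branches distinct receptive fields.
        self.branch1 = nn.Conv2d(channels, channels, 3, padding=1, dilation=1)
        self.branch2 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=3, dilation=3)
        self.point_conv2 = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, key_frame: torch.Tensor) -> torch.Tensor:
        conv = self.point_conv1(key_frame)            # convolution feature map
        fused = torch.cat(
            [self.branch1(conv), self.branch2(conv), self.branch3(conv)],
            dim=1,
        )                                             # fused perception feature map
        corrected = self.point_conv2(fused)           # channel-corrected map
        return corrected + conv                       # residual position-wise sum
```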
In this embodiment of the present application, the difference calculating module 140 is configured to calculate a difference between two adjacent time points between the plurality of flight status feature maps to obtain a plurality of flight status difference feature maps. The dynamic change of the unmanned aerial vehicle flight state can be detected by calculating the difference between the characteristic diagrams. The differential map may show changes that occur at adjacent points in time, such as changes in position, changes in speed, changes in attitude, and so forth. This helps to capture the motion trajectories and trends in the unmanned aerial vehicle flight. In addition, the difference map can be used for detecting abnormal conditions in the flight of the unmanned aerial vehicle. By comparing the differential values of adjacent time points, abrupt, abnormal changes can be found. The method has important significance for detecting faults, accidents or illegal behaviors of the unmanned aerial vehicle, and corresponding measures can be taken in time. Furthermore, the differential map may provide information about the movement of the drone. By calculating the difference, characteristics of speed, acceleration, and the like related to the movement can be obtained. This is very useful for understanding the tasks of unmanned aerial vehicle's locomotor behavior, trajectory prediction and path planning. Therefore, by calculating the difference between the flight state feature maps, the change and dynamic information of the unmanned aerial vehicle flight state can be captured, and key features related to movement, change and abnormal conditions are provided.
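The differencing step itself is simple; the sketch below assumes it is a position-wise subtraction between the feature maps of adjacent key frames, which matches the residual-style computation used elsewhere in the scheme.

```python
import torch

def flight_state_differences(feature_maps: list) -> list:
    """Position-wise difference between flight state feature maps of
    adjacent time points, yielding one difference map per adjacent pair."""
    return [feature_maps[t + 1] - feature_maps[t]
            for t in range(len(feature_maps) - 1)]
```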
In this embodiment of the present application, the residual dual-attention feature extraction module 150 is configured to pass the plurality of flight state difference feature maps through a residual dual-attention mechanism model to obtain a classification feature map. A residual dual-attention mechanism can enhance the important information in the difference feature maps: it adaptively learns the importance of different positions in a feature map and pays more attention to key regions, thereby improving the representation of key information. In particular, the context information in the difference feature map can be captured by the residual dual-attention mechanism model, which takes into account the relationships and dependencies between different positions in the feature map to better understand the overall context of the flight state. Therefore, by using the residual dual-attention mechanism model, the key information in the flight state difference feature maps can be extracted and enhanced, improving the accuracy and expressive power of the classification feature map.
Fig. 4 is a block diagram of the residual dual-attention feature extraction module in a low-altitude unmanned aerial vehicle supervision system according to an embodiment of the present application. Specifically, in the embodiment of the present application, as shown in fig. 4, the residual dual-attention feature extraction module 150 includes: a spatial feature extraction unit 151, configured to pass the flight state difference feature map through a spatial attention module of the residual dual-attention mechanism model to obtain a spatial attention map; a channel feature extraction unit 152, configured to pass the flight state difference feature map through a channel attention module of the residual dual-attention mechanism model to obtain a channel attention map; a fusion unit 153, configured to fuse the spatial attention map and the channel attention map to obtain a weighted feature map; a weighting unit 154, configured to fuse the flight state difference feature map and the weighted feature map to obtain an enhanced flight state difference feature map; and a cascade unit 155, configured to cascade the plurality of enhanced flight state difference feature maps to obtain the classification feature map.
More specifically, in an embodiment of the present application, the channel feature extraction unit is configured to: carrying out global averaging on each feature matrix of the flight state difference feature map along the channel dimension to obtain a channel feature vector; the channel characteristic vector is subjected to a Softmax function to obtain a normalized channel characteristic vector; and weighting the feature matrix of the flight state difference feature map along the channel dimension by taking the feature value of each position in the normalized channel feature vector as a weight to obtain a channel attention map.
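A minimal PyTorch sketch of this block is given below. The channel branch follows the description above literally (global average per feature matrix, Softmax normalization, channel-wise re-weighting); the spatial branch is one common realization built from pooled channel statistics, which the patent leaves unspecified, and the residual fusion is taken to be an element-wise sum.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualDualAttention(nn.Module):
    """Sketch of the residual dual-attention block: channel attention per
    the text above, an assumed spatial-attention design, additive fusion of
    the two attention maps, and a residual connection to the input."""

    def __init__(self):
        super().__init__()
        # Assumed spatial branch: 7x7 convolution over pooled statistics.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention: global average per feature matrix -> Softmax.
        channel_vec = x.mean(dim=(2, 3))                 # (B, C)
        channel_weights = F.softmax(channel_vec, dim=1)  # normalized channel vector
        channel_attn = x * channel_weights.view(b, c, 1, 1)
        # Spatial attention from mean/max channel statistics (assumed design).
        pooled = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        spatial_attn = x * torch.sigmoid(self.spatial_conv(pooled))
        # Fuse the two attention maps, then add the input residually.
        weighted = channel_attn + spatial_attn
        return x + weighted                              # enhanced difference map
```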
In this embodiment, the optimizing module 160 is configured to extract a hidden feature expression of the motion distribution model of the classification feature map relative to the objective classification function to obtain an optimized classification feature map.
In particular, in the technical scheme of the present application, the classification feature map is obtained by passing the plurality of flight state difference feature maps through the residual dual-attention mechanism model. However, to better represent whether the unmanned aerial vehicle has a fault, it is necessary to further improve the compatibility of the feature parts of the classification feature map with the desired distribution along the associated dimension within the feature whole, because the correlation between feature parts is very important for correctly classifying the state of the drone. This compatibility refers to the degree of association between feature parts at different locations in the classification feature map and whether that association conforms to the desired distribution. In unmanned aerial vehicle flight monitoring, feature parts at different locations may correspond to different flight conditions, such as individual components or critical areas of the aircraft, so the correlation between them provides more comprehensive and accurate flight state information. There are two reasons for improving this compatibility. First, feature parts at different positions are often associated, for example between different components of the unmanned aerial vehicle; strengthening these associations helps to better capture the relevant information in the flight state, improving classification accuracy. Second, the desired distribution refers to the distribution characteristics the feature parts should exhibit under normal flight conditions; making the classification feature map conform better to this distribution enhances the accuracy of fault detection. For these reasons, in the technical scheme of the present application, the hidden feature expression of the motion distribution model of the classification feature map relative to the target classification function is extracted, which improves the accuracy and robustness of unmanned aerial vehicle fault detection and allows a better judgement of whether the unmanned aerial vehicle has a fault.
Specifically, in the embodiment of the present application, the optimization module is configured to: extract the hidden feature expression of the motion distribution model of the classification feature map relative to the target classification function with the following optimization formula to obtain the optimized classification feature map; wherein the optimization formula is: f'(i,j,k) = f(i,j,k) · log2(Softmax(F)(i,j,k)), where f(i,j,k) denotes the feature value at the (i,j,k)-th position of the classification feature map F, log2(·) denotes the logarithmic function with base 2, Softmax(·) denotes the normalized exponential function, and f'(i,j,k) denotes the feature value at the (i,j,k)-th position of the optimized classification feature map.
That is, in order to improve the classification capability of the classification feature map, in the technical solution of the present application, the hidden feature expression of the motion distribution model of the classification feature map with respect to the objective classification function is used to replace or supplement the original features. In particular, the motion distribution model is a probabilistic model assuming that each point in the feature space is generated by a random variable whose distribution is determined by the gradient of the objective classification function, whereas the implicit feature expression of the motion distribution model refers to the potential motion state of each point in the feature space, which can reflect motion information in the feature map, i.e., the relative change between features.
The steps of extracting the hidden feature expression of the motion distribution model are as follows. First, the gradient value of the target classification function corresponding to the feature value of each position in the classification feature map is calculated and taken as the movement direction. The feature space is then divided into several motion regions according to the magnitude and direction of the gradient values, each representing a different motion mode, and the distribution of gradient values within each region is fitted to obtain the parameters of the motion distribution model. Furthermore, Cauchy normalization or other methods are used within each motion region to eliminate the effects of outliers or noise, making the motion distribution model more stable. Then, for each motion region, the parameters or other features of the motion distribution model are used to represent the motion information as the hidden feature expression of that region. Finally, the hidden feature expressions of all motion regions are concatenated to obtain the hidden feature expression of the motion distribution model of the whole feature map.
In this way, by extracting the hidden feature expression of the motion distribution model of the classification feature map relative to the target classification function, the higher-layer features in the classification feature map can be extracted to improve the consistency and robustness of the feature distribution, adapt to the change of different scales and angles, and keep the invariance and the distinguishability of the features, thereby improving the classification capability.
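Under the formula as reconstructed above, the optimization step reduces to re-weighting each feature value by the base-2 logarithm of its Softmax-normalized value over the map. The sketch below implements exactly that reading and should be treated as an assumption, since the garbled formula in the source admits more than one interpretation.

```python
import torch

def optimize_classification_map(f: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Sketch of the optimization step under the reconstructed formula:
    f'(i,j,k) = f(i,j,k) * log2(Softmax(F)(i,j,k)). Input is assumed to be
    a batched classification feature map of shape (B, C, H, W)."""
    probs = torch.softmax(f.flatten(1), dim=1).view_as(f)  # normalized exponential
    return f * torch.log2(probs + eps)                     # eps guards log2(0)
```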
In this embodiment, the unmanned aerial vehicle fault determination module 170 is configured to pass the optimized classification feature map through a classifier to obtain a classification result, where the classification result is used to indicate whether the unmanned aerial vehicle has a fault. Considering that the classifier can divide the flight state of the unmanned aerial vehicle into two categories of normal or fault according to the characteristics in the optimized classification characteristic diagram and automatically judge whether the unmanned aerial vehicle breaks down, thereby reducing the burden of manual supervision. By comparison with normal flight conditions, the classifier can identify a flight condition that is inconsistent with expectations, which may indicate that the drone is malfunctioning or abnormal. The classifier can capture differences and classify to provide fault detection and judgment of abnormal conditions. In addition, if the classification result indicates that the unmanned aerial vehicle has a fault, relevant personnel can be immediately notified to repair or take other necessary actions. The result of the classifier can provide important references and bases for decisions. This helps to improve regulatory efficiency, reduce risk, and ensure safe flight of the drone.
Specifically, in the embodiment of the present application, the unmanned aerial vehicle fault determination module includes: the unfolding unit is used for unfolding the optimized classification characteristic map into classification characteristic vectors; the full-connection coding unit is used for carrying out full-connection coding on the classification characteristic vectors by using a full-connection layer of the classifier so as to obtain coded classification characteristic vectors; and a classification result unit, configured to pass the encoded classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
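A sketch of this fault-judgement head follows: the optimized classification feature map is unfolded into a classification feature vector, passed through fully-connected encoding, and normalized with Softmax into probabilities for the two classes (normal, fault). The hidden width and the two-class setup are assumptions consistent with the description.

```python
import torch
import torch.nn as nn

class FaultClassifier(nn.Module):
    """Sketch of the classifier: unfold (flatten) the optimized
    classification feature map, fully-connected encoding, then Softmax
    over the two classes. Dimensions are illustrative."""

    def __init__(self, feature_dim: int, hidden_dim: int = 256, num_classes: int = 2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, feature_map: torch.Tensor) -> torch.Tensor:
        vec = feature_map.flatten(start_dim=1)   # classification feature vector
        logits = self.fc(vec)                    # fully-connected encoding
        return torch.softmax(logits, dim=1)      # classification result
```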
In summary, the low-altitude unmanned aerial vehicle supervision system 100 according to the embodiments of the present application has been illustrated. It adopts an artificial intelligence technique based on a deep neural network model: the flight video of the low-altitude unmanned aerial vehicle over a predetermined period of time is acquired; after key frames are extracted, flight state information is obtained from multiple angles using a multi-branch perception domain module; and after differences are calculated, the key information of the flight state is extracted and strengthened using a residual dual-attention mechanism model to obtain a classification result indicating whether the unmanned aerial vehicle has a fault. In this way, an automated unmanned aerial vehicle supervision system can be realized, reducing manual supervision costs and improving supervision effectiveness and accuracy.
An exemplary method is: fig. 5 is a flowchart of a low-altitude unmanned aerial vehicle supervision method according to an embodiment of the present application. As shown in fig. 5, a low-altitude unmanned aerial vehicle supervision method according to an embodiment of the present application includes: s110, acquiring a flight video of the low-altitude unmanned aerial vehicle in a preset time period; s120, extracting a plurality of flight monitoring key frames from the flight video of the low-altitude unmanned aerial vehicle in the preset time period; s130, passing the plurality of flight monitoring key frames through a multi-branch perception domain module to obtain a plurality of flight state feature diagrams; s140, calculating the difference between two adjacent time points among the plurality of flight state feature maps to obtain a plurality of flight state difference feature maps; s150, the plurality of flight state difference feature images pass through a residual double-attention mechanism model to obtain a classification feature image; s160, extracting hidden feature expression of the motion distribution model of the classification feature map relative to the target classification function to obtain an optimized classification feature map; and S170, enabling the optimized classification characteristic diagram to pass through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the unmanned aerial vehicle fails.
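To make the flow of steps S110 to S170 concrete, the fragment below chains the sketches given earlier into one forward pass; the video path, frame size and channel width are illustrative assumptions, not values fixed by the patent.

```python
# Illustrative end-to-end pass over the sketches above (S110-S170).
import cv2
import torch

frames = extract_key_frames("uav_flight.mp4", sample_every_n=30)      # S110-S120
backbone = MultiBranchPerceptionDomain(in_channels=3, channels=64)
attention = ResidualDualAttention()

tensors = [torch.from_numpy(cv2.resize(f, (224, 224)))
           .permute(2, 0, 1).float().unsqueeze(0) / 255.0 for f in frames]
state_maps = [backbone(t) for t in tensors]                           # S130
diff_maps = flight_state_differences(state_maps)                      # S140
classification_map = torch.cat(
    [attention(d) for d in diff_maps], dim=1)                         # S150
optimized = optimize_classification_map(classification_map)           # S160
classifier = FaultClassifier(feature_dim=optimized.flatten(1).shape[1])
probs = classifier(optimized)                                         # S170: P(normal), P(fault)
```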
Here, it will be appreciated by those skilled in the art that the specific operations of the respective steps in the above-described low-altitude unmanned aerial vehicle supervision method have been described in detail in the above description of the low-altitude unmanned aerial vehicle supervision system with reference to fig. 1 to 4, and thus, repetitive descriptions thereof will be omitted.
Exemplary electronic device and storage medium: in the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of units is only a logical function division, and there may be other divisions in actual implementation, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the components shown or discussed may be coupled, directly coupled, or communicatively connected to each other through some interfaces; the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; can be located in one place or distributed to a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present invention may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware related to program instructions, and the foregoing program may be stored in a readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a mobile storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk. Alternatively, the above-described integrated units of the present invention may be stored in a readable storage medium if implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solution of the embodiments of the present invention may be embodied, in essence or in the part contributing to the prior art, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes a removable storage device, a ROM, a RAM, a magnetic or optical disk, or other media capable of storing program code.

Claims (10)

1. A low-altitude unmanned aerial vehicle supervision system, comprising:
the flight monitoring video acquisition module is used for acquiring the flight video of the low-altitude unmanned aerial vehicle in a preset time period;
the flight key frame extraction module is used for extracting a plurality of flight monitoring key frames from the flight video of the low-altitude unmanned aerial vehicle in the preset time period;
the multi-branch extraction module is used for enabling the plurality of flight monitoring key frames to pass through the multi-branch perception domain module so as to obtain a plurality of flight state feature images;
the difference calculation module is used for calculating the difference between two adjacent time points among the plurality of flight state feature images so as to obtain a plurality of flight state difference feature images;
the residual double-attention feature extraction module is used for enabling the plurality of flight state difference feature images to pass through a residual double-attention mechanism model to obtain a classification feature image;
the optimizing module is used for extracting hidden feature expression of the motion distribution model of the classification feature map relative to the target classification function so as to obtain an optimized classification feature map;
and the unmanned aerial vehicle fault judging module is used for enabling the optimized classification characteristic diagram to pass through a classifier to obtain a classification result, and the classification result is used for indicating whether the unmanned aerial vehicle breaks down or not.
2. The low-altitude unmanned aerial vehicle supervision system according to claim 1, wherein the flight keyframe extraction module is configured to:
and extracting a plurality of flight monitoring key frames from the flight video of the low-altitude unmanned aerial vehicle in the preset time period at a preset sampling frequency.
3. The low-altitude unmanned aerial vehicle supervision system according to claim 2, wherein the multi-branch extraction module comprises:
the first point convolution unit is used for inputting the flight monitoring key frame into a first point convolution layer of the multi-branch perception domain module to obtain a convolution characteristic diagram;
the multi-branch perception unit is used for respectively passing the convolution characteristic map through a first branch perception domain unit, a second branch perception domain unit and a third branch perception domain unit of the multi-branch perception domain module to obtain a first branch perception characteristic map, a second branch perception characteristic map and a third branch perception characteristic map, wherein the first branch perception domain unit, the second branch perception domain unit and the third branch perception domain unit are in parallel structures;
the fusion unit is used for cascading the first branch perception feature map, the second branch perception feature map and the third branch perception feature map to obtain a fusion perception feature map;
the second point convolution unit is used for inputting the fusion perception feature image into a second point convolution layer of the multi-branch perception domain module to obtain a channel correction fusion perception feature image;
and the residual cascade unit is used for calculating the channel correction fusion perception feature map and the convolution feature map according to the position points to obtain the flight state feature map.
4. A low-altitude unmanned aerial vehicle supervision system according to claim 3, wherein the residual dual-attention feature extraction module comprises:
the spatial feature extraction unit is used for passing the flight state difference feature map through a spatial attention module of the residual dual-attention mechanism model to obtain a spatial attention map;
the channel feature extraction unit is used for passing the flight state difference feature map through a channel attention module of the residual dual-attention mechanism model to obtain a channel attention map;
a fusion unit for fusing the spatial attention map and the channel attention map to obtain a weighted feature map;
the weighting unit is used for fusing the flight state difference feature map and the weighted feature map to obtain an enhanced flight state difference feature map;
and the cascading unit is used for cascading the plurality of enhanced flight state difference feature graphs to obtain the classification feature graph.
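A hedged PyTorch sketch of the residual dual-attention step for one flight state difference feature map. The claim fixes only the spatial/channel split, the fusion of the two attention maps, and the residual fusion with the input; the 7×7 spatial convolution, the sigmoid gate, and additive fusion below are assumptions:

```python
import torch
import torch.nn as nn

class ResidualDualAttention(nn.Module):
    """Sketch of the residual dual-attention mechanism (layer shapes assumed)."""
    def __init__(self):
        super().__init__()
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Spatial attention: channel-pooled statistics -> conv -> sigmoid gate.
        stats = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)
        spatial_map = torch.sigmoid(self.spatial_conv(stats)) * x
        # Channel attention: global average per channel -> Softmax weights (claim 5).
        weights = torch.softmax(x.mean(dim=(2, 3)), dim=1)
        channel_map = x * weights[:, :, None, None]
        # Fuse the two attention maps, then fuse with the input (residual).
        return x + spatial_map + channel_map
```

The plurality of enhanced difference maps would then be cascaded, e.g. `torch.cat([att(d) for d in diff_maps], dim=1)`, to form the classification feature map per the cascading unit above.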
5. The low-altitude unmanned aerial vehicle supervision system according to claim 4, wherein the channel feature extraction unit is configured to:
carrying out global averaging on each feature matrix of the flight state difference feature map along the channel dimension to obtain a channel feature vector;
passing the channel feature vector through a Softmax function to obtain a normalized channel feature vector;
and weighting each feature matrix of the flight state difference feature map along the channel dimension by taking the feature value of each position in the normalized channel feature vector as a weight to obtain the channel attention map.
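Claim 5 is precise enough to transcribe directly; a small sketch with the batch dimension omitted for clarity (shapes and the demo tensor are illustrative):

```python
import torch

def channel_attention(diff_map: torch.Tensor) -> torch.Tensor:
    # Global average of each feature matrix (channel) -> channel feature vector.
    channel_vector = diff_map.mean(dim=(1, 2))
    # Softmax -> normalized channel feature vector.
    weights = torch.softmax(channel_vector, dim=0)
    # Weight each feature matrix by its normalized value -> channel attention map.
    return diff_map * weights[:, None, None]

x = torch.randn(64, 32, 32)                 # 64 channels of 32x32 feature matrices
assert channel_attention(x).shape == x.shape
```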
6. The low-altitude unmanned aerial vehicle supervision system according to claim 5, wherein the optimization module is configured to:
extracting hidden feature expression of the motion distribution model of the classification feature map relative to the target classification function by using the following optimization formula to obtain an optimized classification feature map;
wherein the optimization formula is:

$$\tilde{f}_{i,j} = f_{i,j} \cdot \log_2\left(\frac{1}{\mathrm{Softmax}(f)_{i,j}}\right)$$

wherein $f_{i,j}$ represents the feature value of the $(i,j)$-th position of the classification feature map, $\log_2(\cdot)$ represents the logarithmic function with base 2, $\mathrm{Softmax}(\cdot)$ represents the normalized exponential function, and $\tilde{f}_{i,j}$ represents the feature value of the $(i,j)$-th position of the optimized classification feature map.
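A sketch of this optimization step; note that the composition $f \cdot \log_2(1/\mathrm{Softmax}(f))$ is reconstructed from the symbol definitions above, not quoted verbatim from the patent, so treat it as an assumption:

```python
import torch

def optimize_feature_map(f: torch.Tensor) -> torch.Tensor:
    # Position-wise Softmax over the whole classification feature map.
    p = torch.softmax(f.flatten(), dim=0).reshape(f.shape)
    # Information-style re-weighting: f * log2(1 / Softmax(f)).
    return f * torch.log2(1.0 / p)
```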
7. The low-altitude unmanned aerial vehicle supervision system according to claim 6, wherein the unmanned aerial vehicle fault determination module comprises:
the unfolding unit is used for unfolding the optimized classification feature map into a classification feature vector;
the full-connection coding unit is used for carrying out full-connection coding on the classification feature vector by using a full-connection layer of the classifier to obtain a coded classification feature vector;
and the classification result unit is used for passing the coded classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
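A hedged sketch of the classifier stage; the hidden width, the ReLU, and the two-class head are assumptions beyond what the claim recites:

```python
import torch
import torch.nn as nn

class FaultClassifier(nn.Module):
    """Sketch: unfold -> full-connection encoding -> Softmax (fault / no fault)."""
    def __init__(self, feat_dim: int, hidden: int = 256):
        super().__init__()
        # feat_dim must equal C*H*W of the optimized classification feature map.
        self.fc = nn.Linear(feat_dim, hidden)   # full-connection encoding
        self.head = nn.Linear(hidden, 2)        # two classes: fault / no fault

    def forward(self, feature_map: torch.Tensor) -> torch.Tensor:
        v = feature_map.flatten(start_dim=1)             # unfold into a vector
        encoded = torch.relu(self.fc(v))                 # coded classification vector
        return torch.softmax(self.head(encoded), dim=1)  # class probabilities
```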
8. A method of low-altitude unmanned aerial vehicle supervision, comprising:
acquiring a flight video of the low-altitude unmanned aerial vehicle in a preset time period;
extracting a plurality of flight monitoring key frames from the flight video of the low-altitude unmanned aerial vehicle in the preset time period;
passing the plurality of flight monitoring key frames through a multi-branch perception domain module to obtain a plurality of flight state feature maps;
calculating the difference between the flight state feature maps of every two adjacent time points to obtain a plurality of flight state difference feature maps;
passing the plurality of flight state difference feature maps through a residual dual-attention mechanism model to obtain a classification feature map;
extracting the hidden feature expression of the motion distribution model of the classification feature map relative to the target classification function to obtain an optimized classification feature map;
and passing the optimized classification feature map through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the unmanned aerial vehicle has a fault.
9. The low-altitude unmanned aerial vehicle supervision method according to claim 8, wherein extracting a plurality of flight monitoring key frames from the flight video of the low-altitude unmanned aerial vehicle in the preset time period comprises:
extracting a plurality of flight monitoring key frames from the flight video of the low-altitude unmanned aerial vehicle in the preset time period at a preset sampling frequency.
10. The low-altitude unmanned aerial vehicle supervision method of claim 9, wherein passing the plurality of flight monitoring key frames through a multi-branch perception domain module to obtain a plurality of flight status feature maps comprises:
inputting the flight monitoring key frame into a first point convolution layer of the multi-branch perception domain module to obtain a convolution feature map;
the convolution characteristic map is respectively passed through a first branch perception domain unit, a second branch perception domain unit and a third branch perception domain unit of the multi-branch perception domain module to obtain a first branch perception characteristic map, a second branch perception characteristic map and a third branch perception characteristic map, wherein the first branch perception domain unit, the second branch perception domain unit and the third branch perception domain unit have parallel structures;
cascading the first branch perception feature map, the second branch perception feature map and the third branch perception feature map to obtain a fusion perception feature map;
inputting the fusion perception feature map into a second point convolution layer of the multi-branch perception domain module to obtain a channel correction fusion perception feature map;
and adding the channel correction fusion perception feature map and the convolution feature map position-wise to obtain the flight state feature map.
CN202311624615.7A 2023-11-30 2023-11-30 Low-altitude unmanned aerial vehicle supervision system and method thereof Withdrawn CN117523458A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311624615.7A CN117523458A (en) 2023-11-30 2023-11-30 Low-altitude unmanned aerial vehicle supervision system and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311624615.7A CN117523458A (en) 2023-11-30 2023-11-30 Low-altitude unmanned aerial vehicle supervision system and method thereof

Publications (1)

Publication Number Publication Date
CN117523458A true CN117523458A (en) 2024-02-06

Family

ID=89743611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311624615.7A Withdrawn CN117523458A (en) 2023-11-30 2023-11-30 Low-altitude unmanned aerial vehicle supervision system and method thereof

Country Status (1)

Country Link
CN (1) CN117523458A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119339585A (en) * 2024-12-12 2025-01-21 四川西物激光技术有限公司 Low-altitude UAV flight safety assurance system based on intelligent meteorological and airspace monitoring

Similar Documents

Publication Publication Date Title
Hosseini et al. Intelligent damage classification and estimation in power distribution poles using unmanned aerial vehicles and convolutional neural networks
KR101995107B1 (en) Method and system for artificial intelligence based video surveillance using deep learning
Alexandrov et al. Analysis of machine learning methods for wildfire security monitoring with an unmanned aerial vehicles
CN109154976B (en) System and method for training object classifier through machine learning
Kaljahi et al. An automatic zone detection system for safe landing of UAVs
US20060053342A1 (en) Unsupervised learning of events in a video sequence
KR20150100141A (en) Apparatus and method for analyzing behavior pattern
CN111488803A (en) Airport target behavior understanding system integrating target detection and target tracking
CN104981818A (en) Systems and methods to classify moving airplanes in airports
WO2023104557A1 (en) Machine-learning for safety rule violation determination
KR20220146670A (en) Traffic anomaly detection methods, devices, devices, storage media and programs
KR102556447B1 (en) A situation judgment system using pattern analysis
Xiao et al. Real-time object detection for substation security early-warning with deep neural network based on YOLO-V5
CN117523458A (en) Low-altitude unmanned aerial vehicle supervision system and method thereof
CN113095160B (en) Power system personnel safety behavior identification method and system based on artificial intelligence and 5G
CN117994700A (en) Intelligent construction site personnel behavior recognition system and method based on AI intelligent recognition
CN111801689A (en) System for real-time object detection and recognition using image and size features
Piciarelli et al. Surveillance-oriented event detection in video streams
Cheng et al. Moving target detection technology based on UAV Vision
CN115083229B (en) Intelligent recognition and warning system of flight training equipment based on AI visual recognition
Kinaneva et al. An artificial intelligence approach to real-time automatic smoke detection by unmanned aerial vehicles and forest observation systems
Khattak et al. Interpretable ensemble imbalance learning strategies for the risk assessment of severe‐low‐level wind shear based on LiDAR and PIREPs
Zaman et al. Deep learning approaches for vehicle and pedestrian detection in adverse weather
EP4350639A1 (en) Safety rule violation detection in a construction or constructed site
CN117590863B (en) Unmanned aerial vehicle cloud edge end cooperative control system of 5G security rescue net allies oneself with

Legal Events

Date Code Title Description
PB01 Publication
WW01 Invention patent application withdrawn after publication
Application publication date: 20240206