
CN115690628A - River and lake supervision method and system based on unmanned aerial vehicle - Google Patents

River and lake supervision method and system based on unmanned aerial vehicle

Info

Publication number
CN115690628A
CN115690628A
Authority
CN
China
Prior art keywords
target
water conservancy
image
sample
detection frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211374088.4A
Other languages
Chinese (zh)
Inventor
安新代
何刘鹏
姜成桢
胡洁
韦蔚
杨仁杰
荆芳
刘金明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yellow River Engineering Consulting Co Ltd
Original Assignee
Yellow River Engineering Consulting Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yellow River Engineering Consulting Co Ltd filed Critical Yellow River Engineering Consulting Co Ltd
Priority to CN202211374088.4A
Publication of CN115690628A
Legal status: Pending

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30Assessment of water resources

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a river and lake supervision method and system based on an unmanned aerial vehicle. The method comprises the following steps: inputting multiple frames of target images shot by the unmanned aerial vehicle into a trained water conservancy target recognition model to obtain water conservancy targets marked by detection frames and their position information in the target images; obtaining the state of each water conservancy target in its detection frame image with a trained water conservancy target tracking network model; counting the reference water conservancy targets according to these states with a line-collision counting method; obtaining the real geographical position of each reference water conservancy target with a spatial positioning model; and judging a reference water conservancy target located within the preset river management range line to be an illegal act. After the water conservancy target recognition model identifies a target, the tracking network model tracks and counts it, and the spatial positioning model computes its real geographic coordinates, which reduces the limitations of unmanned aerial vehicles in river and lake supervision.

Description

River and lake supervision method and system based on unmanned aerial vehicle
Technical Field
The invention relates to the technical field of river and lake supervision, and in particular to an intelligent river and lake supervision method and system based on an unmanned aerial vehicle.
Background
A central requirement of river and lake supervision is that problems be discovered quickly. Most existing approaches rely on manual inspection, which is costly, slow, and limited in coverage; some heavily concealed areas are difficult for personnel to reach and may even endanger inspectors' safety. Fixed-camera video surveillance, another existing method, covers only a small number of fixed key areas and cannot support comprehensive, precise management of river and lake problems. Unmanned aerial vehicles offer good real-time performance, flexibility, mobility, high resolution, and high cost-effectiveness; they are a powerful means of efficient patrol management and protection and can strongly support reservoir supervision. However, the depth and intelligence of their application to reservoir-area management in river and lake supervision remain insufficient: an unmanned aerial vehicle that serves only as an information-acquisition device is of limited use.
Disclosure of Invention
The invention aims to provide a river and lake supervision method and system based on an unmanned aerial vehicle that reduce the limitations of unmanned aerial vehicles in river and lake supervision.
In order to achieve the purpose, the invention provides the following scheme:
an unmanned aerial vehicle-based river and lake supervision method comprises the following steps:
acquiring a multi-frame target image of a target supervision area; the target image is shot by a camera on the unmanned aerial vehicle;
respectively inputting a plurality of frames of target images into a trained water conservancy target recognition model, marking a water conservancy target in the target images by adopting a detection frame, and determining position information of the water conservancy target in the target images; the trained water conservancy target identification model is a model obtained by training by taking a sample image as input and taking a sample water conservancy target marked by a sample detection frame and position information of the sample water conservancy target in the sample image as output;
intercepting the image in each detection frame to obtain a plurality of detection frame images;
inputting the detection frame image into a trained water conservancy target tracking network model to obtain the state of the water conservancy target in the detection frame image; the states include stationary, in tracking, and tracking stopped; the trained water conservancy target tracking network model is a model obtained by training with a sample detection frame image as input and the state of the sample water conservancy target in the sample detection frame image as output;
calculating the number of reference water conservancy targets by adopting a line collision counting method according to the states of the water conservancy targets in all the detection frame images, and acquiring position information of the reference water conservancy targets in the reference images during counting; the reference water conservancy target is a water conservancy target in tracking; the reference image is a target image when the reference water conservancy target passes through a preset counting line;
inputting the position information of the reference water conservancy target in a reference image and POS data corresponding to the reference image into a spatial positioning model to obtain the real geographical position of the reference water conservancy target; the POS data comprises current time, unmanned aerial vehicle spatial position information, unmanned aerial vehicle attitude information and camera attitude information; the spatial positioning model is a model obtained by training by taking the position information of the sample reference water conservancy target in the sample reference image and the sample POS data corresponding to the sample reference image as input and the real geographical position of the sample reference water conservancy target as output;
judging whether the reference water conservancy target is within the preset river management range line according to the real geographical position of the reference water conservancy target and the preset river management range line; if so, judging the reference water conservancy target to be an illegal act; and if not, judging the reference water conservancy target to be a legal act.
Optionally, the method further includes:
and displaying the real geographical position of the reference water conservancy target which is judged to be illegal.
Optionally, before the inputting the multiple frames of target images into the trained water conservancy target recognition model respectively, the method further includes: training the water conservancy target recognition model, wherein the training process is as follows:
acquiring a data set; the data set comprises a plurality of sample images containing sample water conservancy targets and labels corresponding to the sample images; the label is a sample water conservancy target marked by a sample detection frame and position information of the sample water conservancy target in the sample image;
and training a water conservancy target recognition model by adopting the data set to obtain the trained water conservancy target recognition model.
Optionally, the inputting the detection frame image into a trained water conservancy target tracking network model to obtain the state of the water conservancy target in the detection frame image specifically includes:
inputting the detection frame image into a trained water conservancy target tracking network model to obtain predicted position information of the water conservancy target in the detection frame image;
calculating an intersection-over-union (IoU) ratio between the predicted position information and the real position information of the water conservancy target in the next-frame target image;
and determining the state of the water conservancy target in the detection frame image according to the IoU ratio and a preset IoU threshold.
Optionally, the determining the state of the water conservancy target in the detection frame image according to the IoU ratio and the preset IoU threshold specifically includes:
and if the IoU ratio is greater than the preset IoU threshold, the state of the water conservancy target is in tracking; otherwise, the state of the water conservancy target is tracking stopped.
Optionally, before the inputting the position information of the reference water conservancy target in the reference image and the POS data corresponding to the reference image into the spatial positioning model, the method further includes:
and interpolating the POS data by adopting an interpolation algorithm to obtain POS data corresponding to each reference image.
Optionally, the determining, according to the real geographic position of the reference water conservancy target and a preset river management range line, whether the reference water conservancy target is within the preset river management range line specifically includes:
and judging whether the reference water conservancy target is in the preset river channel management range line or not according to the real geographical position of the reference water conservancy target and the preset river channel management range line by adopting an ray method.
Optionally, the trained water conservancy target tracking network model is a DeepSORT model.
The invention also provides a river and lake supervision system based on the unmanned aerial vehicle, which comprises the following components:
the target image acquisition module is used for acquiring multi-frame target images of a target supervision area; the target image is shot by a camera on the unmanned aerial vehicle;
the water conservancy target recognition module is used for respectively inputting a plurality of frames of target images into a trained water conservancy target recognition model, marking a water conservancy target in the target images by adopting a detection frame, and determining position information of the water conservancy target in the target images; the trained water conservancy target identification model is a model obtained by training by taking a sample image as input and taking a sample water conservancy target marked by a sample detection frame and position information of the sample water conservancy target in the sample image as output;
the detection frame image acquisition module is used for intercepting the image in each detection frame to obtain a plurality of detection frame images;
the water conservancy target tracking module is used for inputting the detection frame image into a trained water conservancy target tracking network model to obtain the state of the water conservancy target in the detection frame image; the states include stationary, in tracking, and tracking stopped; the trained water conservancy target tracking network model is a model obtained by training with a sample detection frame image as input and the state of the sample water conservancy target in the sample detection frame image as output;
the counting module is used for calculating the number of reference water conservancy targets by adopting a line collision counting method according to the states of the water conservancy targets in all the detection frame images and acquiring position information of the reference water conservancy targets in the reference images during counting; the reference water conservancy target is a water conservancy target in tracking; the reference image is a target image when the reference water conservancy target passes through a preset counting line;
the real geographical position acquisition module is used for inputting the position information of the reference water conservancy target in the reference image and the POS data corresponding to the reference image into a spatial positioning model to obtain the real geographical position of the reference water conservancy target; the POS data comprises current time, unmanned aerial vehicle spatial position information, unmanned aerial vehicle attitude information and camera attitude information; the spatial positioning model is a model obtained by training by taking the position information of the sample reference water conservancy target in the sample reference image and the sample POS data corresponding to the sample reference image as input and the real geographical position of the sample reference water conservancy target as output;
the judging module is used for judging whether the reference water conservancy target is within the preset river management range line according to the real geographical position of the reference water conservancy target and the preset river management range line; if so, judging the reference water conservancy target to be an illegal act; and if not, judging the reference water conservancy target to be a legal act.
Optionally, the system further comprises a display module;
and the display module is used for displaying the real geographical position of the reference water conservancy target which is judged to be illegal.
According to the specific embodiments provided herein, the invention discloses the following technical effects. The invention provides a river and lake supervision method and system based on an unmanned aerial vehicle, comprising: acquiring multiple frames of target images of a target supervision area, shot by a camera on the unmanned aerial vehicle; inputting the target images into a trained water conservancy target recognition model, marking water conservancy targets with detection frames, and determining their position information in the target images; cropping the image within each detection frame to obtain detection frame images; inputting each detection frame image into a trained water conservancy target tracking network model to obtain the state of the water conservancy target in it (stationary, in tracking, or tracking stopped); counting the reference water conservancy targets with a line-collision counting method according to these states, and recording the position information of each reference water conservancy target in its reference image at counting time; inputting that position information and the POS data corresponding to the reference image (current time, spatial position of the unmanned aerial vehicle, attitude of the unmanned aerial vehicle, and attitude of the camera) into a spatial positioning model to obtain the real geographical position of the reference water conservancy target; and judging whether an illegal act exists according to whether that real geographical position lies within the preset river management range line. After the water conservancy target recognition model identifies a target, the tracking network model tracks and counts it, and the spatial positioning model computes its real geographic coordinates, which reduces the limitations of the unmanned aerial vehicle in river and lake supervision.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flow chart of a river and lake supervision method based on an unmanned aerial vehicle according to embodiment 1 of the present invention;
fig. 2 is a flowchart of a specific implementation of the method for supervising a river or lake based on an unmanned aerial vehicle according to embodiment 1 of the present invention;
fig. 3 is a structural diagram of a water conservancy target identification model provided in embodiment 1 of the present invention;
fig. 4 is a schematic view of simple aquaculture houses in a reservoir area identified by the water conservancy target identification model provided in embodiment 1 of the present invention;
FIG. 5 is a flowchart of the DeepSORT algorithm provided in embodiment 1 of the present invention;
fig. 6 is a schematic diagram of target tracking counting of the unmanned aerial vehicle according to embodiment 1 of the present invention;
fig. 7 is a schematic diagram of a ray method provided in embodiment 1 of the present invention;
FIG. 8 is a schematic view of a ray intersection provided in embodiment 1 of the present invention;
FIG. 9 is a schematic view of several cases of ray exclusion provided in embodiment 1 of the present invention;
fig. 10 is a structural diagram of the intelligent inspection system provided in embodiment 1 of the present invention;
fig. 11 is a large-screen interface of the intelligent inspection system according to embodiment 1 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a river and lake supervision method and system based on an unmanned aerial vehicle, which reduce the limitation of the unmanned aerial vehicle on river and lake supervision.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Example 1
The embodiment provides a river and lake supervision method based on an unmanned aerial vehicle, and referring to fig. 1 and fig. 2, the method comprises the following steps:
s1: acquiring a multi-frame target image of a target supervision area; the target image is shot by a camera on the unmanned aerial vehicle. In this embodiment, a video of a target supervision area shot by a camera on the unmanned aerial vehicle is first acquired, then the video is divided into one frame and one frame of image, and the divided one frame and one frame of image are used as a target image.
S2: respectively inputting a plurality of frames of target images into a trained water conservancy target recognition model, marking a water conservancy target in the target images by adopting a detection frame, and determining position information of the water conservancy target in the target images; the trained water conservancy target recognition model is a model obtained by training by taking a sample image as input and taking a sample water conservancy target marked by a sample detection frame and position information of the sample water conservancy target in the sample image as output.
S3: and intercepting the image in each detection frame to obtain a plurality of detection frame images.
S4: inputting the detection frame image into a trained water conservancy target tracking network model to obtain the state of the water conservancy target in the detection frame image; the states include stationary, in tracking, and tracking stopped; the trained water conservancy target tracking network model is a model obtained by training with a sample detection frame image as input and the state of the sample water conservancy target in the sample detection frame image as output.
S5: calculating the number of reference water conservancy targets by adopting a line collision counting method according to the states of the water conservancy targets in all the detection frame images, and acquiring position information of the reference water conservancy targets in the reference images during counting; the reference water conservancy target is a water conservancy target in tracking; the reference image is a target image when the reference water conservancy target passes through a preset counting line.
S6: inputting the position information of the reference water conservancy target in a reference image and POS data corresponding to the reference image into a spatial positioning model to obtain the real geographical position of the reference water conservancy target; the POS data comprises current time, unmanned aerial vehicle spatial position information, unmanned aerial vehicle attitude information and camera attitude information; the spatial positioning model is obtained by training by taking the position information of the sample reference water conservancy target in the sample reference image and the sample POS data corresponding to the sample reference image as input and the real geographical position of the sample reference water conservancy target as output.
S7: judging whether the reference water conservancy target is within the preset river management range line according to the real geographical position of the reference water conservancy target and the preset river management range line; if so, judging the reference water conservancy target to be an illegal act; if not, judging the reference water conservancy target to be a legal act.
In this embodiment, the method further includes a construction process of the water conservancy target identification model:
the YOLO series algorithm is based on PyTorch framework, is convenient to expand to mobile equipment, and belongs to a lighter-weight network. The YOLOv5 comprises four network structures of YOLOv5s, YOLOv5m, YOLOv5l and YOLOv5x, the network widths and the depths of the four network structures are different, and the parameter quantity is increased in sequence. YOLOv5s is the first choice of a lightweight network and is convenient to deploy to embedded equipment, the width and the depth of the model are increased to different degrees on the basis of the other 3 models, and the YOLOv5s model is selected by integrating the detection precision and the detection rate. In this embodiment, a YOLOv5 model is used as a water conservancy target identification model, a network structure of the model is shown in fig. 3, the type and precision of model identification are directly related to a sample, data needs to be collected and collated manually in the early stage to form a training data set, parameters are debugged and subjected to multiple iterations to obtain a corresponding weight parameter file, the weight file and a target to be detected are simultaneously input into the network model, and the output is a predicted water conservancy target identification result.
After constructing the water conservancy target recognition model, still need train the water conservancy target recognition model, the training process specifically includes:
acquiring a data set; the data set comprises a plurality of sample images containing sample water conservancy targets and labels corresponding to the sample images; the label is a sample water conservancy target marked by a sample detection frame and position information of the sample water conservancy target in the sample image;
and training a water conservancy target recognition model by adopting the data set to obtain the trained water conservancy target recognition model. Specifically, the method comprises the following steps:
the sample data set is randomly divided into two parts, namely a training set and a test set, wherein the proportion of the training set to the test set is 9. Setting main model parameters during training:
number of learning (epochs): the number of times the model convergence needs to be optimized is adjusted, and the default setting is 200.
Learning rate (learning _ rate): the learning rate is an important hyper-parameter for deep learning, which controls the speed of adjusting the weight of the neural network based on the loss gradient, for which most optimization algorithms (SGD, RMSprop, adam) are involved. The smaller the learning rate, the slower the speed of the loss gradient descent, and the longer the convergence time, and the default learning rate of the system is 0.001.
Sub training set size (batch _ size): the larger the parameter is, the better global optimal solution can be obtained by the model, but the consumption of the corresponding video memory is increased, the video memory shortage can occur when the setting is overlarge, so that the system is directly closed to stop running, and the default setting of the 6G video memory is 3-4; the default of the 12G video memory is 8-10.
Precision evaluation criteria: this embodiment selects mean pixel accuracy (MPA) and mean intersection-over-union (mIoU) as the precision evaluation indexes. MPA computes, for each class, the proportion of pixels correctly classified, and then averages over the classes. mIoU divides the intersection of the predicted region and the ground-truth region by their union to obtain the IoU of a single class, repeats this for every class, and then averages; that is, it is the average, over all classes, of the ratio of intersection to union between the model's prediction and the ground truth.
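A minimal sketch of these two metrics, assuming the standard confusion-matrix definitions (the function name and the rows-are-ground-truth convention are hypothetical):

```python
import numpy as np

def mpa_and_miou(confusion):
    """Compute class-average pixel accuracy (MPA) and mean IoU (mIoU)
    from a square confusion matrix; rows are ground truth, columns
    are predictions."""
    confusion = np.asarray(confusion, dtype=float)
    diag = np.diag(confusion)                      # correctly classified pixels per class
    per_class_acc = diag / confusion.sum(axis=1)   # per-class pixel accuracy
    per_class_iou = diag / (confusion.sum(axis=1) + confusion.sum(axis=0) - diag)
    return per_class_acc.mean(), per_class_iou.mean()
```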
The model automatically saves the best model over all iterations according to MPA and mIoU, and prints the training loss, training precision, test loss, test precision, and so on for each iteration in real time, so that the user can monitor the model's results and precision.
Target ground objects: in this embodiment, the water conservancy ground objects (water conservancy targets) to be extracted cover the "four disorders": illegal occupation, illegal mining, illegal dumping, and illegal construction. Each category is defined as follows:
Illegal occupation: enclosing river channels for cultivation without approval, illegally occupying water areas and beach land, and planting trees, high-stalk crops, and the like that obstruct flood discharge.
Illegal mining: illegally extracting sand and soil from rivers and lakes.
Illegal dumping: throwing, dumping, burying, storing, or stacking garbage and solid waste, and discarding or piling up objects that obstruct flood discharge.
Illegal construction: occupying the shorelines of river and lake water areas for long periods, occupying more than is used, or occupying them abusively; building illegal river-related projects; and constructing buildings, structures, and the like that obstruct flood passage within the river management range.
In this embodiment, a target image shot by the unmanned aerial vehicle is input into the water conservancy target recognition model, which judges the image and produces an output. If the frame contains a recognized water conservancy target, the target is enclosed and labelled with a rectangular frame (detection frame); the width and height of the rectangle are obtained, and the pixel coordinates of its centre point represent the position information of the water conservancy target in the target image. Simple houses are a common type of illegal building, and in this embodiment the water conservancy target recognition model is trained with simple houses as the water conservancy target; for example, fig. 4 shows simple aquaculture houses in a reservoir area identified by the model.
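The detection-frame representation just described (width, height, and centre-point pixel coordinates) amounts to a small conversion from corner coordinates, sketched here with a hypothetical helper:

```python
def box_position(x1, y1, x2, y2):
    """Convert a detection box given by its corner coordinates into its
    width, height, and centre-point pixel coordinates, the form used as
    the target's position information in the image."""
    width, height = x2 - x1, y2 - y1
    cx, cy = x1 + width / 2, y1 + height / 2
    return width, height, (cx, cy)
```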
In this embodiment, the inputting the detection frame image into a trained water conservancy target tracking network model to obtain the state of the water conservancy target in the detection frame image specifically includes:
inputting the detection frame image into a trained water conservancy target tracking network model to obtain predicted position information of the water conservancy target in the detection frame image;
calculating an intersection-over-union (IoU) ratio between the predicted position information and the real position information of the water conservancy target in the next-frame target image;
and determining the state of the water conservancy target in the detection frame image according to the IoU ratio and a preset IoU threshold.
The determining the state of the water conservancy target in the detection frame image according to the IoU ratio and the preset IoU threshold specifically includes:
if the IoU ratio is greater than the preset IoU threshold, the state of the water conservancy target is in tracking; otherwise, the state is tracking stopped. Specifically, the method comprises the following steps:
When an object is recognized as a water conservancy business target (namely, a water conservancy target) by the target recognition model, the object is determined as a tracking target and the state of the target is recorded in a tracking list. A counting line is then preset in the video picture, and a judgment is made when the center pixel point of the rectangular frame of the tracking target passes through the counting line: if the state of the water conservancy target is in the tracking list, the target is judged to be a tracking target (namely, the state of the water conservancy target is in tracking), and the number of water conservancy targets counted in one inspection is increased by one; if the state of the water conservancy target is not in the tracking list, the target is judged not to be a tracking target, and the number of water conservancy business targets remains unchanged.
In this embodiment, the states of the water conservancy target include a static state and a motion state, wherein the motion state includes in tracking and stop tracking. When the state of the water conservancy target is static, the pixel coordinates of the water conservancy target in two continuous frames of target images do not change, that is, the two frames of target images shot by the unmanned aerial vehicle are repeated. The intersection ratio is calculated from the predicted position information and the real position information of the water conservancy target in the next frame of target image and compared with the preset intersection ratio: if the intersection ratio is greater than the preset intersection ratio, the state of the water conservancy target is in tracking; otherwise, the state of the water conservancy target is stop tracking. In this embodiment, the preset intersection ratio is 0.6.
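The intersection-ratio test above can be sketched as follows. The axis-aligned (x1, y1, x2, y2) box format is an assumption for illustration; the 0.6 threshold is the preset intersection ratio of this embodiment.

```python
# Sketch of the IoU comparison that decides in tracking vs. stop tracking.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def target_state(predicted_box, real_box, threshold=0.6):
    # greater than the preset ratio -> still the same tracked target
    return "tracking" if iou(predicted_box, real_box) > threshold else "stop"

print(target_state((0, 0, 10, 10), (1, 1, 11, 11)))    # prints "tracking"
print(target_state((0, 0, 10, 10), (20, 20, 30, 30)))  # prints "stop"
```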
It should be noted that, in this embodiment, if the water conservancy target v appears in the first three frames but does not appear in the fourth frame, it is further judged whether the water conservancy target v exists in the fifth frame image; if the water conservancy target v does not exist in the fifth frame image, the state of the water conservancy target v is set to stop tracking. If the state of the water conservancy target v is stop tracking, but the water conservancy target v appears again in the fifth frame, it is tracked and counted as a new water conservancy target.
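The frame-gap rule above can be sketched as a small state machine: a track tolerates a limited number of consecutive missed frames before its state becomes stop tracking, and a detection after that would start a new track. The max_missed value of 2 is inferred from the frames-4-and-5 example and is an assumption.

```python
# Sketch of the gap rule: missing in frame 4, checked again in frame 5,
# then stopped; a later reappearance is handled as a new target.
class TrackState:
    def __init__(self, max_missed=2):   # assumption inferred from the example
        self.max_missed = max_missed
        self.missed = 0
        self.state = "tracking"

    def update(self, detected):
        if self.state == "stopped":
            # a reappearance after stopping is treated as a new track id
            return self.state
        if detected:
            self.missed = 0
        else:
            self.missed += 1
            if self.missed >= self.max_missed:
                self.state = "stopped"
        return self.state

t = TrackState()
states = [t.update(seen) for seen in [True, True, True, False, False]]
# frames 1-3 seen -> tracking; missed in frames 4 and 5 -> stopped
```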
This embodiment adopts the deepsort model as the water conservancy target tracking network model. The recognized water conservancy targets are judged between continuous video frames to determine whether a recognized target is the same object in multiple frame images; a line-collision counting algorithm is then used to count the water conservancy targets that have been tracked as the same object across multiple continuous frames, and the video frame images are saved together with the recorded water conservancy target information, including the water conservancy target position information. Before that, the water conservancy target tracking network model needs to be constructed:
After the water conservancy targets are recognized, they need to be further tracked and counted, and a target tracking algorithm model is needed to realize water conservancy target tracking. Extracting the apparent features of the water conservancy target for nearest neighbor matching during tracking can effectively improve the target tracking effect under occlusion. As shown in FIG. 5, the deepsort algorithm can be divided into four parts: trajectory processing and state estimation, correlation measurement, cascade matching, and the depth feature descriptor.
Deepsort applies deep appearance features to the model; it is an upgrade of the sort target tracking algorithm that extracts the appearance features of the target for nearest neighbor matching during tracking, which effectively improves the tracking effect under occlusion. The motion state of the target is predicted by a standard Kalman filter based on a constant velocity model and a linear observation model. In the target matching process, if a water conservancy target cannot be matched with any existing track, it is treated as a candidate new target; if it is continuously detected in the next 3 frames, it is confirmed as a new water conservancy target and a new tracking track is generated with it as the initial target; otherwise, no new track is generated. After the water conservancy target is confirmed for tracking, its state is recorded in the tracking list; a counting line is then set in the video picture, and a judgment is made when the center pixel point of the water conservancy target passes through the counting line: if the state of the water conservancy target is in the tracking list, it is judged to be a tracking target (namely, the state of the water conservancy target is in tracking) and the count is increased by one; if the state of the water conservancy target is not in the tracking list, it is judged not to be a tracking target and the count is unchanged. This effectively solves the problem that a water conservancy target is repeatedly counted in every frame during target recognition, achieving accurate counting. As shown in FIG. 6, the horizontal line in the middle of FIG. 6 is the counting line; a water conservancy target is counted only when its center point passes through the counting line, and targets that have not yet passed or have already passed do not participate in the counting. In fig. 6 (a), building1 is the 1st building participating in the counting, and in fig. 6 (b), building4 is the 4th building participating in the counting.
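The line-collision counting above can be sketched as follows: a tracked target is counted once, at the moment its center point crosses the counting line, and never again. The per-frame track dictionaries and the counting line's y-coordinate are illustrative assumptions.

```python
# Sketch of line-collision counting: count each track id once, when its
# center point moves from one side of the horizontal counting line to
# the other. track_frames is a list of {track_id: (cx, cy)} per frame.
def count_line_crossings(track_frames, line_y):
    prev_y = {}       # last seen center-point y per track id
    counted = set()   # ids already counted, to avoid repeated counting
    for frame in track_frames:
        for tid, (cx, cy) in frame.items():
            if tid in prev_y and tid not in counted:
                # opposite signs -> the center crossed the counting line
                if (prev_y[tid] - line_y) * (cy - line_y) < 0:
                    counted.add(tid)
            prev_y[tid] = cy
    return len(counted)

frames = [
    {1: (50, 80), 2: (60, 120)},
    {1: (52, 95), 2: (61, 125)},
    {1: (54, 110), 2: (62, 130)},   # track 1 crosses line_y=100 here
    {1: (56, 115), 2: (63, 135)},
]
print(count_line_crossings(frames, line_y=100))  # prints 1
```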
In this embodiment, before the inputting the position information of the reference water conservancy target in the reference image and the POS data corresponding to the reference image into the spatial positioning model, the method further includes:
and (3) interpolating the POS data by adopting an interpolation algorithm to obtain POS data corresponding to each reference image, specifically:
The target coordinate calculation algorithm needs the POS attitude information of the unmanned aerial vehicle at the moment the video frame is shot. The POS data are obtained by the inspection APP calling the ground control station of the unmanned aerial vehicle. Because POS information can only be returned 6-7 times per second while a conventional video has 24 frames per second, the video frames do not match the obtained POS information, and interpolation according to the flight attitude and speed of the aircraft is needed to obtain the POS data of each frame. In this embodiment, the POS data corresponding to each frame of target image mainly include the current time, unmanned aerial vehicle position, unmanned aerial vehicle attitude, camera attitude, etc., where the geographic coordinates of the unmanned aerial vehicle at a certain time are X_uva, Y_uva, Z_uva, and the unmanned aerial vehicle attitude comprises the yaw angle yaw_uva, roll angle roll_uva and pitch angle pitch_uva. Because the camera is set perpendicular to the unmanned aerial vehicle body to reduce video distortion during inspection, the camera shoots vertically downward, and therefore the attitude of each frame is: yaw = yaw_uva, roll = roll_uva, pitch = pitch_uva. The obtained POS data of the unmanned aerial vehicle are then matched with each frame of video target image, and the spatial position of the video frame is analyzed by using the collinearity equation.
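The interpolation step above might be sketched as a simple linear interpolation between two timestamped POS samples, evaluated at each 24 fps frame time. The field names and the numeric values are illustrative assumptions; angle wrap-around at 360° is not handled in this sketch.

```python
# Sketch: linearly interpolate ~6-7 Hz POS samples to a 24 fps frame time.
def interp_pos(t, pos_a, pos_b):
    """Linear interpolation between two timestamped POS samples."""
    ta, tb = pos_a["t"], pos_b["t"]
    if tb == ta:
        return dict(pos_a)
    w = (t - ta) / (tb - ta)   # fraction of the way from sample a to b
    out = {"t": t}
    for key in ("X", "Y", "Z", "yaw", "roll", "pitch"):
        out[key] = pos_a[key] + w * (pos_b[key] - pos_a[key])
    return out

# Two consecutive POS returns ~0.15 s apart (illustrative values).
a = {"t": 0.00, "X": 100.0, "Y": 200.0, "Z": 50.0, "yaw": 10.0, "roll": 0.0, "pitch": 0.0}
b = {"t": 0.15, "X": 103.0, "Y": 200.0, "Z": 50.0, "yaw": 13.0, "roll": 0.0, "pitch": 0.0}
frame_pos = interp_pos(1.0 / 24, a, b)   # POS for the first 24 fps frame
```

A production version would also unwrap the yaw angle before interpolating, since a crossing from 359° to 1° would otherwise interpolate through 180°.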
And then, constructing a spatial positioning model, and inputting the position information of the reference water conservancy target in the reference image and the POS data corresponding to the reference image into the spatial positioning model to obtain the real geographical position of the reference water conservancy target. And the position information of the reference water conservancy target in the reference image is the central point coordinate of the reference water conservancy target. In this embodiment, a spatial positioning model is further adopted to obtain the image center point of the reference image and the real geographic position coordinates of the four corner points, and the coordinate point positions and the image information are stored.
The following describes the spatial positioning process:
acquiring pixel point coordinates (I, J) of a target point A in an image, wherein I is a pixel column number, J is a pixel row number, and calculating the coordinates of the target point in an image plane coordinate system according to a formula (1):
$$x = \left(I - \frac{n}{2}\right)\Delta,\qquad y = \left(\frac{m}{2} - J\right)\Delta \tag{1}$$
where Δ is the physical size of a single pixel, m is the number of rows of image pixels, and n is the number of columns of image pixels.
The acquired camera parameters are: the focal length of the camera is f, and the size of the CCD array corresponding to the sensor is W x H. Firstly, the space coordinate of the center point of the video projection is calculated by utilizing the collinear equation (formulas (2) and (3)):
$$x - x_0 = -f\,\frac{a_1(X_A - X_s) + b_1(Y_A - Y_s) + c_1(Z_A - Z_s)}{a_3(X_A - X_s) + b_3(Y_A - Y_s) + c_3(Z_A - Z_s)} \tag{2}$$
$$y - y_0 = -f\,\frac{a_2(X_A - X_s) + b_2(Y_A - Y_s) + c_2(Z_A - Z_s)}{a_3(X_A - X_s) + b_3(Y_A - Y_s) + c_3(Z_A - Z_s)} \tag{3}$$
in the formula: x and y are the image plane coordinates of the image point; x_0, y_0 are the interior orientation elements of the image; X_s = X_uva, Y_s = Y_uva, Z_s = Z_uva are the object space coordinates of the shooting point; X_A, Y_A, Z_A are the object space coordinates of the ground point corresponding to (x, y); a_i, b_i, c_i (i = 1, 2, 3) form the rotation matrix composed of the 3 exterior orientation angle elements, as in equation (4).
$$R=\begin{bmatrix}a_1 & a_2 & a_3\\ b_1 & b_2 & b_3\\ c_1 & c_2 & c_3\end{bmatrix} \tag{4}$$
Wherein,
$$a_1=\cos\varphi\cos\kappa-\sin\varphi\sin\omega\sin\kappa;\quad a_2=-\cos\varphi\sin\kappa-\sin\varphi\sin\omega\cos\kappa;\quad a_3=-\sin\varphi\cos\omega;$$
$$b_1=\cos\omega\sin\kappa;\quad b_2=\cos\omega\cos\kappa;\quad b_3=-\sin\omega;$$
$$c_1=\sin\varphi\cos\kappa+\cos\varphi\sin\omega\sin\kappa;\quad c_2=-\sin\varphi\sin\kappa+\cos\varphi\sin\omega\cos\kappa;\quad c_3=\cos\varphi\cos\omega.$$
in the formula: yaw = κ, roll = ω, pitch = φ.
from the collinearity equation:
$$X_A = X_s + (Z_A - Z_s)\,\frac{a_1 x + a_2 y - a_3 f}{c_1 x + c_2 y - c_3 f} \tag{5}$$
$$Y_A = Y_s + (Z_A - Z_s)\,\frac{b_1 x + b_2 y - b_3 f}{c_1 x + c_2 y - c_3 f} \tag{6}$$
wherein (X_A, Y_A) are the coordinates of the target point A in the terrestrial photogrammetry coordinate system, (X_s, Y_s) are the coordinates of the image principal point in the terrestrial photogrammetry coordinate system, and (x, y) are the coordinates of the target point A in the image plane coordinate system. Therefore, the spatial coordinates of the ground point corresponding to an image point can be calculated by using formulas (5) and (6), realizing geographic positioning.
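The geolocation chain above — pixel to image plane by formula (1), rotation matrix by formula (4), ground point by formulas (5) and (6) — can be sketched as follows. All numeric inputs (focal length, altitude, coordinates) are illustrative assumptions; units are metres throughout.

```python
# Sketch of formulas (1), (4), (5) and (6). In the nadir-looking case
# (yaw = roll = pitch = 0) the rotation matrix reduces to the identity.
import math

def pixel_to_plane(I, J, m, n, delta):
    # formula (1): center the pixel grid and scale by pixel size delta
    return (I - n / 2.0) * delta, (m / 2.0 - J) * delta

def rotation(yaw_k, roll_w, pitch_p):
    # formula (4): elements a_i, b_i, c_i from kappa, omega, phi
    ck, sk = math.cos(yaw_k), math.sin(yaw_k)
    cw, sw = math.cos(roll_w), math.sin(roll_w)
    cp, sp = math.cos(pitch_p), math.sin(pitch_p)
    a1 = cp * ck - sp * sw * sk; a2 = -cp * sk - sp * sw * ck; a3 = -sp * cw
    b1 = cw * sk;                b2 = cw * ck;                 b3 = -sw
    c1 = sp * ck + cp * sw * sk; c2 = -sp * sk + cp * sw * ck; c3 = cp * cw
    return (a1, a2, a3), (b1, b2, b3), (c1, c2, c3)

def ground_point(x, y, f, Xs, Ys, Zs, Za, R):
    # formulas (5)-(6): ground coordinates of the point imaged at (x, y)
    (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = R
    denom = c1 * x + c2 * y - c3 * f
    Xa = Xs + (Za - Zs) * (a1 * x + a2 * y - a3 * f) / denom
    Ya = Ys + (Za - Zs) * (b1 * x + b2 * y - b3 * f) / denom
    return Xa, Ya

R = rotation(0.0, 0.0, 0.0)   # camera pointing straight down
Xa, Ya = ground_point(0.001, 0.0, 0.05, 500.0, 600.0, 100.0, 0.0, R)
# 100 m altitude, 50 mm focal length, 1 mm off-axis -> approx. (502.0, 600.0)
```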
After the target recognition network model recognizes a specific water conservancy target in a video frame, the pixel coordinates of the center point of its rectangular frame are transmitted into the video spatial positioning model, and the real geographic coordinates of the specific water conservancy target can be calculated according to the above steps.
In this embodiment, the determining, according to the real geographic position of the reference water conservancy target and a preset river channel management range line, whether the reference water conservancy target is within the preset river channel management range line specifically includes:
and judging whether the reference water conservancy target is in the preset river channel management range line or not according to the real geographical position of the reference water conservancy target and the preset river channel management range line by adopting a ray method. Specifically, the method comprises the following steps:
As shown in fig. 7, determining whether a point is within a polygon is a requirement often faced when processing spatial data, such as the clicking function in GIS software, selecting points within a polygon according to the polygon boundary, finding intersections, selecting points not within a polygon, and so on. The ray method starts from the point to be judged, casts a ray in the horizontal direction to the right (or left), and counts the number of intersection points of the ray with each edge of the polygon: if the number of intersection points is odd, the point is located inside the polygon; if even, the point is located outside the polygon. The algorithm can also judge composite polygons correctly. The key of the ray method is to correctly calculate the intersection of the ray with the polygon edges; if the ray intersects an edge, the number of intersection points is increased by 1. As shown in fig. 8, if the number of intersection points of ray a with the polygon is 3, the end point of ray a is inside the polygon; if the number of intersection points of ray b with the polygon is 4, the end point of ray b is not inside the polygon. In this embodiment, it is also stipulated that a line segment overlapping the ray, or the ray passing through the lower end point of a line segment, is not counted as an intersection. The disjoint cases are excluded first, and both cases of fig. 9 need to be excluded. Taking a line segment above the ray as an example of the exclusion method: if the origin of the ray is (x, y), the starting end point of the line segment is (x_s, y_s), and the ending end point is (x_e, y_e), then if y_s > y and y_e > y, the line segment can be judged to be above the ray.
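The ray method above can be sketched as follows. The half-open crossing test `(ys > py) != (ye > py)` implements the stipulation that a segment overlapping the ray, or the ray passing through a segment's lower end point, is not counted as an intersection. The square polygon is an illustrative assumption.

```python
# Sketch of the ray method: cast a horizontal ray to the right from the
# query point and count edge crossings; an odd count means inside.
def point_in_polygon(px, py, polygon):
    inside = False
    n = len(polygon)
    for i in range(n):
        xs, ys = polygon[i]
        xe, ye = polygon[(i + 1) % n]
        # half-open rule: horizontal edges and lower end points never cross
        if (ys > py) != (ye > py):
            # x-coordinate where this edge crosses the ray's height
            x_cross = xs + (py - ys) * (xe - xs) / (ye - ys)
            if x_cross > px:          # crossing to the right of the point
                inside = not inside   # odd number of toggles -> inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon(5, 5, square))    # prints True
print(point_in_polygon(15, 5, square))   # prints False
```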
In this embodiment, the method further includes: and displaying the real geographical position of the reference water conservancy target which is judged to be illegal. Specifically, the method comprises the following steps:
The real geographic position coordinates of the reference water conservancy target obtained by the above calculation are uploaded to the GIS system, the ray-method position judgment is performed with the inspection management range vector line (the preset river channel management range line) to determine whether the reference water conservancy target is within the river channel management range line, and thereby whether the recognized reference water conservancy target is illegal; after the reference water conservancy target is judged to be illegal, the real geographic information of the target is uploaded to the GIS system for display. In addition, according to the proportional relation between the real geographic position coordinate point of each point of the river channel management range line and the rectangle determined by the four corner points, the corresponding positions of the points of the river channel management range line in the video pixels are calculated and drawn in the corresponding reference images; the video frames are then converted into an rtmp live streaming service and uploaded to a streaming media server, and the GIS system pulls the video stream for visual display.
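The proportional mapping from the management range line's geographic coordinates back into video pixel coordinates, via the rectangle determined by the frame's four geo-referenced corner points, might be sketched as follows. An axis-aligned geographic rectangle is assumed for simplicity; a real frame footprint is generally a rotated quadrilateral, and all numeric values are illustrative.

```python
# Sketch: map a geographic point into pixel coordinates by its
# proportional position inside the frame's geo-referenced rectangle.
def geo_to_pixel(gx, gy, geo_rect, img_w, img_h):
    """geo_rect = (min_x, min_y, max_x, max_y); axis-aligned assumption."""
    min_x, min_y, max_x, max_y = geo_rect
    u = (gx - min_x) / (max_x - min_x) * img_w
    # image rows grow downward while northing grows upward
    v = (max_y - gy) / (max_y - min_y) * img_h
    return u, v

# A 1920x1080 frame covering a 100 m x 56.25 m ground rectangle (assumed).
u, v = geo_to_pixel(550.0, 228.125, (500.0, 200.0, 600.0, 256.25), 1920, 1080)
# midpoint of the ground rectangle -> image center (960.0, 540.0)
```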
In this embodiment, an intelligent inspection system is provided based on an unmanned aerial vehicle; as shown in fig. 10, the intelligent inspection system is composed of six parts: an unmanned aerial vehicle platform, an unmanned aerial vehicle inspection APP, a video streaming service, an intelligent interpretation service, a data management system, and a comprehensive application system. Fig. 11 is the large screen interface of the intelligent inspection system.
(1) Unmanned aerial vehicle platform
A rotor unmanned aerial vehicle is used, and appropriate sensor equipment is selected according to the inspection task; commonly used sensors include an optical camera, a thermal infrared camera, a multispectral imager, a laser scanner, and the like.
(2) Unmanned aerial vehicle inspection APP (unmanned aerial vehicle management APP)
The APP mainly realizes functions such as inspection task receiving, unmanned aerial vehicle flight control, flight state monitoring, flight POS information acquisition, and collected data return. The collected data return function can be realized by adding a 4G/5G module to the unmanned aerial vehicle platform.
(3) Video streaming service
The video streaming service receives the video information sent back by the unmanned aerial vehicle inspection APP, transcodes it while storing the data, and pushes it to the intelligent analysis service in protocol formats such as flv and hls.
(4) Intelligent analytics service
The intelligent analysis service makes full use of the powerful parallel processing capability of the GPU through multiple techniques such as deep learning and pattern recognition to extract frames from the video, or to rapidly analyze the image data directly pushed by the unmanned aerial vehicle inspection APP, match the POS information, recognize and classify problems, calculate the spatial position, and keep screenshot evidence; it then calls the data management system interface to write the analysis results into the database and reminds the management user to check and carry out further processing. The video frame data also need to be restored into a video stream for synchronous display in the comprehensive application system.
(5) Data management system
The data management system is mainly used for maintaining and managing the operation data of the whole system, the state data of the unmanned aerial vehicle, the intelligent identification result data and the like so as to ensure the normal operation of the system and provide data support for the comprehensive application system.
(6) Integrated application system
In a mode combining multiple terminals (desktop and mobile) and multiple forms (two-dimensional, large screen, APP), functions such as inspection route planning, inspection task management, flight state display, flight emergency control, inspection problem alarm, before-and-after comparative analysis, and problem ledger management are realized.
Aiming at the pain points of river and lake supervision, a water conservancy target recognition model, a water conservancy target tracking network model and a spatial positioning model for water conservancy specific objects (water conservancy targets) are constructed. Specific water conservancy targets in the unmanned aerial vehicle video images are recognized by the water conservancy target recognition model, and the recognized specific water conservancy targets undergo real geographic coordinate calculation through the spatial positioning model. After the geographic coordinate information of a specific water conservancy target is obtained, it is superposed with the river channel management range vector data for judgment; the water conservancy targets within the river channel management range line are uploaded to the alarm platform, and the real geographic coordinates of the illegal water conservancy target together with the screenshot containing the water conservancy target are sent to the GIS (geographic information system) for visual display. Meanwhile, the river channel management range vector data can be inversely calculated from geographic coordinates into pixel coordinates in the real-time video frame and, after being superposed on the video, pushed to the GIS system for synchronous display.
Compared with the prior art, the method recognizes water conservancy targets with high precision; after recognition it can track and count the water conservancy targets, calculate the real geographic coordinate information with the spatial positioning model during counting, and superpose the geographic coordinate information on the river channel management vector range for judgment, so as to accurately obtain the illegal behaviors within the river channel management range line and push alarm information. The illegal behaviors can also be synchronously and visually displayed in the GIS (geographic information system), and the river channel management range line data can be inversely calculated, superposed in the video, and then pushed to the GIS system for display.
Example 2
This embodiment provides a river and lake supervisory systems based on unmanned aerial vehicle, includes:
the target image acquisition module is used for acquiring multi-frame target images of a target supervision area; the target image is shot by a camera on the unmanned aerial vehicle.
The water conservancy target recognition module is used for respectively inputting a plurality of frames of target images into a trained water conservancy target recognition model, marking a water conservancy target in the target images by adopting a detection frame, and determining position information of the water conservancy target in the target images; the trained water conservancy target recognition model is a model obtained by training by taking a sample image as input and taking a sample water conservancy target marked by a sample detection frame and position information of the sample water conservancy target in the sample image as output.
And the detection frame image acquisition module is used for intercepting the images in each detection frame to obtain a plurality of detection frame images.
The water conservancy target tracking module is used for inputting the detection frame image into a trained water conservancy target tracking network model to obtain the state of the water conservancy target in the detection frame image; the states include stationary, track, and stop tracking; the trained water conservancy target tracking network model is a model obtained by training by taking a sample detection frame image as input and taking the state of a sample water conservancy target in the sample detection frame image as output.
The counting module is used for calculating the number of reference water conservancy targets by adopting a line collision counting method according to the states of the water conservancy targets in all the detection frame images and acquiring position information of the reference water conservancy targets in the reference images during counting; the reference water conservancy target is a water conservancy target in tracking; the reference image is a target image when the reference water conservancy target passes through a preset counting line.
The real geographical position acquisition module is used for inputting the position information of the reference water conservancy target in the reference image and the POS data corresponding to the reference image into a spatial positioning model to obtain the real geographical position of the reference water conservancy target; the POS data comprises current time, unmanned aerial vehicle spatial position information, unmanned aerial vehicle attitude information and camera attitude information; the spatial positioning model is obtained by training by taking the position information of the sample reference water conservancy target in the sample reference image and the sample POS data corresponding to the sample reference image as input and the real geographical position of the sample reference water conservancy target as output.
The judging module is used for judging whether the reference water conservancy target is within the preset river channel management range line according to the real geographic position of the reference water conservancy target and the preset river channel management range line; if so, the reference water conservancy target is judged as an illegal behavior; if not, the reference water conservancy target is judged not to be an illegal behavior.
In this embodiment, the display device further comprises a display module;
and the display module is used for displaying the real geographical position of the reference water conservancy target which is judged to be illegal.
The emphasis of each embodiment in the present specification is on the difference from the other embodiments, and the same and similar parts among the various embodiments may be referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principle and the embodiment of the present invention are explained by applying specific examples, and the above description of the embodiments is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the foregoing, the description is not to be taken in a limiting sense.

Claims (10)

1. A river and lake supervision method based on an unmanned aerial vehicle is characterized by comprising the following steps:
acquiring multi-frame target images of a target supervision area; the target image is shot by a camera on the unmanned aerial vehicle;
respectively inputting a plurality of frames of target images into a trained water conservancy target recognition model, marking a water conservancy target in the target images by adopting a detection frame, and determining position information of the water conservancy target in the target images; the trained water conservancy target identification model is a model obtained by training by taking a sample image as input and taking a sample water conservancy target marked by a sample detection frame and position information of the sample water conservancy target in the sample image as output;
intercepting the image in each detection frame to obtain a plurality of detection frame images;
inputting the detection frame image into a trained water conservancy target tracking network model to obtain the state of the water conservancy target in the detection frame image; the states include stationary, track, and stop tracking; the trained water conservancy target tracking network model is a model obtained by training by taking a sample detection frame image as input and taking the state of a sample water conservancy target in the sample detection frame image as output;
calculating the number of reference water conservancy targets by adopting a line collision counting method according to the states of the water conservancy targets in all the detection frame images, and acquiring position information of the reference water conservancy targets in a reference image during counting; the reference water conservancy target is a water conservancy target in tracking; the reference image is a target image when the reference water conservancy target passes through a preset counting line;
inputting the position information of the reference water conservancy target in a reference image and POS data corresponding to the reference image into a spatial positioning model to obtain the real geographical position of the reference water conservancy target; the POS data comprises current time, unmanned aerial vehicle spatial position information, unmanned aerial vehicle attitude information and camera attitude information; the spatial positioning model is a model obtained by training by taking the position information of a sample reference water conservancy target in a sample reference image and sample POS data corresponding to the sample reference image as input and the real geographic position of the sample reference water conservancy target as output;
judging whether the reference water conservancy target is within the preset river channel management range line according to the real geographical position of the reference water conservancy target and the preset river channel management range line; if so, the reference water conservancy target is judged as an illegal behavior; if not, the reference water conservancy target is judged not to be an illegal behavior.
2. The unmanned-aerial-vehicle-based river and lake supervision method according to claim 1, further comprising:
and displaying the real geographical position of the reference water conservancy target which is judged to be illegal.
3. The unmanned aerial vehicle-based river and lake supervision method according to claim 1, wherein before the step of respectively inputting the multiple frames of target images into the trained water conservancy target recognition model, the method further comprises: training the water conservancy target recognition model, wherein the training process is as follows:
acquiring a data set; the data set comprises a plurality of sample images containing sample hydraulic targets and labels corresponding to the sample images; the label is a sample water conservancy target marked by a sample detection frame and position information of the sample water conservancy target in the sample image;
and training the water conservancy target recognition model by adopting the data set to obtain the trained water conservancy target recognition model.
4. The unmanned aerial vehicle-based river and lake supervision method according to claim 1, wherein the step of inputting the detection frame image into a trained water conservancy target tracking network model to obtain the state of the water conservancy target in the detection frame image specifically comprises the steps of:
inputting the detection frame image into a trained water conservancy target tracking network model to obtain predicted position information of the water conservancy target in the detection frame image;
calculating an intersection ratio according to the predicted position information and the real position information of the water conservancy target in the next frame of target image;
and determining the state of the water conservancy target in the detection frame image according to the intersection ratio and a preset intersection ratio.
5. The unmanned-aerial-vehicle-based river and lake supervision method according to claim 4, wherein the determining of the state of the water conservancy target in the detection frame image according to the intersection ratio and a preset intersection ratio specifically comprises:
and if the intersection ratio is greater than the preset intersection ratio, the state of the water conservancy target is in tracking, otherwise, the state of the water conservancy target is stop tracking.
6. The unmanned-aerial-vehicle-based river and lake supervision method according to claim 1, wherein before inputting the position information of the reference water conservancy target in the reference image and the POS data corresponding to the reference image into the spatial positioning model, the method further comprises:
and interpolating the POS data by adopting an interpolation algorithm to obtain the POS data corresponding to each reference image.
7. The unmanned aerial vehicle-based river and lake supervision method according to claim 1, wherein the step of judging whether the reference water conservancy target is within the preset river management range line according to the real geographic position of the reference water conservancy target and a preset river management range line specifically comprises:
and judging, by adopting a ray method, whether the reference water conservancy target is within the preset river channel management range line according to the real geographical position of the reference water conservancy target and the preset river channel management range line.
8. The unmanned aerial vehicle-based river and lake supervision method according to claim 1, wherein the trained water conservancy target tracking network model is a DeepSORT model.
9. An unmanned-aerial-vehicle-based river and lake supervision system, characterized by comprising:
the target image acquisition module is used for acquiring multi-frame target images of a target supervision area; the target image is shot by a camera on the unmanned aerial vehicle;
the water conservancy target recognition module is used for respectively inputting a plurality of frames of target images into a trained water conservancy target recognition model, marking a water conservancy target in the target images by adopting a detection frame, and determining position information of the water conservancy target in the target images; the trained water conservancy target recognition model is a model obtained by training by taking a sample image as input and taking a sample water conservancy target marked by a sample detection frame and position information of the sample water conservancy target in the sample image as output;
the detection frame image acquisition module is used for intercepting the image in each detection frame to obtain a plurality of detection frame images;
the water conservancy target tracking module is used for inputting the detection frame image into a trained water conservancy target tracking network model to obtain the state of the water conservancy target in the detection frame image; the states include stationary, track, and stop tracking; the trained water conservancy target tracking network model is a model obtained by training by taking a sample detection frame image as input and taking the state of a sample water conservancy target in the sample detection frame image as output;
the counting module is used for calculating the number of reference water conservancy targets by adopting a line-crossing ("line collision") counting method according to the states of the water conservancy targets in all the detection frame images, and acquiring position information of the reference water conservancy target in the reference image during counting; the reference water conservancy target is a water conservancy target in the tracking state; the reference image is the target image at the moment when the reference water conservancy target passes through a preset counting line;
the real geographical position acquisition module is used for inputting the position information of the reference water conservancy target in the reference image and the POS data corresponding to the reference image into a spatial positioning model to obtain the real geographical position of the reference water conservancy target; the POS data comprises current time, unmanned aerial vehicle spatial position information, unmanned aerial vehicle attitude information and camera attitude information; the spatial positioning model is a model obtained by training by taking the position information of a sample reference water conservancy target in a sample reference image and sample POS data corresponding to the sample reference image as input and the real geographic position of the sample reference water conservancy target as output;
the judging module is used for judging whether the reference water conservancy target is within the preset river channel management range line according to the real geographic position of the reference water conservancy target and the preset river channel management range line; if yes, the reference water conservancy target is judged as an illegal action; if not, the reference water conservancy target is judged as a legal action.
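One common reading of the counting module's line-crossing scheme is sketched below: a track is counted once, the first time its centre position moves from one side of a counting line to the other. The horizontal counting line and the per-track centre histories are illustrative assumptions.

```python
def count_line_crossings(tracks, line_y):
    """Count each track at most once, on the first frame pair whose
    centre positions straddle (or touch) the horizontal counting line.

    tracks: {track_id: [(cx, cy), ...]} centre positions per frame.
    Returns (count, list_of_counted_track_ids).
    """
    counted = []
    for track_id, centres in tracks.items():
        for (_, y0), (_, y1) in zip(centres, centres[1:]):
            # Opposite signs (or a touch) of the offsets means a crossing.
            if (y0 - line_y) * (y1 - line_y) <= 0 and y0 != y1:
                counted.append(track_id)
                break  # count this track only once
    return len(counted), counted
```

The counted track ids would then index the reference images and their POS data for the spatial positioning step.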
10. The drone-based river and lake supervision system according to claim 9, further comprising: a display module;
and the display module is used for displaying the real geographical position of the reference water conservancy target which is judged to be illegal.
CN202211374088.4A 2022-11-04 2022-11-04 River and lake supervision method and system based on unmanned aerial vehicle Pending CN115690628A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211374088.4A CN115690628A (en) 2022-11-04 2022-11-04 River and lake supervision method and system based on unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211374088.4A CN115690628A (en) 2022-11-04 2022-11-04 River and lake supervision method and system based on unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
CN115690628A true CN115690628A (en) 2023-02-03

Family

ID=85048084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211374088.4A Pending CN115690628A (en) 2022-11-04 2022-11-04 River and lake supervision method and system based on unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN115690628A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119007052A (en) * 2024-10-24 2024-11-22 杭州定川信息技术有限公司 Intelligent river patrol method based on unmanned aerial vehicle
CN119026629A (en) * 2024-08-08 2024-11-26 成都华芯智云科技有限公司 A passenger flow counting method based on infrared rays


Similar Documents

Publication Publication Date Title
CN111145545B (en) UAV monitoring system and method for road traffic behavior based on deep learning
Van Etten et al. The Multi-Temporal Urban Development SpaceNet Dataset
CN103425967B (en) A kind of based on stream of people's monitoring method of pedestrian detection and tracking
KR102203135B1 (en) Method and system for detecting disaster damage information based on artificial intelligence using drone
CN109190508A (en) A kind of multi-cam data fusion method based on space coordinates
CN108154110B (en) Intensive people flow statistical method based on deep learning people head detection
CN108596054A (en) A kind of people counting method based on multiple dimensioned full convolutional network Fusion Features
CN103795976A (en) Full space-time three-dimensional visualization method
CN115690628A (en) River and lake supervision method and system based on unmanned aerial vehicle
Hinz et al. Car detection in aerial thermal images by local and global evidence accumulation
KR101645959B1 (en) The Apparatus and Method for Tracking Objects Based on Multiple Overhead Cameras and a Site Map
CN107295230A (en) A kind of miniature object movement detection device and method based on thermal infrared imager
CN110246160A (en) Detection method, device, equipment and the medium of video object
CN102509287A (en) Finding method for static target based on latitude and longitude positioning and image registration
CN113505643B (en) Method and related device for detecting violation target
CN108471497A (en) A kind of ship target real-time detection method based on monopod video camera
CN114184175A (en) A method for constructing 3D model of complex terrain based on UAV video streaming route
CN110909625A (en) Computer vision basic network training, identifying and constructing method and device
CN114067438A (en) A method and system for apron human motion recognition based on thermal infrared vision
CN104702917A (en) Video concentrating method based on micro map
CN117152971A (en) AI traffic signal optimization method based on high-altitude panoramic video
LU500512B1 (en) Crowd distribution form detection method based on unmanned aerial vehicle and artificial intelligence
CN114782677A (en) Image processing method, image processing apparatus, computer device, storage medium, and computer program
CN109977796A (en) Trail current detection method and device
CN110276379A (en) A kind of the condition of a disaster information rapid extracting method based on video image analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination