CN109708659B - Distributed intelligent photoelectric low-altitude protection system - Google Patents
- Publication number: CN109708659B
- Application number: CN201811586162.2A
- Authority
- CN
- China
- Prior art keywords
- target
- subsystem
- information
- photoelectric
- station
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention discloses a distributed intelligent photoelectric low-altitude protection system comprising a photoelectric distributed imaging system based on active laser illumination, a server, and a display-and-control terminal. The imaging system is connected to the server and the terminal over a local area network; the terminal obtains service information from the server and provides the human-machine interface. The imaging system itself consists of a number of single-station photoelectric detection subsystems deployed in a distributed layout. The system operates in six steps: system self-check and parameter configuration; photoelectric detection of low-altitude targets; resource scheduling across the distributed subsystems; binocular/multi-view visual imaging; automatic tracking and evidence recording of the target video; and human-machine interaction with comprehensive information display.
Description
Technical Field
The invention relates to the field of low-altitude unmanned aerial vehicle (UAV) surveillance and control, and in particular to a distributed intelligent photoelectric low-altitude protection system.
Background
In recent years, small and miniature civil unmanned aerial vehicles have developed rapidly and their numbers have grown geometrically, while their regulation has lagged badly behind. Unauthorized and undisciplined flying is increasingly common and poses new threats to both civil and military aviation safety. According to FAA statistics, more than 1000 drone-related incidents affecting flight are recorded each year, and the number is rising year on year. UAVs therefore pose a serious threat to aviation safety; developing a mature, reliable, low-cost system that can effectively detect, identify, and monitor low-altitude UAVs using existing technology and equipment is of great significance for safeguarding military and civil aviation.
At present, the means for detecting, identifying, and monitoring small and miniature UAV targets are limited: it is difficult to obtain a rapid three-dimensional fix on a UAV, to classify and identify the target, to monitor an operating UAV effectively, and to satisfy all-weather operating requirements. Existing systems generally use radar as the core detection sensor and are expensive. Patent CN101285883 proposes a mobile bird-detection radar, which achieves only two-dimensional detection and cannot classify or identify targets. Patent CN106932753A proposes an anti-drone system based on monitoring the drone's remote-control and telemetry signals with cross direction-finding for positioning; its detection range is small, its positioning accuracy is poor, and it cannot acquire the drone's identity information. Patent CN205958746U proposes an anti-drone detection system combining radar and photoelectric sensors, but it struggles to recognize targets and cannot monitor a target under poor visibility.
Disclosure of Invention
In order to solve the problems, the invention provides a distributed intelligent photoelectric low-altitude protection system, which comprises a photoelectric distribution imaging system based on laser active illumination, a server and a display control terminal, wherein the photoelectric distribution imaging system is connected with the server and the display control terminal through a local area network, and the display control terminal is also connected with the server; the photoelectric distribution imaging system comprises a plurality of single-station photoelectric detection subsystems which are arranged in a distributed mode.
The distributed intelligent photoelectric low-altitude protection system comprises the following working steps:
s1, system detection and parameter configuration: the system carries out power-on self-check, enters a working state when no abnormality exists, and carries out working parameter configuration, including the working parameter configuration of each single-station photoelectric detection subsystem and the server;
S2, photoelectric detection of low-altitude targets: each single-station photoelectric detection subsystem performs an omnidirectional automatic search to find low-altitude intruding targets; after receiving the target's track or position information returned by the server, it captures, tracks, and identifies the target, reports the target's pose and feature information to the server, determines whether the target is to be intercepted, and finally calculates the target's landing position. Throughout this step, as soon as the target enters the detection field of view of any single-station subsystem, imagery is recorded and the target's position is annotated in real time;
s3, distributed subsystem resource scheduling: the server roughly calculates the azimuth and pitching information of the target according to the target related information reported by the single-station photoelectric detection subsystems, and starts the single-station photoelectric detection subsystems around the single-station photoelectric detection subsystem which detects the target for the first time by combining the relative positions of the single-station photoelectric detection subsystems;
S4, binocular/multi-view visual imaging: a multi-station photoelectric detection subsystem is formed from two or more single-station photoelectric detection subsystems; by configuring the working state and visual-axis pointing of adjacent stations and using the camera imaging model, the target's position is measured by binocular or multi-view vision;
S5, automatic tracking and evidence recording of the target video: the acquired target azimuth, elevation, and position information is converted into photoelectric turntable and camera control parameters so that the target is imaged clearly; the images are then processed, the target's motion parameters are estimated, and the turntable and camera control parameters are adjusted to achieve automatic video tracking of the target;
s6, displaying man-machine interaction and comprehensive information: and the display control terminal reads and displays the fused target comprehensive information, map information and system real-time working state information in the server.
Further, the single-station photoelectric detection subsystem comprises: the system comprises an optical machine frame, a measurement television subsystem, a laser illumination subsystem, a servo subsystem, a master control communication subsystem and an image recording subsystem, wherein the master control communication subsystem is connected with a server, a display control terminal, the optical machine frame, the measurement television subsystem, the laser illumination subsystem and the servo subsystem, the image recording subsystem is connected with the measurement television subsystem and the servo subsystem, and the servo subsystem is further connected with the optical machine frame.
Further, in step S2, the single-station photo-detection subsystem performs the following steps:
s21, in the first stage of target flight, the single station photoelectric detection subsystem carries out omnibearing automatic search and finds out the target of low-altitude invasion;
s22, in the second stage of target flight, after the single-station photoelectric detection subsystem receives the track or position information of the target returned by the server, the optical-mechanical frame is quickly turned back to point to the target, and the target is confirmed and captured;
S23, in the third stage of target flight, the single-station photoelectric detection subsystem tracks the target stably, builds up the target's flight information, identifies and confirms the target, reports the target's pose, feature information, situation, and threat level together with its flight path and heading, and extrapolates the position and time at which the target will enter the fourth stage;
s24, in the fourth stage of target flight, whether the target is intercepted or not is confirmed, if interception countermeasures are needed, the single-station photoelectric detection subsystem needs to observe and record the whole interception process besides keeping target tracking and information output in the stage;
and S25, calculating the landing position of the target: when the target's elevation (pitch) angle falls below a certain threshold, the target is judged to have landed or to be in the process of landing.
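The landing check in S25 can be sketched as a simple line-of-sight projection. This is a minimal flat-terrain illustration, not the patent's method: the pitch threshold, the station height, and the east/north coordinate convention are all assumed values chosen for the example.

```python
import math

def landing_point(station_xy, station_height_m, azimuth_deg, pitch_deg,
                  pitch_threshold_deg=-1.0):
    """Hypothetical sketch of step S25: once the measured pitch angle
    falls below a threshold, project the line of sight onto the ground
    plane (assumed flat) to estimate where the target has landed.
    Returns None while the target is still judged airborne."""
    if pitch_deg > pitch_threshold_deg:
        return None  # pitch still above threshold: target airborne
    # horizontal range at which the depressed sight line meets the ground
    ground_range = station_height_m / math.tan(math.radians(-pitch_deg))
    az = math.radians(azimuth_deg)
    x = station_xy[0] + ground_range * math.sin(az)  # east offset
    y = station_xy[1] + ground_range * math.cos(az)  # north offset
    return (x, y)
```

For a sensor 10 m up looking 45 degrees down due east, the estimated landing point is 10 m east of the station, as the geometry requires.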
Further, in step S2, the server performs the following steps:
target detection: background estimation is realized by adopting anisotropic background prediction, false targets are further removed by utilizing the height characteristics, gray distribution characteristics, scene prior information and foreground change information of low-altitude targets after residual images are obtained, and then the false alarm probability is reduced by multi-frame accumulative detection;
target tracking: the server returns the target position and the pitching information, the target position and the pitching information are converted into the focal distance of a camera of the television subsystem and the rotating speed and the rotating direction of a photoelectric turntable of the servo subsystem, then the target motion parameters are estimated, the focal distance of the camera and the rotating speed and the rotating direction of the photoelectric turntable are adjusted, and the target is tracked in real time;
feature extraction: a large number of images of UAVs and birds are collected under different states, viewing angles, and illumination conditions and preprocessed to serve as positive samples, while a large number of background images are collected as negative samples; contour features, shape features, statistical features, entropy features, and GMM parameter features are then adopted as the target's identification features;
target identification: and inputting the recognition features of the target into an SVM classifier for training, and classifying and recognizing the target by using the trained SVM classifier.
Further, in step S2, the content reported to the server by the single-station photoelectric detection subsystem includes, in addition to the pose and feature information of the target: coordinate information and system configuration information of the single-station photoelectric detection subsystem, and video information of the target.
Furthermore, the system configuration information of the single-station photoelectric detection subsystem includes the measurement television subsystem's focal length, sensor-array size, and pixel count, and the pointing of the servo subsystem's photoelectric turntable.
Furthermore, the display control terminal comprises a functional area menu bar, a system parameter setting area, a system monitoring information display area, a system comprehensive situation information display area and a photoelectric video information display area.
The beneficial effects of the invention are as follows: low-altitude protection is achieved with a distributed photoelectric detection system. Compared with existing technical routes, such as protection built from a low-altitude surveillance radar plus a radio-spectrum detection system, it offers low cost, strong operability, and flexible station siting. Moreover, the system emits no electromagnetic radiation, so it adapts well to sites with strict constraints on equipment operating in the electromagnetic environment.
Drawings
FIG. 1 is a block diagram of the physical components of the system;
FIG. 2 is a block diagram of a single station photoelectric detection system;
FIG. 3 is a cross-linking relationship of the units of the system;
FIG. 4 is a schematic diagram of a single station photodetector subsystem operation;
FIG. 5 is a low altitude intruder detection process;
FIG. 6 is an intrusion target tracking workflow;
FIG. 7 is an SVM based target feature extraction;
FIG. 8 is a low altitude intruder identification process;
FIG. 9 is a schematic diagram of the operation of the multi-station networking photoelectric detection subsystem;
fig. 10 is a schematic view of drone-operator positioning based on the vision system.
Detailed Description
In order to more clearly understand the technical features, objects, and effects of the present invention, embodiments of the present invention will now be described with reference to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
As shown in fig. 1, the distributed intelligent photoelectric low-altitude protection system includes a photoelectric distributed imaging system based on active laser illumination, a server, and a display-and-control terminal. The imaging system is connected to the server and the terminal over a local area network, and the terminal is also connected to the server. The imaging system includes a plurality of single-station photoelectric detection subsystems deployed in a distributed layout. Specifically:
the single-station photoelectric detection subsystem receives a control instruction sent by the display control terminal and realizes that: low-altitude intrusion target capturing, target detecting, evidence obtaining and tracking, laser active lighting and intrusion flying target identification.
The local area network provides a physical link for the whole system for control instructions, service data, system state data, remote updating and upgrading and other data.
The server receives and stores the image information collected by the single-station photoelectric detection subsystems and, combining each subsystem's coordinate information and working parameters with the acquired target azimuth and range information, performs resource management and scheduling for the distributed photoelectric detection system. It also receives and analyzes the data returned by the photoelectric subsystems about the intruder, completing intruder detection, feature extraction, identification, tracking, and evidence recording, and stores the corresponding service data; it forms comprehensive target messages including the fused target track, type, identity information, and tracking video; it loads offline 2D/3D map information and runs commanded self-checks on the working state to produce self-test information; and it transmits the comprehensive target information, map information, and working-state information to the integrated display-and-control terminal for display.
The comprehensive display and control terminal carries out man-machine interaction, sets working parameters of the system and displays target information, video information, map information and working state information acquired by the system.
In addition, the system also comprises a power supply for supplying power to each part of the system, and the correlation relationship and the information interaction relationship of each component unit of the system are shown in fig. 3.
Example 2
This embodiment is an optimization of embodiment 1. As shown in fig. 2, the single-station photoelectric detection subsystem includes: an optical-mechanical frame, a measurement television subsystem, a laser illumination subsystem, a servo subsystem, a master-control communication subsystem, and an image recording subsystem. The master-control communication subsystem is connected to the server, the display-and-control terminal, the optical-mechanical frame, the measurement television subsystem, the laser illumination subsystem, and the servo subsystem; the image recording subsystem is connected to the measurement television subsystem and the servo subsystem; and the servo subsystem is further connected to the optical-mechanical frame. Specifically, the optical-mechanical frame provides structural support for the system; the measurement television subsystem images the target and completes target pose measurement through multi-view vision; under low-visibility conditions, the laser illumination subsystem supplements the lighting to extend the detection capability of the measurement television subsystem; the servo subsystem provides two-degree-of-freedom servo control and adjusts the optical-axis pointing of the measurement television subsystem; the master-control communication subsystem handles whole-unit control and communication; and the image recording subsystem serves as the system's storage, holding the images collected or recorded during operation.
Example 3
In this embodiment, based on the optimization embodiment of embodiment 2, as shown in fig. 4, the specific work flow of the single-station photodetection subsystem is as follows:
1) in the AB stage of target flight, the single-station photoelectric detection subsystem carries out omnibearing automatic search and finds out the target of low-altitude invasion;
2) in the BC stage of target flight, after the single-station photoelectric detection subsystem receives the flight path or position information of the target returned by the server, the optical-mechanical frame is quickly turned back to point to the target, and the target is confirmed and captured;
3) in the CD stage of target flight, the single-station photoelectric detection subsystem tracks the target stably, builds up the target's flight information, identifies and confirms the target, reports the target's pose, feature information, situation, and threat level together with its flight path and heading, and extrapolates the position and time at which the target will enter the DE stage;
4) in the DE stage of target flight, whether the target is intercepted or not is confirmed, if interception countermeasures are needed, the single-station photoelectric detection subsystem needs to observe and record the whole interception process in addition to keeping target tracking and information output in the stage;
5) and calculating the landing position of the target, wherein when the pitch angle of the target is less than a certain value, the target is shown to have landed or is in a landing state.
In each stage, as long as the target enters the view field of the detector, the image needs to be recorded, and the position information of the target is marked in real time so as to be convenient for subsequent analysis and evidence collection.
Example 4
In this embodiment, based on the optimization embodiment of embodiment 3, in the process of executing a specific workflow by the single-station photoelectric detection subsystem, the tasks of target detection, target tracking, feature extraction, and target identification are executed synchronously by the server, which is specifically as follows:
1) target detection
Under staring imaging or initial-guidance imaging conditions, an intruding UAV or bird appears as a small target within the field of view. To classify, recognize, and track low-altitude intruders, small-target detection against a complex background must therefore be accomplished first. During tracking, because the platform is highly agile, a wide-field imaging system sees clutter from both sky and ground, and the scene changes rapidly. Background estimation is implemented with anisotropic background prediction; after the residual image is obtained, false targets are rejected using the altitude characteristics of low-altitude intruders, the gray-level distribution characteristics of flying objects, scene prior information, foreground change, and similar cues. Multi-frame accumulation then reduces the false-alarm probability and completes low-altitude intrusion target detection. The detection process is shown in fig. 5.
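The residual-plus-accumulation idea above can be sketched in a few lines of NumPy. Note the hedges: the patent's anisotropic background predictor is replaced here by a plain 5x5 box-mean predictor, and the threshold rule (mean plus k standard deviations) and hit count are illustrative choices, not values from the patent.

```python
import numpy as np

def detect_small_targets(frames, k_sigma=5.0, min_hits=3):
    """Sketch of small-target detection by background prediction,
    residual thresholding, and multi-frame accumulation. The box-mean
    background model stands in for the anisotropic predictor."""
    h, w = frames[0].shape
    hits = np.zeros((h, w), dtype=np.int32)
    for f in frames:
        f = f.astype(np.float64)
        # crude background prediction: 5x5 box mean over each pixel
        pad = np.pad(f, 2, mode="edge")
        bg = sum(pad[i:i + h, j:j + w]
                 for i in range(5) for j in range(5)) / 25.0
        resid = f - bg                      # residual image
        thr = resid.mean() + k_sigma * resid.std()
        hits += (resid > thr).astype(np.int32)
    # keep pixels that exceeded the threshold in at least min_hits frames
    return hits >= min_hits
```

A single bright pixel that persists across frames survives the accumulation step, while one-frame noise spikes do not.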
2) Target tracking
The server returns the target's azimuth and elevation, which are converted into control parameters for the servo subsystem's photoelectric turntable and the measurement television subsystem's camera, specifically the turntable's rotation speed and direction and the camera's focal length, so that the target can be imaged clearly. The target's motion parameters are then estimated and the turntable and camera control parameters adjusted to track the target automatically; the intrusion-target tracking workflow is shown in fig. 6. The flight path of a low-altitude intruder is time-varying: DJI Phantom-series UAVs, for example, can hover, accelerate, and decelerate, while birds behave even less predictably and fly faster. A Kalman prediction-filtering algorithm based on the current statistical model replaces the zero-mean acceleration of the Singer model with an adaptive acceleration mean, on the assumption that the target's acceleration at the next instant lies in a neighborhood of its current acceleration, which greatly improves tracking accuracy and performance. In a binocular vision system, one camera images the low-altitude intruder in detail while the other stares at the target over a wide field of view and tracks it as a point target, which effectively reduces both the difficulty of the tracking design and the cost of the system's turntable design.
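A one-axis sketch of the maneuvering-target filter described above is given below. It is an illustration, not the patent's algorithm: the state is position/velocity/acceleration with a position-only measurement, and the noise-inflation rule tied to the acceleration estimate is a simplified stand-in for the current statistical model's adaptive acceleration variance; the gains, `a_max`, and initial covariance are made-up values.

```python
import numpy as np

class CurrentStatisticalTracker1D:
    """One-axis constant-acceleration Kalman filter whose process noise
    adapts to the current acceleration estimate, so maneuvers such as
    hover/accelerate/decelerate are followed more tightly."""

    def __init__(self, dt, r, a_max=20.0):
        self.dt = dt
        self.F = np.array([[1.0, dt, 0.5 * dt * dt],
                           [0.0, 1.0, dt],
                           [0.0, 0.0, 1.0]])
        self.H = np.array([[1.0, 0.0, 0.0]])   # position measurement only
        self.R = np.array([[r]], dtype=float)
        self.a_max = a_max
        self.x = np.zeros(3)                    # [pos, vel, acc]
        self.P = np.eye(3) * 100.0

    def step(self, z):
        # adaptive process noise: grows with the current |acceleration|
        a_hat = self.x[2]
        sigma2 = max(self.a_max - abs(a_hat), 1e-3) * abs(a_hat) + 1.0
        G = np.array([0.5 * self.dt ** 2, self.dt, 1.0]).reshape(3, 1)
        Q = sigma2 * (G @ G.T)
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + Q
        # update with the position measurement z
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(3) - K @ self.H) @ self.P
        return self.x.copy()
```

Fed noise-free positions of a constant-velocity target, the velocity estimate converges toward the true rate within a few dozen steps.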
3) Feature extraction
To limit the influence of noise on imaging quality and the loss of characteristics, contour features, shape features, statistical features, entropy features, and GMM parameter features are used as the target's identification features and as the input to the SVM classifier. First, a large number of images of UAVs, birds, and similar targets are collected under different states, viewing angles, and illumination conditions and preprocessed as positive samples; a large number of background images are collected as negative samples. The features of the training samples are obtained by target feature extraction and fed into the SVM trainer for training, as shown in fig. 7.
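A toy feature extractor in the spirit of the list above might look as follows. The specific descriptors (bounding-box fill ratio, aspect ratio, intensity statistics, gray-level histogram entropy) and the mean-threshold segmentation are illustrative assumptions; the patent's exact contour descriptors and GMM parameter features are not reproduced here.

```python
import numpy as np

def patch_features(patch, thresh=None):
    """Compute a small feature vector (shape, statistical, and entropy
    features) from a grayscale target patch. Illustrative only."""
    p = patch.astype(np.float64)
    if thresh is None:
        thresh = p.mean()
    mask = p > thresh                      # crude foreground segmentation
    area = int(mask.sum())
    if area > 0:
        ys, xs = np.nonzero(mask)
        bbox_h = ys.max() - ys.min() + 1
        bbox_w = xs.max() - xs.min() + 1
        extent = area / (bbox_h * bbox_w)  # shape: bounding-box fill ratio
        aspect = bbox_w / bbox_h           # shape: aspect ratio
    else:
        extent, aspect = 0.0, 1.0
    hist, _ = np.histogram(p, bins=32)
    prob = hist[hist > 0] / hist.sum()
    entropy = float(-(prob * np.log2(prob)).sum())  # gray-level entropy
    return np.array([area, extent, aspect, p.mean(), p.std(), entropy])
```

Vectors of this kind, stacked over many positive and negative patches, form the training matrix handed to the classifier in the next step.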
4) Object recognition
To identify low-altitude intruders, once target detection has been achieved under a wide field of view, the system zooms according to the initial range obtained from binocular or multi-view ranging, and the tracking camera's focal length is adjusted using an image-sharpness evaluation index. After the intruder is imaged clearly, images are acquired in preparation for identification. A recognition algorithm based on statistical pattern recognition treats target identification as determining the class to which a sample belongs: a classifier must first be trained on a large number of samples, and the trained classifier then performs classification and recognition. An SVM is adopted for target recognition; the intruder recognition flow is shown in fig. 8.
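The train-then-classify loop can be sketched with an off-the-shelf SVM. scikit-learn's `SVC` is used here purely as a stand-in: the patent does not specify an implementation, kernel, or hyperparameters, and the toy two-dimensional feature vectors below are placeholders for the extracted features.

```python
import numpy as np
from sklearn.svm import SVC  # stand-in SVM implementation

def train_target_classifier(pos_feats, neg_feats):
    """Fit an RBF-kernel SVM on labeled feature vectors
    (1 = intruder target, 0 = background). Illustrative sketch."""
    X = np.vstack([pos_feats, neg_feats])
    y = np.hstack([np.ones(len(pos_feats)), np.zeros(len(neg_feats))])
    clf = SVC(kernel="rbf", gamma="scale", C=1.0)
    clf.fit(X, y)
    return clf
```

At run time, the feature vector of a newly detected object is passed to `clf.predict` and the returned label decides whether the track is flagged as an intruding UAV/bird or dismissed as background.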
Example 5
The present embodiment is an optimized embodiment based on embodiment 4, and the resource scheduling of the distributed subsystem:
When a single-station photoelectric detection subsystem detects a low-altitude intruder, it reports its own coordinate information, its system configuration information (including the measurement television subsystem's focal length, sensor-array size, and pixel count, and the servo subsystem's turntable pointing), and its video information to the server. The server executes the target detection, tracking, feature extraction, and identification tasks, coarsely computes the intruder's azimuth and elevation, and, combining this with the relative position distribution of the single-station subsystems, activates the subsystems surrounding the one that first detected the intruder so that the intruder is captured, tracked, and identified jointly.
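One plausible neighbor-activation rule for this scheduling step is a simple distance cut around the first-detecting station, so that at least one binocular pair can be formed. The rule and the 500 m default baseline are assumptions for illustration; the patent does not specify how neighbors are chosen.

```python
import math

def stations_to_activate(stations, detector_id, max_baseline_m=500.0):
    """Hypothetical scheduling rule: return the ids of every other
    station within max_baseline_m of the station that first detected
    the target, as candidates for joint multi-view imaging."""
    x0, y0 = stations[detector_id]
    out = []
    for sid, (x, y) in stations.items():
        if sid == detector_id:
            continue
        if math.hypot(x - x0, y - y0) <= max_baseline_m:
            out.append(sid)
    return sorted(out)
```

In practice the coarse azimuth/elevation reported by the detecting station could further narrow the candidate set to neighbors whose field of regard covers the target bearing.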
Example 6
This example is based on the optimized example of example 5, binocular/multi-ocular vision imaging:
the multi-station photoelectric detection subsystem is used for realizing the position measurement of the invading unmanned aerial vehicle through the binocular/multi-view vision by configuring the working state and visual axis pointing direction of adjacent stations and utilizing the imaging principle of a camera. The multi-station photoelectric detection subsystem is composed of two or more than two single-station photoelectric detection subsystems, relative pose information between the single-station photoelectric detection subsystems is recorded through an optical calibration means in the single-station installation process, and a relative coordinate system of the relative pose information is mapped to a body coordinate system of a protection field, so that multi-station combined monitoring is realized, and multi-purpose monitoring is formed. On one hand, low-altitude monitoring in a wider field range is realized by a multi-sensor fusion method by utilizing multi-view vision until the whole low-altitude range of a monitoring site is covered; in addition, networking detection realizes multi-view vision, and the working mechanism of the multi-view vision is utilized to realize the measurement of the absolute position of the unmanned aerial vehicle, so that flight information such as the speed, the angular speed, the distance and the like of the unmanned aerial vehicle are provided for the display control terminal.
Example 7
The present embodiment is an optimized embodiment based on embodiment 6, and the target video is automatically tracked and forensics:
The acquired target azimuth, elevation, and distance information is converted into the rotation speed and direction of the photoelectric turntable and the camera focal length, so as to image the target clearly and obtain evidence of the low-altitude intrusion event in real time. Image processing then estimates the target motion parameters and adjusts the turntable and camera control parameters, realizing automatic video tracking of the target. Taking tracking in a binocular vision system as an example, one camera performs detail imaging of the low-altitude intruder while the other performs wide-field staring imaging to track the intruder as a point target, effectively reducing the difficulty of the tracking design and the cost of the system's turntable design.
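One step of the conversion from image-plane error to turntable commands can be sketched as a proportional control loop. This is an illustrative sketch only; the gain, field-of-view scale, and rate limit are hypothetical, and a real system would add velocity feed-forward from the estimated target motion parameters.

```python
def track_step(px, py, cx, cy, deg_per_px, k_p=2.0, max_rate=30.0):
    """One step of a simple proportional video-tracking loop: the
    target's pixel offset from the image centre is converted into
    azimuth/elevation rate commands (deg/s) for the turntable."""
    err_az = (px - cx) * deg_per_px   # angular error, degrees
    err_el = (cy - py) * deg_per_px
    clamp = lambda v: max(-max_rate, min(max_rate, v))
    return clamp(k_p * err_az), clamp(k_p * err_el)

# Target 20 pixels right of centre, 0.01 deg per pixel (hypothetical).
rate_az, rate_el = track_step(px=660, py=512, cx=640, cy=512,
                              deg_per_px=0.01)
```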
Example 8
The present embodiment is an optimized embodiment based on embodiment 7, estimation of the unmanned aerial vehicle pilot's position:
Taking binocular vision as an example, the multi-station joint working flow is as follows, as shown in fig. 9. When the unmanned aerial vehicle appears in the field of view of some station, its initial azimuth and elevation in the body measurement coordinate system of the current subsystem are coarsely solved from the optical-axis pointing and the position of the point target on the detector's focal plane. Adjacent stations are then brought into the detection, tracking, and identification of the intruding target according to their relative coordinates: the intruder's initial azimuth and elevation in each adjacent station's measurement coordinate system are obtained by relative coordinate transformation, the stations' working parameters are configured, and the servo subsystems steer the optical-mechanical frames so that the adjacent stations and the current station jointly image the low-altitude intruding unmanned aerial vehicle. After stable imaging is achieved, the angle information of the servo subsystems is read, and the position of the unmanned aerial vehicle pilot is inferred by combining the relative positions between stations with the absolute position of the intruding unmanned aerial vehicle.
For a target of interest at any position, once it enters the monitoring system the distributed sensors perceive its position through information exchange, so the server can select the best-placed sensors for target tracking and monitoring using that position information. A schematic of the unmanned aerial vehicle pilot's localization is shown in fig. 10. The pose of the intruding unmanned aerial vehicle is measured by binocular vision. Suppose the binocular vision system measures the unmanned aerial vehicle above area 1 at time T. The pilot flies the unmanned aerial vehicle through a remote-control data link whose operating range is limited, about 1 km. At time T+1 the unmanned aerial vehicle is measured above area 2, and at time T+2 above area 3. Assuming the pilot moves only a short distance during the three measurements, it follows from the performance index of the detection system and the operating range of the remote-control data link that the pilot must lie in the overlap of the coverage of areas 1, 2, and 3, i.e. the region where the triangle is located.
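The constraint above is geometric: the pilot must be within link range of every measured drone position, so the feasible region is the intersection of the coverage disks. A minimal grid-sampling sketch (the drone positions and 1 km link range are illustrative, matching the ~1 km figure in the text):

```python
import numpy as np

def pilot_region(drone_positions, link_range_m=1000.0, grid_step=25.0):
    """Return grid points that lie within link range of EVERY measured
    drone position, i.e. the intersection of the coverage disks that
    must contain the pilot."""
    pts = np.asarray(drone_positions, float)
    lo, hi = pts.min(0) - link_range_m, pts.max(0) + link_range_m
    xs = np.arange(lo[0], hi[0], grid_step)
    ys = np.arange(lo[1], hi[1], grid_step)
    gx, gy = np.meshgrid(xs, ys)
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    ok = np.ones(len(grid), bool)
    for p in pts:
        ok &= np.hypot(grid[:, 0] - p[0], grid[:, 1] - p[1]) <= link_range_m
    return grid[ok]

# Drone measured at three positions well separated in the plane; the
# feasible pilot region shrinks to the small overlap of the three disks.
region = pilot_region([(0, 0), (1500, 0), (750, 1300)])
```

As the drone positions spread out, the intersection shrinks, tightening the estimate of where the pilot can be standing.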
Example 9
The present embodiment is an optimized embodiment based on embodiment 1, man-machine interaction and comprehensive information display:
The display control terminal adopts a user-friendly interactive design and comprises a functional-area menu bar, a system parameter setting area, a system monitoring information display area, a system comprehensive situation information display area, and a photoelectric video information display area. The display control terminal reads the fused comprehensive target information, map information, and real-time system working-state information from the server and displays them.
The foregoing is illustrative of the preferred embodiments of this invention. It is to be understood that the invention is not limited to the precise forms disclosed herein, and that various other combinations, modifications, and environments may be resorted to within the scope of the concept disclosed herein, whether described above or apparent to those skilled in the relevant art. Modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
In the description of the present invention, it should be noted that the terms "first", "second", "third", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
In the description of the present invention, it should also be noted that, unless otherwise explicitly specified or limited, the terms "disposed," "mounted," and "connected" are to be construed broadly, e.g., as meaning fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; may be wired or wireless.
Claims (7)
1. A distributed intelligent photoelectric low-altitude protection system is characterized by comprising a photoelectric distribution imaging system based on laser active illumination, a server and a display control terminal, wherein the photoelectric distribution imaging system is connected with the server and the display control terminal through a local area network, and the display control terminal is also connected with the server; the photoelectric distribution imaging system comprises a plurality of single-station photoelectric detection subsystems which are arranged in a distributed mode;
the distributed intelligent photoelectric low-altitude protection system comprises the following working steps:
s1, system detection and parameter configuration: the system carries out power-on self-check, enters a working state when no abnormality exists, and carries out working parameter configuration, including the working parameter configuration of each single-station photoelectric detection subsystem and the server;
s2, photoelectric detection of low-altitude targets: the single-station photoelectric detection subsystem performs omnidirectional automatic search and finds a low-altitude intruding target; after receiving the target's flight path or position information returned by the server, the subsystem captures, tracks, and identifies the target, reports the target's pose and feature information to the server, judges whether the target is intercepted, and finally calculates the target's landing position; in this step, as soon as the target enters the detection field of view of a single-station photoelectric detection subsystem, its image is recorded and its position information is marked in real time;
s3, distributed subsystem resource scheduling: the server roughly calculates the azimuth and pitching information of the target according to the target related information reported by the single-station photoelectric detection subsystems, and starts the single-station photoelectric detection subsystems around the single-station photoelectric detection subsystem which detects the target for the first time by combining the relative positions of the single-station photoelectric detection subsystems;
s4. binocular/multi-ocular visual imaging: the multi-station photoelectric detection subsystem consists of two or more than two single-station photoelectric detection subsystems, and realizes the position measurement of a target through binocular or multi-view vision by configuring the working state and visual axis direction of adjacent stations and utilizing the imaging principle of a camera;
s5, automatically tracking and obtaining evidence of the target video: converting the acquired target azimuth, pitching and position information into photoelectric turntable and camera control parameters to realize clear imaging of the target; then, image processing is carried out, target motion parameters are estimated, and control parameters of a photoelectric turntable and a camera are adjusted to realize automatic video tracking of the target;
s6, displaying man-machine interaction and comprehensive information: and the display control terminal reads and displays the fused target comprehensive information, map information and system real-time working state information in the server.
2. The distributed intelligent photoelectric low-altitude protection system according to claim 1, wherein the single-station photoelectric detection subsystem comprises: the system comprises an optical machine frame, a measurement television subsystem, a laser illumination subsystem, a servo subsystem, a master control communication subsystem and an image recording subsystem, wherein the master control communication subsystem is connected with a server, a display control terminal, the optical machine frame, the measurement television subsystem, the laser illumination subsystem and the servo subsystem, the image recording subsystem is connected with the measurement television subsystem and the servo subsystem, and the servo subsystem is further connected with the optical machine frame.
3. The distributed intelligent photoelectric low-altitude protection system according to claim 2, wherein in step S2, the single-station photoelectric detection subsystem performs the following steps:
s21, in the first stage of target flight, the single station photoelectric detection subsystem carries out omnibearing automatic search and finds out the target of low-altitude invasion;
s22, in the second stage of target flight, after the single-station photoelectric detection subsystem receives the track or position information of the target returned by the server, the optical-mechanical frame is quickly turned back to point to the target, and the target is confirmed and captured;
s23, in the third stage of target flight, the single-station photoelectric detection subsystem stably tracks the target, establishes target flight information, identifies and confirms the target, reports the target's pose, feature information, situation and threat level, flight path and flight direction, and extrapolates the position and time at which the target enters the fourth stage;
s24, in the fourth stage of target flight, whether the target is intercepted or not is confirmed, if interception countermeasures are needed, the single-station photoelectric detection subsystem needs to observe and record the whole interception process besides keeping target tracking and information output in the stage;
and S25, calculating the landing position of the target, wherein when the pitch angle of the target is less than a certain value, the target is indicated to have landed or is in a landing state.
4. The distributed intelligent photoelectric low-altitude protection system according to claim 2, wherein in step S2, the server performs the following steps:
target detection: background estimation is realized by adopting anisotropic background prediction, false targets are further removed by utilizing the height characteristics, gray distribution characteristics, scene prior information and foreground change information of low-altitude targets after residual images are obtained, and then the false alarm probability is reduced by multi-frame accumulative detection;
target tracking: the server returns the target's azimuth and elevation information, which is converted into the camera focal length of the measurement television subsystem and the rotation speed and direction of the servo subsystem's photoelectric turntable; the target motion parameters are then estimated, the camera focal length and turntable rotation speed and direction are adjusted, and the target is tracked in real time;
feature extraction: collecting a large number of images of unmanned aerial vehicles and flying birds under different states, angles and illumination conditions, taking the images as target images, preprocessing the target images to serve as positive samples, and collecting a large number of background images to serve as negative samples; then, adopting contour features, shape features, statistical features, entropy features and GMM parameter features as identification features of the target;
target identification: and inputting the recognition features of the target into an SVM classifier for training, and classifying and recognizing the target by using the trained SVM classifier.
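The feature-extraction and SVM-classification pipeline of claim 4 can be sketched as follows. This is an illustrative sketch using scikit-learn's `SVC`, not the patented implementation: the feature vectors here are random placeholders standing in for the concatenated contour, shape, statistical, entropy, and GMM-parameter features, and the sample counts are arbitrary.

```python
import numpy as np
from sklearn.svm import SVC  # requires scikit-learn

rng = np.random.default_rng(0)
# Placeholder 8-dimensional feature vectors: positive samples (drones)
# and negative samples (birds/background), drawn from separated clusters.
drones = rng.normal(loc=1.0, scale=0.3, size=(200, 8))
birds = rng.normal(loc=-1.0, scale=0.3, size=(200, 8))
X = np.vstack([drones, birds])
y = np.array([1] * 200 + [0] * 200)

clf = SVC(kernel="rbf").fit(X, y)  # train the SVM classifier
# Classify a new target's feature vector with the trained classifier.
pred = clf.predict(rng.normal(1.0, 0.3, size=(1, 8)))
```

In practice the positive and negative samples would come from the preprocessed image collections described above, gathered under varied states, angles, and illumination conditions.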
5. The distributed intelligent photoelectric low-altitude protection system according to claim 2, wherein in step S2, the content reported to the server by the single-station photoelectric detection subsystem further includes, in addition to the pose and feature information of the target: coordinate information and system configuration information of the single-station photoelectric detection subsystem, and video information of the target.
6. The distributed intelligent photoelectric low-altitude protection system according to claim 5, wherein the system configuration information of said single-station photoelectric detection subsystem includes the system focal length, array surface size, and pixel number of the measurement television subsystem, and the photoelectric turntable orientation of the servo subsystem.
7. The distributed intelligent photoelectric low-altitude protection system according to claim 1, wherein the display control terminal comprises a functional area menu bar, a system parameter setting area, a system monitoring information display area, a system comprehensive situation information display area and a photoelectric video information display area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811586162.2A CN109708659B (en) | 2018-12-25 | 2018-12-25 | Distributed intelligent photoelectric low-altitude protection system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109708659A CN109708659A (en) | 2019-05-03 |
CN109708659B true CN109708659B (en) | 2021-02-09 |
Family
ID=66257459
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811586162.2A Active CN109708659B (en) | 2018-12-25 | 2018-12-25 | Distributed intelligent photoelectric low-altitude protection system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109708659B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110672118B (en) * | 2019-11-04 | 2022-04-12 | 中国人民解放军空军工程大学 | Moving target track acquisition method based on single observation whistle digital telescope |
CN110895332B (en) * | 2019-12-03 | 2023-05-23 | 电子科技大学 | Distributed tracking method for extended target |
CN112070820A (en) * | 2020-08-29 | 2020-12-11 | 南京翱翔信息物理融合创新研究院有限公司 | Distributed augmented reality positioning terminal, positioning server and positioning system |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE202010012985U1 (en) * | 2010-11-25 | 2012-02-27 | Sick Ag | Sensor arrangement for object recognition |
CN102625482B (en) * | 2012-03-09 | 2014-09-17 | 中国科学院上海微系统与信息技术研究所 | Network system framework of low altitude target detection sensor |
CN104569915A (en) * | 2015-01-15 | 2015-04-29 | 中国电子科技集团公司第二十八研究所 | Positioning method used in multiple photoelectric detection systems and based on target movement model |
CN105137421A (en) * | 2015-06-25 | 2015-12-09 | 苏州途视电子科技有限公司 | Photoelectric composite low-altitude early warning detection system |
CN107862705B (en) * | 2017-11-21 | 2021-03-30 | 重庆邮电大学 | A small target detection method for unmanned aerial vehicles based on motion features and deep learning features |
CN108062516B (en) * | 2017-12-04 | 2020-09-11 | 广东核电合营有限公司 | Low-altitude airspace management and control method, device and system |
CN108037543B (en) * | 2017-12-12 | 2019-07-26 | 河南理工大学 | A multispectral infrared imaging detection and tracking method for monitoring low-altitude unmanned aerial vehicles |
CN108333584A (en) * | 2017-12-28 | 2018-07-27 | 陕西弘毅军民融合智能科技有限公司 | A kind of remote unmanned plane detection system of low altitude small target and detection method |
- 2018-12-25: CN application CN201811586162.2A filed; patent CN109708659B granted, status Active
Also Published As
Publication number | Publication date |
---|---|
CN109708659A (en) | 2019-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11017228B2 (en) | Method and arrangement for condition monitoring of an installation with operating means | |
CN111679695B (en) | Unmanned aerial vehicle cruising and tracking system and method based on deep learning technology | |
CN114373138A (en) | Full-automatic unmanned aerial vehicle inspection method and system for high-speed railway | |
CN108957445A (en) | A kind of low-altitude low-velocity small targets detection system and its detection method | |
CN110133573A (en) | A kind of autonomous low latitude unmanned plane system of defense based on the fusion of multielement bar information | |
CN108008408B (en) | Search and track imaging method, apparatus and system | |
CN105758397B (en) | A kind of aircraft camera positioning method | |
CN105730705B (en) | A kind of aircraft camera positioning system | |
CN109708659B (en) | Distributed intelligent photoelectric low-altitude protection system | |
CN110244314A (en) | One kind " low slow small " target acquisition identifying system and method | |
CN109816702A (en) | A kind of multiple target tracking device and method | |
CN106965946A (en) | A kind of method and apparatus that landing security is improved based on detection obstacle | |
CN114034296A (en) | Navigation signal interference source detection and identification method and system | |
CN105810023B (en) | Airport undercarriage control automatic monitoring method | |
CN117950422B (en) | Unmanned aerial vehicle inspection system and inspection method | |
CN115035470A (en) | A method and system for low, small and slow target recognition and localization based on hybrid vision | |
CN111319502A (en) | Unmanned aerial vehicle laser charging method based on binocular vision positioning | |
CN116202489A (en) | Method and system for co-locating power transmission line inspection machine and pole tower and storage medium | |
Coluccia et al. | The drone-vs-bird detection grand challenge at icassp 2023: A review of methods and results | |
CN115932834A (en) | Anti-unmanned aerial vehicle system target detection method based on multi-source heterogeneous data fusion | |
CN115328201A (en) | Intelligent capturing device and intelligent capturing method for black-flying unmanned aerial vehicle | |
CN115150591B (en) | Regional video monitoring method based on intelligent algorithm | |
CN115861366B (en) | Multi-source perception information fusion method and system for target detection | |
CN113496514B (en) | Data processing method, monitoring system, electronic equipment and display equipment | |
CN119382350B (en) | Electric power corridor air hidden danger wide area monitoring method and related device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||