
CN101214851A - Intelligent all-weather active safety early warning system and early warning method thereof for ship running - Google Patents

Intelligent all-weather active safety early warning system and early warning method thereof for ship running

Info

Publication number
CN101214851A
CN101214851A · CNA2008100692312A · CN200810069231A
Authority
CN
China
Prior art keywords
image
ship
processing unit
target
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008100692312A
Other languages
Chinese (zh)
Other versions
CN101214851B (en)
Inventor
黄席樾 (Huang Xiyue)
刘俊 (Liu Jun)
黄瀚敏 (Huang Hanmin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHONGQING ANCHI TECHNOLOGY Co Ltd
Original Assignee
CHONGQING ANCHI TECHNOLOGY Co Ltd
Priority date
Filing date
Publication date
Application filed by CHONGQING ANCHI TECHNOLOGY Co Ltd filed Critical CHONGQING ANCHI TECHNOLOGY Co Ltd
Priority to CN2008100692312A priority Critical patent/CN101214851B/en
Publication of CN101214851A publication Critical patent/CN101214851A/en
Application granted granted Critical
Publication of CN101214851B publication Critical patent/CN101214851B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A — TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A30/00 — Adapting or protecting infrastructure or their operation
    • Y02A30/30 — Adapting or protecting infrastructure or their operation in transportation, e.g. on roads, waterways or railways

Landscapes

  • Traffic Control Systems (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present invention discloses an intelligent all-weather active safety early warning system for ship running and an early warning method thereof. The early warning system consists of a target sampling unit, a system central processing unit, a display unit, a single chip microcomputer and a safety early warning unit. The early warning method comprises the following steps: a. the system is started; b. the target is sampled and the digital signal is processed; c. a system decision-making unit takes the information from step a, combines it with the information from step b, and performs active safety early warning decision analysis and calculation to obtain a final avoidance scheme; a display and an acousto-optic warning terminal then carry out the active safety warning simultaneously. Under conditions of poor visibility, the present invention greatly improves ship operators' awareness of the navigation environment and the ship's active safe-navigation guarantee capability, and helps operators make decisions, thereby reducing ship-handling errors, improving the success rate of active collision avoidance, reducing or avoiding traffic accidents such as ship collision, ship-bridge collision and grounding, and ensuring the safety of navigation and transportation.

Description

Intelligent all-weather active safety early warning system for ship running and early warning method thereof
The invention relates to an early warning system, in particular to an intelligent all-weather active safety early warning system for ship running and an early warning method thereof.
Background technology: with the enormous development of shipping in China, the risk of shipping accidents has grown. Ships are developing toward larger sizes and higher speeds; the number of ships, the traffic density on waterways and the volume of dangerous goods carried are continuously increasing; and marine casualty accidents occur from time to time, causing large numbers of casualties and heavy economic losses and seriously polluting waterways and the natural environment.
Statistical analysis of a large number of ship-ship and ship-bridge collision accidents shows that the causes fall mainly into three categories: the first is human error; the second is mechanical failure; the third is a harsh natural environment. More than 85 percent of accidents occur under conditions of poor visibility.
One of the keys to successful ship collision avoidance is acquiring accurate information about dangerous targets such as other ships. With radar, a target vessel's course can be determined over time by monitoring changes in its azimuth and distance. However, radar used alone has low accuracy over short time spans, so a change of course is difficult to detect immediately; moreover, because radar radiates high-power electromagnetic waves when operating, it is vulnerable to electronic interference, and its angle-measurement accuracy is low. In addition, since many ships carry no AIS or GPS, or carry GPS equipment with incompatible communication protocol interfaces, such targets cannot be detected through AIS or GPS.
Infrared imaging equipment has strong anti-interference capability, strong adaptability to climatic conditions, and continuous passive detection by day and night. In particular, it can overcome the difficulty of detecting radar targets amid clutter interference. As the technology matures, prices are gradually reaching levels suitable for the civilian market; uncooled infrared focal plane array thermal imagers are the mainstay of small, low-cost applications. Judging from current trends, the market is poised for a new turn, and within 5-10 years infrared imaging is expected to become a hot industry comparable to today's CCD visible-light imaging technology. Given its advantages in target detection, tracking and identification, infrared imaging equipment, used as a supplement to the visual information of radar, GPS, AIS and similar equipment, can certainly play an important role in shipping and transportation systems.
Infrared imaging equipment installed in key areas such as ports, docks, bridges, wharves, dangerous river sections, gates and restricted areas, and on various ocean-going and inland-river ships, can detect and monitor ship navigation and play an active role in safe navigation and collision avoidance. In addition, infrared image information can be fused with information from other detection equipment to provide more reliable data for ship navigation. This greatly improves ship operators' awareness of the surrounding navigation environment, improves the ship's active safe-navigation guarantee capability under poor visibility, assists operators in decision-making, reduces ship-handling errors, greatly improves the success rate of active collision avoidance, safeguards life and property, reduces or avoids accidents that seriously pollute waterways and the natural environment, and ensures the safety of navigation and transportation.
The invention aims to overcome the defects of the prior art by providing a civil product for installation in key areas such as ports, docks, bridges, wharves, dangerous river sections, gates and restricted areas, and on various ocean-going and inland-river ships, in particular large pleasure boats, passenger ships, ro-ro ships, container ships, oil tankers, dangerous-goods transport ships, and ships for water search-and-rescue and law enforcement. The product is small, fully functional, inexpensive, easy to install, and has an intelligent decision-control function. In severe conditions such as night, rain and fog it can effectively supervise multiple surrounding moving-ship, bridge, pier and reef targets, greatly improve personnel's awareness of the surrounding navigation environment, improve the ship's active safe-navigation guarantee capability under poor visibility, assist personnel in decision-making, and reduce ship-handling errors. The intelligent all-weather active safety early warning system for ship running and its early warning method can greatly improve the success rate of active collision avoidance, greatly reduce or avoid collision-type traffic accidents, safeguard life and property, reduce or avoid accidents that seriously pollute waterways and the natural environment, and ensure the safety of navigation and transportation.
In order to achieve the purpose, the invention adopts the following technical scheme:
the intelligent all-weather active safety early warning system for ship running is composed of a target sampling unit, a system central processing unit, a display unit, a single chip microcomputer and a safety early warning unit;
the target sampling unit consists of an outdoor intelligent high-speed holder with a decoder and a thermal infrared imager with a video acquisition card, the thermal infrared imager is fixed on the outdoor intelligent high-speed holder, the outdoor intelligent high-speed holder is connected with the system central processing unit through the decoder, and the thermal infrared imager is connected with the system central processing unit through the video acquisition card;
the system central processing unit comprises a keyboard, an infrared video image digital signal processing unit and a system decision processing unit;
wherein,
the keyboard is used for inputting information and control instructions to the system decision processing unit;
the infrared video image digital signal processing unit is used for receiving the digital image signals of the target sampling unit, calculating the position, the azimuth angle, the speed and the acceleration of the target by utilizing a processing and analyzing device in the memory according to the digital image signals, and sending the calculation result to the system decision processing unit through the serial communication interface for use;
the system decision processing unit is used for carrying out active safety early warning decision analysis and calculation on the information from the digital signal processing unit in combination with the information input by the keyboard information input unit to obtain a final avoidance scheme, and then carrying out active safety early warning simultaneously through the display unit and the warning unit;
the display unit is used for displaying images with the targets around the system's installation position marked, together with the azimuth angles, speeds and accelerations between those targets and the installation position, and for displaying the recommended avoidance scheme and danger level;
the single chip microcomputer is used for receiving the decision result of the system decision processing unit and controlling the alarm mode of the alarm unit;
and the alarm unit is used for receiving the control signal of the single chip microcomputer and conveying the danger level and danger direction with sound and light modes of different frequencies.
The system adopts an integrated design. It detects, in real time, the distance, direction, speed and acceleration between the system's installation position and multiple surrounding moving-ship, bridge, pier and reef targets; the intelligent analysis and processing of the central processing unit automatically makes danger early warning decisions and automatically raises alarms in user-friendly forms such as visual video, text, sound, light and electricity, finally achieving all-weather active safety early warning for ships.
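As a sketch of the kinematic quantities mentioned above (the patent does not specify the exact estimator, so the finite-difference scheme and function name here are illustrative assumptions), a target's speed and acceleration can be derived from its successive tracked positions:

```python
import math

def kinematics_from_track(positions, dt):
    """Estimate a tracked target's speed and acceleration from its
    successive (x, y) positions sampled every dt seconds, using
    finite differences (an assumed, simplified estimator)."""
    # Speed between consecutive position samples
    speeds = [math.hypot(x1 - x0, y1 - y0) / dt
              for (x0, y0), (x1, y1) in zip(positions, positions[1:])]
    # Acceleration as the change in speed per sample interval
    accels = [(v1 - v0) / dt for v0, v1 in zip(speeds, speeds[1:])]
    return speeds, accels

# A target moving steadily at 5 m/s along x, sampled once per second
speeds, accels = kinematics_from_track(
    [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)], dt=1.0)
```

In practice positions would come from the image-based target localization described above, and a smoothing filter would likely be applied before differencing.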
Preferably, the infrared video image digital signal processing unit consists of 2 floating-point digital signal processors (TMS320C6713), both connected to the video acquisition card of the thermal infrared imager; each TMS320C6713 subsystem includes an image memory, a programmable logic device, a program memory and a controller, with a processing and analyzing device stored in the program memory. The TMS320C6713 is small, low-cost, highly stable and offers good real-time performance; it can acquire and process 4 video channels simultaneously, and its strong processing capability, flexibility and programmability meet the requirements of target detection, tracking and identification algorithms well.
Preferably, a large-field-of-view image registration and stitching device is arranged in the memory of the infrared video image digital signal processing unit; it automatically stitches the infrared sequence images into a panoramic view and sends the stitched panorama to the system decision processing unit for storage. The device realizes automatic panorama stitching of sequence images by estimating inter-frame transformation parameters with a global motion estimation method. During motion parameter estimation, pyramid layered block-matching motion vector estimation is adopted, which effectively improves the running speed of the program, and an elimination operation on abnormal blocks is added, which greatly improves the precision of the global motion estimation. In addition, when the light intensity differs between images, a light-balancing operation is applied to achieve a better stitching result. The method can accurately and quickly generate the large-field-of-view image of a video sequence, enlarging the field of view of video monitoring. The automatically stitched panorama sent to the system decision processing unit for storage can be used for later analysis, evaluation, and accident-cause analysis.
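The block-matching motion estimation with abnormal-block elimination described above can be illustrated by the following single-level sketch (the pyramid layers and the light-balancing step are omitted; function and parameter names are illustrative, and the median acts as the outlier-rejection step):

```python
import numpy as np

def global_translation(prev, curr, block=16, search=4):
    """Estimate the dominant inter-frame translation by block
    matching: each block votes with its best SAD offset, and the
    median over blocks discards abnormal-block outliers. A
    single-level sketch of the pyramid scheme in the text."""
    h, w = prev.shape
    vectors = []
    # Keep a margin of `search` so every candidate window is in bounds
    for by in range(search, h - block - search + 1, block):
        for bx in range(search, w - block - search + 1, block):
            ref = prev[by:by + block, bx:bx + block].astype(float)
            best_sad, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = curr[by + dy:by + dy + block,
                                bx + dx:bx + dx + block].astype(float)
                    sad = np.abs(ref - cand).sum()  # sum of absolute differences
                    if best_sad is None or sad < best_sad:
                        best_sad, best_v = sad, (dy, dx)
            vectors.append(best_v)
    # Median over block vectors rejects abnormal-block outliers
    med = np.median(np.array(vectors), axis=0)
    return float(med[0]), float(med[1])
```

A full implementation would run this coarse-to-fine over an image pyramid and compose the per-level estimates into the inter-frame transformation used for stitching.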
Preferably, the system decision processing unit is a PC. An ordinary PC is inexpensive, powerful and has convenient interfaces; it can meet the system's decision-calculation requirements, connects easily to each part of the system, and, thanks to its large hard-disk capacity, can fully store the infrared video images collected by the target sampling unit and the panoramas automatically stitched by the image registration and stitching device.
Preferably, the display unit consists of 2 displays. One display shows images of the multiple marked targets around the installation site, together with related text information: the distance, azimuth angle, speed and acceleration between the installation position of the intelligent all-weather active safety early warning system for ship running and the surrounding ship, bridge, pier and reef targets. The other display shows a textual description of the recommended avoidance maneuver: whether to adopt variable-speed yielding and/or steering yielding, plus speed, direction, danger level, danger direction and alarm mode. Using two displays and the real-time panoramic image, the system effectively detects, identifies and tracks multiple targets such as meeting ships, bridges (piers) and surface reefs in the area ahead of the running ship. The marked image display lets the user operate the system intuitively, while the text description makes it easy to evaluate system performance, meeting different user needs.
Preferably, the alarm unit is a single chip microcomputer connected to the serial communication interface of the system decision processing unit, controlling an external loudspeaker and/or external alarm lamp. The system automatically alarms using a combination of user-friendly video, sound, light and electrical modes, improving ship operators' awareness of the navigation environment and assisting collision avoidance decisions, thereby improving the success rate of ship collision avoidance.
Preferably, the target sampling unit is further equipped with a radar; a visible-light image sensor with a video capture card is arranged in front of the radar screen and connected, through the capture card, to a visible-light video image digital signal processing unit in the system central processing unit. That unit likewise consists of 2 floating-point digital signal processors (TMS320C6713), each comprising an image memory, a programmable logic unit, a program memory holding a processing and analyzing device, and a controller. The system central processing unit is also provided with an information fusion unit, a high-speed real-time digital signal processor ADSP21060, which fuses the information from the infrared and visible-light video image digital signal processing units to obtain the final distance, azimuth angle, speed and acceleration of each target, marks the targets in the image, and sends the results to the system decision processing unit through the serial communication interface. In this way the ship's original radar navigation system is left intact: the radar's use is unaffected and no electrical circuits are changed; only a visible-light sensor is added to the existing radar, which greatly simplifies acquiring multi-target information from it. The ADSP21060 integrates a 4 Mbit dual-port static memory on chip and has a dedicated peripheral I/O bus, so the main functions of a digital signal processing system are integrated on one chip, a single-chip system is easy to form, and circuit-board size is reduced.
An on-chip high-speed instruction cache allows pipelined execution, so each instruction completes in a single cycle (25 ns); the peak floating-point rate is 120 MFLOPS (million floating-point operations per second) and the sustained rate is 80 MFLOPS. The peripheral I/O bus controller provides six high-speed link ports and two synchronous serial ports, and many ADSP21060 chips can be joined through these links into a loosely coupled parallel processing system. The processor also has 3 interrupt pins, 4 flag pins and MS0-MS3 chip-select pins, making its interface to peripheral equipment simple. In short, the ADSP21060's huge address space, powerful addressing modes, 48-bit very long instruction word and floating-point capability fully meet the real-time requirements of the fusion device's software algorithms.
Preferably, the intelligent all-weather active safety early warning system for ship running is installed on a bridge, port, dock, wharf boat, dangerous river section, restricted area, gate or ship. When installed on a ship, the display unit also shows images of the multiple marked targets around the ship, together with related text information: the ship's model and size, its control performance parameters, load, heading and real-time speed, and the distance, azimuth, speed and acceleration of the surrounding ship, bridge, pier and reef targets. When fixed in key areas of oceans and rivers such as ports, wharves, bridges and restricted areas, the system uses infrared video image processing to detect and identify moving ships passing through, tracks and locates them, judges their future direction of movement, and monitors running ships in real time. When a ship enters the early warning area, the computer evaluates it; if its future movement points toward bridge piers or restricted areas, or threatens important equipment and facilities in key port and wharf areas, monitoring personnel are alerted and the ship's drivers are warned by broadcast, so that accidents are avoided.
The early warning method for realizing the intelligent all-weather active safety early warning system for ship running is characterized by comprising the following steps of:
a. starting system
Start the intelligent all-weather active safety early warning system for ship running and input information and control instructions from the keyboard. If the system is installed on a ship, the ship's model, size, control performance parameters and load are entered from the keyboard, while the heading and real-time speed from the ship's GPS (global positioning system) are simultaneously sent to the system decision processing unit through the serial communication interface;
b. target sampling and digital signal processing
The infrared video image target sampling and digital signal processing works as follows. Driven by the pan-tilt head, the thermal infrared imager scans its surroundings at a specified time interval and angle step, photographing the surrounding water surface. The video acquisition card converts each image into a digital signal and sends it to the programmable logic device in the infrared video image digital signal processing unit for timing conversion and bus control; the control-line signals and image information are sent to the image memory in the digital signal processing unit for storage, and on receipt the image memory returns a confirmation to the programmable logic device. The processing and analyzing device in the program memory then extracts the image from the image memory, processes and analyzes it, and passes the result to the controller of the digital signal processing unit, which acknowledges the video acquisition card. Finally, the azimuth, speed and acceleration information of the multiple targets around the installation position, namely ships, bridges, piers and reefs, is sent to the system decision unit through the serial communication interface;
c. the system decision unit obtains the information from a, and in combination with the information from b, the system decision unit performs active safety early warning decision analysis and calculation to obtain a final avoidance scheme, and then performs active safety early warning through a display and an acousto-optic warning terminal at the same time;
wherein:
the display shows images of the multiple marked targets around the installation site, together with related text information: the distance, azimuth angle, speed and acceleration between the installation position of the system and the surrounding target ships, bridges, piers and reefs. If the system is installed on a ship, the display additionally shows the ship's model and size, its control performance parameters, load, heading and real-time speed, and a text description of the recommended avoidance scheme: whether to adopt variable-speed yielding and/or steering yielding, and with what speed and direction;
intelligent collision-avoidance active safety early warning is then carried out on this information: if danger exists, the danger level is judged, sound and light modes of different frequencies convey the danger level and danger direction through the acousto-optic alarm terminal, and a single chip microcomputer connected to the serial communication interface of the system decision processing unit controls the external loudspeaker or alarm lamp.
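A minimal sketch of mapping danger levels to "sound and light modes with different frequencies"; the specific levels and frequency values are assumptions, not taken from the patent:

```python
def alarm_pattern(danger_level):
    """Map a danger level to distinct sound/light pulse frequencies.
    Levels and Hz values are illustrative assumptions; on the real
    system the single chip microcomputer would drive the loudspeaker
    and alarm lamp at these rates."""
    patterns = {
        1: {"beep_hz": 1, "flash_hz": 1},   # low danger: slow pulses
        2: {"beep_hz": 2, "flash_hz": 2},   # medium danger
        3: {"beep_hz": 5, "flash_hz": 5},   # high danger: rapid pulses
    }
    # Unknown levels are treated as highest danger, a safe default
    return patterns.get(danger_level, patterns[3])
```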
Preferably, the step b further comprises image registration and stitching, the sequence images are automatically stitched into a panoramic view, and the automatically stitched panoramic view is sent to the system decision processing unit for storage. The large-field-of-view image registration and splicing device realizes automatic splicing of panoramic views of sequence images by estimating interframe transformation parameters by using a global motion estimation method. During motion parameter estimation, pyramid layered block matching motion vector estimation is adopted, the running speed of a program is effectively improved, and elimination operation on abnormal blocks is added, so that the precision of global motion estimation is greatly improved. The method can accurately and quickly generate the large-view-field image of the video sequence, so that the view field range of video monitoring is enlarged. And the automatically spliced panoramic view is sent to a system decision processing unit for storage, and can be used for later analysis and evaluation and accident reason analysis.
Preferably, the image processing and analysis in step b is performed according to the following steps:
Firstly, infrared image preprocessing is performed, including image denoising, image enhancement and sharpening, image correction and motion background correction. An infrared image is a superposition of the real scene image, imaging noise and imaging interference. Assuming f(x, y) denotes the infrared image acquired by the imaging system, the infrared scene image containing the target may be expressed as:
f(x, y) = f_T(x, y) + f_B(x, y) + n(x, y) + n_1(x, y)
where f_T(x, y) is the target gray value, f_B(x, y) is the background image, n(x, y) represents imaging noise, and n_1(x, y) represents imaging interference. The background image f_B(x, y) typically has a long correlation length and occupies the low-frequency part of the spatial frequency spectrum of the scene image f(x, y). At the same time, because of non-uniformity of the thermal distribution inside the scene and the sensor, f_B(x, y) is a non-stationary process whose local gray values may vary significantly; in addition, f_B(x, y) also includes some high-frequency components of the spatial frequency domain, mainly distributed at the edges of the homogeneous regions of the background image.
Imaging noise n(x, y) is introduced during imaging and consists of small disturbances superimposed at random positions of the infrared image. Imaging interference n_1(x, y) results from response non-uniformity or dead pixels of the infrared photosensor, which produce erroneous image data points at random or fixed locations in the infrared image. Thus the imaging interference n_1(x, y) appears as isolated points whose pixel gray values are much larger or smaller than the median of their surrounding neighborhood, and the purpose of infrared image denoising is to estimate the real scene image from the image f.
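Since n_1(x, y) appears as isolated points far from the median of their neighborhood, one natural denoising step is a conditional median replacement. This is a hypothetical sketch: the text states the property of the interference but not the exact filter, and the threshold value is an assumption:

```python
import numpy as np

def remove_isolated_points(img, thresh=50):
    """Suppress imaging interference n_1(x, y): pixels whose gray
    value differs from the median of their 3x3 neighborhood by more
    than `thresh` (an assumed value) are replaced by that median.
    Border pixels are left unchanged for simplicity."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = img[y - 1:y + 2, x - 1:x + 2].astype(float)
            med = float(np.median(window))
            if abs(float(img[y, x]) - med) > thresh:
                out[y, x] = med  # isolated point: replace by median
    return out
```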
Motion background correction: because image sensors are sometimes mounted on a moving platform, and even a stationary platform may be disturbed and cause sensor jitter, the background may jitter. This is handled by a global motion parameter estimation technique: the motion parameters of the background are first estimated, and then used to correct the background, bringing the multi-frame images into the same coordinate system.
The image region is then directly segmented, after which the features of bridges, piers and reefs are extracted for effective tracking and identification;
the method comprises the following steps of respectively adopting two different algorithm combinations to extract the waterline, wherein the extraction algorithm combination of the first possible waterline sequentially comprises the following steps: image iteration threshold segmentation, Roberts gradient operator edge detection, refinement and Hough transformation extraction of a first sky line; the combination of the extraction algorithm of the second possible waterline comprises the following steps in sequence: detecting the edge of a Roberts gradient operator, binarizing, refining, and extracting a second sky line by using Hough transformation; taking one of two possible sky waterlines close to the lower end of the image as a finally extracted sky waterline, judging the credibility of the sky waterline, and then taking an image area in a certain range above and below the sky waterline as an ROI (region of interest);
Based on the positional relation between the sky-water line and ship targets, a ship target image generally lies in the sky-water line area; this follows from mid-to-long-range planar imaging. A target lies neither in the sky region completely away from the sky-water line, nor in other areas such as land or canyons. Therefore, once the sky-water line is correctly detected, the image area within a certain range above and below it is taken as the region of interest (ROI); subsequent detection, tracking and identification of infrared ship targets then operate on a greatly reduced image range, largely avoiding interference from clouds, water waves, land, and high-radiation canyon areas, so that the computational load of the various algorithms drops substantially and their real-time requirements are met.
The sky-water line extraction process adopted by the invention is as follows: first, the image quality of the original image is evaluated and, according to the result, it is decided whether to preprocess the image; then two different algorithm combinations are used simultaneously to extract candidate sky-water lines. The first combination comprises, in sequence: image iterative threshold segmentation, Roberts gradient operator edge detection, thinning, and Hough-transform extraction of a first sky-water line. The second combination comprises, in sequence: Roberts gradient operator edge detection, binarization, thinning, and Hough-transform extraction of a second sky-water line. After both candidates have been extracted, the one closer to the lower end of the image is taken as the final sky-water line.
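The Roberts-plus-Hough stage can be sketched as follows, under the simplifying assumption that the sky-water line is horizontal, so the Hough accumulator degenerates to one vote count per row. The full method in the text also applies iterative thresholding, thinning and a general Hough transform; the threshold value here is an assumption:

```python
import numpy as np

def roberts_edges(img):
    """Roberts gradient magnitude: the cross-difference edge
    operator named in the text."""
    f = img.astype(float)
    gx = f[:-1, :-1] - f[1:, 1:]   # main-diagonal difference
    gy = f[:-1, 1:] - f[1:, :-1]   # anti-diagonal difference
    return np.hypot(gx, gy)

def sky_water_line_row(img, edge_thresh=20):
    """Locate the sky-water line assuming it is horizontal: binarize
    the Roberts edge map, then take the row with the most edge
    pixels -- a degenerate Hough transform restricted to zero-slope
    lines (a simplifying assumption, not the full method)."""
    edges = roberts_edges(img) > edge_thresh
    votes = edges.sum(axis=1)   # Hough accumulator over rows
    return int(np.argmax(votes))
```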
Image quality evaluation: the mean square error (MSE) of the image is used to evaluate image quality and to decide whether to preprocess the image.
if MSE > K then perform image preprocessing
if MSE ≤ K then skip image preprocessing
Take K = 25.
First-stage image preprocessing: in the sky and ground region above the sky-water line of an infrared image there may be bridges, buildings on shore, continuous rock faces, and similar objects. Their gray values generally sit at the highest gray levels of the image, and the gradient between them and their surroundings is often larger than the gradient on either side of the sky-water line; in particular, when such objects form a continuous, nearly horizontal linear distribution in the image, existing algorithms cannot extract the sky-water line correctly. The following method is adopted to eliminate this high-brightness interference. Let f(x, y) be the gray value of a pixel, let the image contain M×N pixels in total, and let R be the proportion of high-brightness pixels to be eliminated. Let f_{M×N×R}(x, y) denote the gray value of the (M×N×R)-th pixel when gray values are sorted from high to low, let r denote the lowest gray value among the eight neighbours of pixel (x, y), and let g(x, y) be the gray value of the pixel after preprocessing. Then
if f(x, y) ≥ f_{M×N×R}(x, y) then g(x, y) = r
if f(x, y) < f_{M×N×R}(x, y) then g(x, y) = f(x, y)
This step only reduces the brightness of a small fraction of high-brightness pixels, so accurate extraction of the sky-water line is not affected. R is taken as 0.05.
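The first-stage rule above can be sketched as follows (a minimal NumPy sketch; the function name and the toy parameters are illustrative, not from the patent):

```python
import numpy as np

def suppress_bright_targets(f, R=0.05):
    """First-stage preprocessing sketch: pixels among the brightest
    M*N*R of the image are replaced by the minimum gray value of
    their 8-neighbourhood; all other pixels are left unchanged."""
    M, N = f.shape
    # Gray value of the (M*N*R)-th pixel when sorted from high to low.
    k = max(1, int(M * N * R))
    thresh = np.sort(f, axis=None)[::-1][k - 1]
    # Edge-pad so border pixels also have 8 neighbours.
    p = np.pad(f, 1, mode='edge')
    # Minimum over the 8 neighbours (centre excluded).
    shifts = [p[i:i + M, j:j + N]
              for i in range(3) for j in range(3) if not (i == 1 and j == 1)]
    r = np.minimum.reduce(shifts)
    return np.where(f >= thresh, r, f)
```

Only the clamped pixels change, so a single bright outlier (e.g. a rock face) is pulled down to its local surroundings without disturbing the rest of the image.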
Second-stage image preprocessing: in the water-surface region below the sky-water line of an infrared image, strong water-wave interference is the other main cause of extraction failure. Strong water waves occupy many of the water-surface pixels, and their gray values are distributed near the mean of the whole image, so they are removed as follows: compute the image mean fmean, and let h(x, y) be the gray value of a pixel after the second preprocessing stage; then
if f(x, y) > fmean then h(x, y) = f(x, y) − fmean
if f(x, y) ≤ fmean then h(x, y) = 0
After the second preprocessing stage, most strong water-wave interference is suppressed. As in the first stage, only the brightness of a fraction of the wave-interference pixels is reduced, so accurate extraction of the sky-water line is not affected.
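The second-stage rule is a one-line thresholded mean subtraction (a minimal NumPy sketch; the function name is illustrative):

```python
import numpy as np

def suppress_water_waves(f):
    """Second-stage preprocessing sketch:
    h = f - fmean where f > fmean, and 0 elsewhere."""
    fmean = f.mean()
    return np.where(f > fmean, f - fmean, 0.0)
```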
Iterative threshold segmentation: the whole image of the waterborne scene is viewed as two regions, the water surface below the sky-water line and the sky and ground above it. When the MSE of the original image is at most K, interference is small and iterative threshold segmentation is applied to the image directly, yielding an effective and reliable segmentation of the two regions; when the MSE exceeds K, most interference is removed by the preprocessing above, after which iterative threshold segmentation again segments the two regions reliably.
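The patent does not spell out the iteration; a common form of iterative threshold selection (the Ridler-Calvard scheme, assumed here as a plausible instance) starts from the global mean and moves the threshold to the midpoint of the two class means until it stabilizes:

```python
import numpy as np

def iterative_threshold(f, eps=0.5):
    """Iterative threshold selection sketch (Ridler-Calvard style):
    split pixels at t, recompute the two class means, and set the new
    t to their midpoint; stop when t no longer moves."""
    t = f.mean()
    while True:
        low, high = f[f <= t], f[f > t]
        # Guard against an empty class on degenerate images.
        m_low = low.mean() if low.size else t
        m_high = high.mean() if high.size else t
        t_new = 0.5 * (m_low + m_high)
        if abs(t_new - t) < eps:
            return t_new
        t = t_new
```

Thresholding the image at the returned value then separates the water-surface region from the sky-and-ground region.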
Roberts gradient operator edge detection: the Roberts gradient is computed separately for each pixel of the image.
Binarization: and carrying out binarization on the gradient image by adopting an edge threshold strategy. In an image, the number of non-edge points occupies a certain proportion of the total number of pixel points of the image, and the corresponding scale factor is represented as Ratio. And gradually accumulating image points from the low gradient value grade according to the image gradient value corresponding histogram, wherein when the accumulation number reaches the Ratio of the total number of pixels of the image, the corresponding image gradient value is the segmentation threshold value. Ratio is 0.95.
Thinning: the binary image may have pixels connected into one piece, which affects the extraction accuracy and real-time performance of Hough transform, so it needs to be refined. The principle of refinement is to reduce the line segment to one pixel along the vertical direction.
Extracting the sky-water line by Hough transform:
straight lines are extracted with the Hough transform, which maps image space into a parameter space; the basic idea is the duality of points and lines. Using the polar (normal) equation of a line, ρ = x·cos θ + y·sin θ, the Hough transform represents each point of a straight line in image space as a sinusoid in parameter space. Here the parameter space is discretized into an accumulator array: each point (x, y) in the image is mapped to a series of accumulators in parameter space, and each corresponding accumulator is incremented by 1. If the image contains a straight line, a local maximum appears in one of the accumulators in parameter space; by detecting this local maximum, the parameter pair (ρ, θ) of the line is determined and the line is detected.
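The accumulator scheme above can be sketched directly (a minimal NumPy sketch; the grid resolutions are illustrative, and only the single strongest line is returned):

```python
import numpy as np

def hough_line(edges, n_theta=180, n_rho=200):
    """Minimal Hough transform sketch: each edge pixel (x, y) votes
    along the sinusoid rho = x*cos(theta) + y*sin(theta); the (rho,
    theta) cell with the most votes is the detected line."""
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = np.hypot(*edges.shape)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in zip(xs, ys):
        for j, th in enumerate(thetas):
            rho = x * np.cos(th) + y * np.sin(th)
            i = int(round((rho + diag) / (2 * diag) * (n_rho - 1)))
            acc[i, j] += 1            # one vote per (rho, theta) cell
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return rhos[i], thetas[j]
```

A horizontal edge row (as a thinned sky-water line would be) yields θ ≈ π/2 and ρ equal to its row coordinate.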
The credibility of the sky-water line is an uncertainty-reduction problem in which several pieces of evidence must support the same fact, so evidence theory is used to judge the credibility of the extracted sky-water line.
(1) Whether the sky-water line lies in the region where it can possibly appear;
once the system installation is fixed, the sky-water line must lie within a certain sub-area of the image. If the extracted line lies in this sub-area, its credibility M1 is 1; otherwise M1 is 0. In formula form:
M1 = 1, if y1 ≤ y ≤ y2
M1 = 0, if y < y1 or y > y2
where y1 and y2 are the row coordinates of the middle pixel of the highest and lowest possible sky-water lines, respectively, and y is the row coordinate of the middle pixel of the extracted sky-water line.
(2) The credibility obtained by comparing the contrast of the sub-areas above and below the sky-water line;
since the gray level of the area above the sky-water line is generally higher than that of the area below it, the higher the contrast between the two areas, the more credible the line. It is computed as:
M2 = [ Σ_{h1<x<h2} Σ_{w1<y<w2} f(x,y) − Σ_{h1+Δh<x<h2+Δh} Σ_{w1<y<w2} f(x,y) ] / Σ_{h1<x<h2} Σ_{w1<y<w2} f(x,y)
(3) The credibility M3 given by the association between the middle pixels of the sky-water lines extracted in consecutive frames.
M3 = 1 / |y(t) − y(t−1)|; when y(t) = y(t−1), M3 = 1
where y(t) and y(t−1) are the row coordinates of the middle pixel of the sky-water line extracted in the current frame and the previous frame; the larger the difference between them, the lower the credibility of the sky-water line.
(4) The comprehensive credibility M of the sky-water line is obtained with the D-S combination rule as M = M1 · M2 · M3.
Let Mt be a suitably chosen threshold. When M > Mt, the sky-water line extracted from this frame is considered correct; otherwise the criterion above cannot confirm its correctness. In that case the position of the previous frame's sky-water line and the contrast of the areas above and below it are saved, and recognition continues with the sky-water line of the next frame.
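The three evidences and their product-rule combination can be sketched as one function (a minimal sketch; the argument names are illustrative, and the M2 contrast is assumed to be computed separately from the formula above):

```python
def horizon_confidence(y, y1, y2, contrast_m2, y_prev):
    """Combine the three evidences into the overall credibility
    M = M1 * M2 * M3 of the extracted sky-water line.
    y      : row coordinate of the extracted line's middle pixel
    y1, y2 : bounds of the region where the line can possibly appear
    contrast_m2 : the contrast evidence M2, precomputed
    y_prev : row coordinate of the line in the previous frame"""
    m1 = 1.0 if y1 <= y <= y2 else 0.0          # position evidence
    dy = abs(y - y_prev)
    m3 = 1.0 if dy == 0 else 1.0 / dy           # inter-frame evidence
    return m1 * contrast_m2 * m3
```

The result would then be compared against the threshold Mt to accept or reject the frame's sky-water line.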
After the sky waterline is correctly detected, an image area in a certain range above and below the sky waterline is used as a region of interest (ROI).
The image quality of the ROI is evaluated; when the evaluation result is 1, an infrared target detection algorithm based on a single frame image is adopted, and when the evaluation result is 0, an infrared target detection algorithm based on an image sequence is adopted;
in the wavelet analysis method, two-dimensional wavelet transforms perform frequency selection and multi-scale decomposition, suppressing background noise and enhancing the target; the low-frequency and high-frequency parts of the original image are separated, multi-resolution analysis is then applied to each low-frequency and high-frequency component, target features are extracted, and target detection is carried out;
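The separation of low- and high-frequency parts can be illustrated with a single-level Haar decomposition (a minimal NumPy sketch of one possible 2-D wavelet step, assuming even image dimensions; the patent does not specify the wavelet used):

```python
import numpy as np

def haar_dwt2(f):
    """One-level 2-D Haar wavelet decomposition sketch: returns the
    low-frequency approximation LL and the high-frequency detail
    bands LH, HL, HH, each of half the image size."""
    a, b = f[0::2, :], f[1::2, :]           # adjacent row pairs
    lo, hi = (a + b) / 2.0, (a - b) / 2.0   # vertical low / high pass
    # Horizontal low / high pass on each intermediate band.
    ll, lh = (lo[:, 0::2] + lo[:, 1::2]) / 2.0, (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    hl, hh = (hi[:, 0::2] + hi[:, 1::2]) / 2.0, (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return ll, lh, hl, hh
```

Applying the same step recursively to LL yields the multi-resolution pyramid from which target features would be extracted.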
in the fractal method, artificial targets such as ships and piers show much stronger variation of fractal features with scale than the natural background. A multi-scale fractal feature image is therefore extracted from the ROI sub-image after it has been enhanced by fuzzy filtering, and target detection is finally performed on the multi-scale fractal features with a probability relaxation method. The fractal model is a mathematical model suited to describing objects with complex, irregular shapes; its basic principle is to exploit the difference between natural scenes and artificial targets in fractal dimension. Normally the fractal dimension is a linear estimate of the logarithmic relationship between the measure and the measurement scale over all scales, i.e. the fractal dimension is assumed constant within the scale range, which matches an ideal fractal model. Most natural scenes, however, are only approximately fractal within a limited range of scales, and real images are affected by imaging noise, quantization error and the like, so natural scenes often cannot be described by a standard fractal dimension, and separating natural background from artificial targets by the standard fractal dimension alone does not achieve the desired effect. Target detection is therefore approached through the difference in fractal behaviour between target and background without computing the fractal parameters precisely: background and target vary differently with scale, and this difference in multi-scale variation is used for detection.
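The underlying fractal-dimension estimate can be illustrated by box counting (a minimal NumPy sketch; the patent's multi-scale fractal feature would instead track how this estimate *varies* across scales, which is not shown here):

```python
import numpy as np

def box_counting_dimension(mask, scales=(1, 2, 4, 8)):
    """Box-counting sketch: count occupied s x s boxes N(s) at several
    scales s, then fit the slope of log N(s) against log(1/s); the
    slope is the fractal-dimension estimate."""
    counts = []
    for s in scales:
        h, w = mask.shape[0] // s, mask.shape[1] // s
        # A box is occupied if any pixel inside it is set.
        boxes = mask[:h * s, :w * s].reshape(h, s, w, s).any(axis=(1, 3))
        counts.append(max(int(boxes.sum()), 1))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales, dtype=float)),
                          np.log(counts), 1)
    return slope
```

For a filled region the estimate approaches 2 (a plane), while rough natural textures fall between 2 and 3 in the gray-surface analogue.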
In the mathematical morphology method, the ROI sub-image is first median filtered, and the pixels of maximum brightness in the filtered image are taken as the marker image; a top-hat transform is applied to the original image, and morphological reconstruction is performed with the image after iterative threshold segmentation as the mask image, realizing infrared ship target detection. Morphological reconstruction: the idea is to approximate the mask image by repeatedly dilating the marker image, thereby restoring part or all of the mask image depending on the choice of marker. The characteristic of reconstruction is that the region of interest in the mask image can be extracted through the choice of the marker image.
The reconstruction of g (the mask image) from f (the marker image) is defined by the following iterative procedure:
initialize h_1 to the marker image f;
create a 3×3 structuring element B in which every element is 1;
repeat h_{k+1} = (h_k ⊕ B) ∧ g until h_{k+1} = h_k.
Note: the marker f must be a subset of g, and the reconstructed image is a subset of the mask image g.
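For binary images the iteration above reduces to dilate-then-AND (a minimal NumPy sketch; the ∧ with the mask is implemented as a logical AND, and dilation by the 3×3 block of ones as an OR over the 8-neighbourhood):

```python
import numpy as np

def reconstruct(marker, mask):
    """Morphological reconstruction by dilation, binary case:
    h_1 = marker; h_{k+1} = dilate(h_k, 3x3 ones) AND mask;
    iterate until stable. The marker must be a subset of the mask."""
    h = marker.astype(bool)
    mask = mask.astype(bool)
    while True:
        p = np.pad(h, 1)
        d = np.zeros_like(h)
        # Dilation by a 3x3 block of ones = OR over all 9 shifts.
        for i in range(3):
            for j in range(3):
                d |= p[i:i + h.shape[0], j:j + h.shape[1]]
        h_next = d & mask
        if np.array_equal(h_next, h):
            return h
        h = h_next
```

Only the mask component containing the marker survives, which is exactly how the bright-pixel marker extracts the ship region from the segmented mask.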
Finally, the target detection results from the different methods are combined by evidential reasoning, with the Dempster rule of evidence combination used to synthesize the evidence, yielding an infrared ship target detection result identified with high confidence. Evidence theory needs no prior probabilities or conditional probability densities: it uses a belief function rather than probability as its measure and interval estimates rather than point estimates to describe uncertain information, showing great flexibility in distinguishing the unknown from the uncertain and in accurately reflecting the gathered evidence.
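Dempster's rule for two sources can be sketched for the simplified case of singleton hypotheses (a minimal sketch; full D-S theory assigns mass to *sets* of hypotheses, which is omitted here, and the hypothesis labels are illustrative):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination sketch for two mass functions
    over singleton hypotheses (dicts mapping hypothesis -> mass).
    Agreeing mass products are summed; conflicting mass K is
    renormalized away by the 1/(1 - K) factor."""
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            if a == b:
                combined[a] = combined.get(a, 0.0) + pa * pb
            else:
                conflict += pa * pb
    if conflict >= 1.0:
        raise ValueError("total conflict between evidence sources")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}
```

Two detectors that each lean toward "ship" reinforce one another: the combined belief in "ship" exceeds either individual belief.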
Preferably, the image processing and analyzing in step b further comprises the steps of:
intelligent tracking of multiple maneuvering infrared ship targets is realized by combining artificial neural network and fuzzy inference techniques. Using an artificial neural network for multi-maneuvering-target tracking gives the system good adaptivity, self-organizing learning, association and fault tolerance; compared with traditional stochastic adaptive systems, a trained network has stronger judgment and recognition capability and can find solutions to problems independently. Its fault tolerance is also strong: target detection, parameter estimation, target feature extraction and recognition, system modelling and the like remain possible with uncertain data and environments, particularly under heavy noise and interference. Fuzzy inference imitates human reasoning about real things; in multi-target tracking, situations such as targets that "may or may not be associated" or "may or may not belong to a class" arise constantly. Fuzzy inference is used to assess the threat of the tracked targets, judging target type, speed and so on to infer each target's threat level, so that a collision-avoidance scheme can be offered to the ship's operators accurately, providing a favourable guarantee.
Traditional methods struggle to meet the engineering requirements here; fuzzy inference simplifies these problems enough to satisfy the real-time requirement of target tracking.
Preferably, the image processing and analyzing in step b further comprises the steps of:
adaptive and learning capability is added to a classical pattern recognition algorithm, and a knowledge base from artificial intelligence technology, together with the inter-frame context of the images, is used to recognize infrared ship targets. Target features are extracted within the segmented region: position features, shape features, size features, radiation features, and features extracted by wavelet analysis, which together form the input of an RBF neural network that recognizes the infrared ship target. The RBF neural network designed by the invention has a three-layer structure: the input layer takes 8 input parameters, the hidden layer contains 12 nodes, and the output layer outputs the target type.
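The forward pass of such a network can be sketched with the stated dimensions (a minimal NumPy sketch; the random centers, weights, and the 4-class output size are placeholders standing in for trained parameters, which the patent does not give):

```python
import numpy as np

def rbf_forward(x, centers, sigma, w):
    """Three-layer RBF classifier sketch: 8 input features, Gaussian
    hidden nodes, and a linear output layer producing class scores."""
    # Hidden layer: Gaussian radial basis activations around each center.
    d2 = ((x[None, :] - centers) ** 2).sum(axis=1)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))
    # Output layer: linear combination of hidden activations.
    return w @ phi

rng = np.random.default_rng(0)
centers = rng.normal(size=(12, 8))   # 12 hidden nodes over 8-D inputs
w = rng.normal(size=(4, 12))         # 4 ship classes (an assumption)
scores = rbf_forward(rng.normal(size=8), centers, sigma=1.0, w=w)
```

In use, the 8 extracted features (position, shape, size, radiation, wavelet) would form `x`, and `argmax(scores)` would give the recognized target type.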
Preferably, step b further comprises the following steps:
b1, visible-light video image target sampling and digital signal processing unit: a visible-light image sensor mounted in front of the radar screen photographs the screen to obtain its image. The captured image is converted into a digital signal by a video acquisition card and sent to the programmable logic device in the visible-light video image digital signal processing unit for timing conversion and bus control, and then to the image memory of the unit for storage. The image memory passes the data to the programmable logic device of the unit; the processing and analyzing program in the unit's program memory extracts the image from the image memory for processing and analysis and sends the result to the unit's controller, which, upon receiving it, returns a confirmation signal to the video acquisition card. Finally, the distance, azimuth, speed and acceleration between the installation position of the system and the surrounding target ships, bridges, piers and reefs are sent through the serial communication interface to the information fusion unit in the central processing system for information fusion;
b2, information fusion: the information fusion unit receives information from the infrared video image digital signal processing unit and the visible-light video image digital signal processing unit and returns confirmation information to each in the corresponding step b. It takes the azimuth, speed and acceleration between the system's installation position and the surrounding target ships, bridges, piers and reefs obtained from the infrared unit in step b, and the distance, azimuth, speed and acceleration for the same targets obtained from the visible-light unit, and by information fusion produces the final result: accurate distance, azimuth, speed and acceleration between the system's installation position and each surrounding target ship, bridge, pier and reef. The targets are marked in the image at the same time, and the result is sent to the system decision unit through the serial communication interface.
Preferably, the information input in step a is combined with the distances, azimuths, speeds and accelerations between the system's installation position and the surrounding target ships, bridges, piers and reefs obtained in step b; a fuzzy expert system method synthesizes the early-warning decision elements and their degrees of influence into mutually independent, orthogonal principal components, from which the ship collision-avoidance risk degree is finally determined and the active early-warning decision is made. If the system is installed on a ship, the ship's model, size, handling performance parameters, load, heading and real-time speed obtained in step a are combined with the distances, azimuths, speeds and accelerations between the system's installation position and the surrounding target ships, bridges, piers and reefs obtained in step b; the same fuzzy expert system method synthesizes the early-warning decision elements and their degrees of influence into mutually independent, orthogonal principal components, finally determines the collision-avoidance risk degree, and makes the active early-warning decision on that basis.
The active safety early warning system is a complex system composed of three elements: the human, the ship and the environment. Factors affecting any of the three therefore bear strongly on the active early-warning decision. On the basis of the collision-avoidance information acquired by the ship's navigation equipment, a fuzzy expert system method determines the ship collision-avoidance risk degree, and the active early-warning decision is made on that basis.
While the ship is under way, the early-warning decision elements obtained from the sensors are x1, x2, x3, …, xn; each element influences the early-warning decision to a different degree, and the elements also influence one another.
Let the degree of influence of the i-th element on the early-warning decision be w_i (1 ≤ i ≤ n), and let the extracted samples be X: x_1^t, x_2^t, …, x_n^t (1 ≤ t ≤ n). The n characteristic indices are now synthesized into n mutually orthogonal, independent principal components y_1, y_2, …, y_n, written in matrix form as
Y=C·X (2)
Wherein
C = [c_11 … c_1n; … ; c_n1 … c_nn] is the n×n transform matrix, and Y = [y_1, …, y_n]^T.
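The transform Y = C·X can be sketched as a principal component analysis (a minimal NumPy sketch of the standard PCA construction, assumed here as the way the orthogonal components would be obtained; the patent does not specify the estimation procedure):

```python
import numpy as np

def principal_components(X):
    """PCA sketch of Y = C * X: derive the orthogonal transform C from
    the eigenvectors of the sample covariance so the components y_i
    are mutually uncorrelated. X holds one sample per column
    (n features x t samples)."""
    Xc = X - X.mean(axis=1, keepdims=True)          # center each feature
    cov = Xc @ Xc.T / (X.shape[1] - 1)              # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending order
    C = eigvecs[:, ::-1].T                          # rows = components
    return C, C @ Xc
```

Because C is orthogonal and diagonalizes the covariance, the rows of Y are uncorrelated, matching the "mutually independent and orthogonal" requirement in the text.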
A fuzzy expert system can handle uncertain data and propositions, i.e. truth values taken anywhere in [0, 1], and uses fuzzy techniques such as fuzzy sets, fuzzy numbers and fuzzy relations to express and handle the uncertainty and imprecision of knowledge. Because very many uncertain factors arise while a ship is under way, using a fuzzy expert system for the active safety early-warning decision strengthens the practicality of the system.
The active safety early warning system consists of three main parts: information collection, data processing, and decision making.
Preferably, in step b2, based on the azimuth, speed and acceleration between the intelligent all-weather active safety early warning system and the target ships, bridges, piers and reefs ahead obtained by the infrared video image digital signal processing unit in step b, and the distance, azimuth, speed and acceleration for the same targets obtained by the visible-light video image digital signal processing unit in step b1, the high-precision angle measurement of the thermal infrared imager and the high-precision range measurement of the radar complement one another, and information fusion yields an accurate estimate of the target position. Fusion of radar and thermal infrared imager at the feature layer uses a centralized processing method: the target centroid is first extracted from the infrared image; the redundant angle measurements of the thermal imager are then compressed by least-squares estimation into pseudo angle measurements aligned in time with the radar measurements; these are fused with the radar's azimuth measurements to obtain a synchronous fused estimate; finally, the data obtained from the radar-infrared fusion update the target state of the filter. At the decision layer a distributed processing method is adopted: radar and infrared each establish a track for the target, and the radar and infrared tracks are then associated and fused.
The radar as an active sensor can measure and provide complete position information of a target all the time, so that the radar plays an important role in the aspects of target detection and tracking. However, since the radar radiates a high-power electromagnetic wave into the air during operation, the radar is susceptible to electronic interference, and the angle measurement accuracy is low.
The thermal infrared imager radiates no energy into the air; it detects and localizes targets by receiving the heat they radiate, which gives it strong anti-jamming capability along with high angle-measurement accuracy and strong target recognition, but it cannot measure range.
By means of high-precision distance measurement of the radar and high-precision angle measurement of the thermal infrared imager, information complementation is utilized, and accurate estimation of the position of a target can be given through an information fusion technology, so that tracking and identification of the target are improved.
(1) Sensor measurement model
The thermal infrared imager measures the azimuth and elevation of the target's brightness center; assuming the brightness center coincides with the centroid, the measurement model is:
θ_I(k) = θ(k) + υ_θI(k),
φ_I(k) = φ(k) + υ_φI(k).
where θ_I(k), φ_I(k) are the infrared measurements, θ(k), φ(k) are the actual angles, and υ_θI(k), υ_φI(k) are the angle measurement noises, zero-mean white Gaussian. The target state vector is chosen as the position, velocity and acceleration in the inertial frame, i.e.
X(k) = [x(k) ẋ(k) ẍ(k) y(k) ẏ(k) ÿ(k) z(k) ż(k) z̈(k)]^T. Then

θ_I(k) = arctan[z(k)/x(k)] + υ_θI(k)
φ_I(k) = arctan[y(k)/√(x²(k) + z²(k))] + υ_φI(k).
The radar can directly measure the distance and the azimuth angle of a target, and the measurement model is
r_R(k) = r(k) + υ_rR(k),
θ_R(k) = θ(k) + υ_θR(k),
φ_R(k) = φ(k) + υ_φR(k).
where r_R(k), θ_R(k), φ_R(k) are the radar measurements, r(k), θ(k), φ(k) the actual values, and υ_rR(k), υ_θR(k), υ_φR(k) the measurement noises, zero-mean white Gaussian. With the target state vector chosen as above,
r_R(k) = √(x²(k) + y²(k) + z²(k)) + υ_rR(k)
θ_R(k) = arctan[z(k)/x(k)] + υ_θR(k)
φ_R(k) = arctan[y(k)/√(x²(k) + z²(k))] + υ_φR(k).
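The two measurement models above (noise terms omitted) can be sketched as follows; `arctan2`/`hypot` are used for quadrant and overflow safety, which is a small departure from the plain arctan in the formulas:

```python
import numpy as np

def ir_measurement(pos):
    """Thermal-imager measurement sketch: azimuth theta and elevation
    phi of the target position (x, y, z), noise omitted."""
    x, y, z = pos
    theta = np.arctan2(z, x)                 # azimuth: arctan(z / x)
    phi = np.arctan2(y, np.hypot(x, z))      # elevation over ground range
    return theta, phi

def radar_measurement(pos):
    """Radar measurement sketch: range r plus the same two angles."""
    x, y, z = pos
    r = np.sqrt(x * x + y * y + z * z)
    theta, phi = ir_measurement(pos)
    return r, theta, phi
```

In a fusion filter, these functions would serve as the (nonlinear) measurement equations h(X) linking the Cartesian state vector to each sensor's observations.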
(2) Structural model and algorithm of radar and infrared image signal fusion and decision system
Feature-layer fusion serves the following purpose: the feature information from infrared target identification and tracking assists the radar's own target identification and tracking, raising the system's target detection probability and lowering its false-alarm probability. Decision-layer fusion serves two purposes. First, when the target is distant, the radar module's tracking decision information steers the servo control system of the thermal infrared imager so that the target falls within the imager's field of view; once the target is close, the imager identifies and tracks it by imaging analysis. This compensates for the imager's short operating range while exploiting the high accuracy of its tracking decisions at close range. Second, when one sensor module loses or degrades its target tracking capability because of interference or similar causes, the other module's tracking decision information can correct it, improving the system's interference immunity and the reliability of the whole target recognition and tracking system. Even if one sensor loses its recognition and tracking capability outright through a software or hardware fault, the fusion-center decision controller can still track the target correctly from the other sensor's recognition and tracking decision signals.
Because the data acquisition rate of the thermal infrared imager is markedly higher than that of the radar, feature-layer fusion uses centralized processing. The basic procedure for fusing the radar and the thermal imager is: first extract the target centroid from the infrared image; then compress the imager's redundant angle measurements by least-squares estimation to generate pseudo angle measurements aligned in time with the radar measurements; fuse these with the radar's azimuth measurements to obtain synchronized fused estimates; finally, use the radar-infrared fused data to update the target state of the filter.
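The least-squares compression of the imager's high-rate angle measurements into a radar-time-aligned pseudo measurement can be sketched as a straight-line fit evaluated at the radar sampling instant. The locally linear angle model and the function name are assumptions for illustration only:

```python
def pseudo_angle(ts, angles, t_radar):
    """Least-squares fit angle(t) = a + b*t to the high-rate infrared
    angle measurements (ts, angles), then evaluate the fit at the radar
    sampling instant t_radar to obtain a time-aligned pseudo measurement."""
    n = len(ts)
    tm = sum(ts) / n                 # mean of sample times
    am = sum(angles) / n             # mean of angle samples
    sxx = sum((t - tm) ** 2 for t in ts)
    sxy = sum((t - tm) * (a - am) for t, a in zip(ts, angles))
    b = sxy / sxx                    # slope (angular rate)
    a = am - b * tm                  # intercept
    return a + b * t_radar
```

For angle measurements that really do evolve linearly over the radar period, the pseudo measurement equals the true angle at the radar instant; noisy measurements are averaged out by the fit.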
Decision-layer fusion of the radar and infrared data uses distributed processing: the radar and the infrared sensor each establish a track for the target, and the two tracks are then associated and fused.
The invention has the advantages that:
(1) The system adopts an integrated design and detects in real time the distance, bearing, speed and acceleration between a running ship and multiple targets ahead. The central processing unit (computer) analyzes the data intelligently, makes the danger early-warning judgment and decision automatically, and raises the alarm automatically through user-friendly means such as visual video, text, sound, light and electrical signals, achieving all-weather active safety early warning for the ship.
(2) Using electronic technology, circuit-bus technology and their design principles, the central processing unit adopts an optimized integrated design: signal conditioning, input buffering, sample-and-hold, data acquisition, A/D conversion, CPU processing, D/A conversion, signal amplification, the display/alarm circuit and the execution control circuit are combined and integrated into one unit, unifying data sampling, conversion, intelligent analysis and control.
(3) The infrared thermal imaging technology is applied to safe navigation and collision avoidance of ships and is not influenced by electromagnetic interference.
(4) Information from the shipborne radar and the infrared sensor is fused by a dedicated information-fusion computer, which raises the system's target detection probability, lowers its false-alarm probability, improves its interference immunity and increases the reliability of the whole target recognition and tracking system.
(5) The system works around the clock, especially at night and in foggy or rainy weather with poor visibility;
(6) the alarm modes are diversified, the active safety early warning decision is intelligent, and ship operating personnel can acquire multi-target information accurately, timely, fully, conveniently and visually;
(7) small volume, complete functions and low price;
(8) the system is stable, real-time and accurate. For a bridge about 3 km from the system: false-alarm rate ≤ 1%; missed-report rate ≤ 1%; detection, tracking and identification accuracy ≥ 98%. For a ship about 1 km from the system: false-alarm rate ≤ 2%; missed-report rate ≤ 2%; detection, tracking and identification accuracy ≥ 98%.
(9) Each part of the system is installed on the principle of not altering the original ship's appearance or electrical circuits, only adding equipment to the existing ship, which greatly simplifies the installation of the various system parts.
The intelligent all-weather active safety early-warning method and system for ship running greatly improve operators' awareness of the navigation environment, strengthen the ship's active safe-navigation capability under poor visibility, assist operators in decision-making, and reduce ship-operating errors. They markedly raise the success rate of active collision avoidance; reduce or prevent traffic accidents such as ship-to-ship collision, ship-bridge collision and grounding on reefs; protect the life and property of personnel; reduce or prevent serious water pollution and environmental accidents; and safeguard shipping transport.
Description of the drawings
FIG. 1 is a schematic structural view of the present invention;
FIG. 2 is a schematic diagram of an IR video image DSP unit according to the present invention;
FIG. 3 is a block diagram illustrating the steps of the infrared target sampling, processing and identification method of the present invention;
FIG. 4 is a block diagram of the steps of the security precaution of the present invention;
The invention is further described below with reference to specific embodiments, which do not limit the invention to the described embodiments:
example 1: the system is arranged on the bridge
The intelligent all-weather active safety early warning system for ship running is fixedly installed on a bridge to provide active anti-collision early warning for the bridge piers. Moving ships passing the bridge are detected and identified by infrared video image processing; each ship is tracked and positioned, its future direction of motion is judged, and running ships are monitored in real time. When a ship enters the early-warning area, whose size is set according to the actual site conditions, the computer evaluates the situation: if the ship's direction of motion does not point at a pier, the ship is passing safely and the alarm device is not started; if the ship's motion vector points at a pier, the alarm device is started and an alarm broadcast reminds the ship's driver to take avoidance measures such as changing course or speed, ensuring the safety of both pier and ship.
Of course, depending on the installation position and field conditions, the system can also be fixedly installed on a port traffic-safety management monitoring platform, a bridge, the top of a pontoon wharf, navigation-aid structures of restricted areas, or land buildings near ports and wharfs. If a ship's future direction of motion points at a pier or restricted area, or threatens important equipment and facilities in key port and wharf areas, the alarm device is started; an alarm broadcast alerts the monitoring staff and warns the ship's driver to take avoidance measures such as changing course or speed, so that accidents are avoided.
Mounting system
The installation must ensure that the field of view swept by the thermal infrared imager 111, driven by the holder 112, completely covers the key area and the monitored water surface without occlusion; the mounting position must be more than 5 meters above the water surface, and the imaging of the fully covered key area and water surface must occupy more than half of the whole image.
The system part is as follows:
referring to fig. 1 and 2, the intelligent all-weather active safety early warning system for ship driving is composed of a target sampling unit 1, a system central processing unit 2, a display unit 3, a single chip microcomputer 4 and a safety early warning unit 5;
the target sampling unit 1 consists of an outdoor intelligent high-speed holder 112 with a decoder and a thermal infrared imager 111 with a video acquisition card 113, the thermal infrared imager 111 is fixed on the outdoor intelligent high-speed holder 112, the outdoor intelligent high-speed holder 112 is connected with the system central processing unit 2 through the decoder, and the thermal infrared imager 111 is connected with the system central processing unit 2 through the video acquisition card 113;
the system central processing unit 2 comprises a keyboard 26, an infrared video image digital signal processing unit 21 and a system decision processing unit 25;
wherein,
the keyboard 26 is used for inputting information and control instructions to the system decision processing unit 25;
the infrared video image digital signal processing unit 21 is used for receiving the digital image signals of the target sampling unit 1, calculating the position, the azimuth angle, the speed and the acceleration of the target by utilizing a processing and analyzing device in a memory according to the digital image signals, and sending the calculation result to the system decision processing unit 25 through a serial communication interface for use;
the system decision processing unit 25 is used for performing active safety early warning decision analysis and calculation on the information from the digital signal processing unit in combination with the information input by the keyboard 26 information input unit to obtain a final avoidance scheme, and then performing active safety early warning simultaneously through the display unit 3 and the warning unit 5;
the display unit 3 is used for displaying images marked on targets around the installation position of the system, azimuth angles, speeds and accelerations between the targets and the installation position of the system, and displaying recommended avoidance schemes and danger levels;
the singlechip 4 is used for receiving the decision result of the system decision processing unit 25 and controlling the alarm mode of the alarm unit 5;
and the alarm unit 5 is used for receiving the control signal of the singlechip 4 and using sound and light modes with different frequencies to express the prompt information of the danger level and the danger direction.
The system adopts an integrated design: it detects in real time the distance, bearing, speed and acceleration between the installation position and the surrounding moving-ship, bridge, pier and reef targets; the central processing unit analyzes the data intelligently, makes the danger early-warning judgment and decision automatically, and raises the alarm automatically through user-friendly means such as visual video, text, sound, light and electrical signals, achieving all-weather active safety early warning for ships.
The infrared video image digital signal processing unit 21 consists of two floating-point digital signal processors (TMS320C6713), both connected to the video acquisition card 113 of the thermal infrared imager 111. Each TMS320C6713 subsystem includes an image memory, a programmable logic device, a program memory and a controller; the processing and analyzing device resides in the program memory. The TMS320C6713 is compact, inexpensive, highly stable and real-time capable, can acquire and process four video channels simultaneously, and its strong processing capability, flexibility and programmability satisfy the requirements of the target detection, tracking and identification algorithms.
A large-field-of-view image registration and stitching device is provided in the memory of the infrared video image digital signal processing unit 21; it automatically stitches the infrared image sequence into a panoramic view and sends the stitched panorama to the system decision processing unit 25 for storage. The device estimates inter-frame transformation parameters by a global motion estimation method to stitch the sequence automatically. For motion-parameter estimation it uses pyramid hierarchical block-matching motion-vector estimation, which effectively speeds up the program, and adds an elimination operation for abnormal blocks, which greatly improves the accuracy of the global motion estimation. In addition, when light intensity differs between images, a light-difference balancing operation is applied to obtain a better stitching result. The method generates the large-field-of-view image of the video sequence accurately and quickly, enlarging the field of view of the video surveillance. The panorama stored in the system decision processing unit 25 can be used for later analysis and evaluation and for accident-cause investigation.
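The pyramid hierarchical block matching is not specified in detail in the text. As a single-level stand-in, the global translation between two frames can be estimated by exhaustive search minimising the mean sum of absolute differences (SAD) over the overlapping area; the function name and search window below are illustrative assumptions:

```python
def estimate_translation(prev, cur, search=2):
    """Estimate the global translation (dy, dx) between two frames by
    exhaustive search over a small window, minimising the mean sum of
    absolute differences on the overlapping area. A single-level
    stand-in for the pyramid block-matching motion estimation above.
    prev and cur are 2-D lists of gray values of equal size."""
    h, w = len(prev), len(prev[0])
    best, best_dy, best_dx = None, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cost = n = 0
            # accumulate SAD over the region where both frames overlap
            for y in range(max(0, dy), min(h, h + dy)):
                for x in range(max(0, dx), min(w, w + dx)):
                    cost += abs(cur[y][x] - prev[y - dy][x - dx])
                    n += 1
            cost /= n
            if best is None or cost < best:
                best, best_dy, best_dx = cost, dy, dx
    return best_dy, best_dx
```

A pyramid version would run this search coarse-to-fine on downsampled copies of the frames, refining the estimate at each level, and would discard blocks whose match cost is anomalous, as the text describes.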
The system decision processing unit 25 is a PC computer. The common PC computer has the advantages of low price, strong function and convenient interface, can meet the requirement of decision calculation of the system, is convenient to be connected with each part of the system, and can fully store the infrared video images collected by the target sampling unit 1 and the panoramic view automatically spliced by the image registration and splicing device in the infrared video image digital signal processing unit 21 due to the large capacity of the hard disk of the computer.
The display unit 3 comprises 2 displays, a display 31 and a display 32, wherein the display 31 displays infrared video images of a plurality of marked targets around; and related text information: the distance, azimuth angle, speed and acceleration between the installation position of the intelligent all-weather active safety early warning system for ship running and a plurality of ships, bridges, piers and reef targets around; the display 32 displays a textual description of the recommended avoidance maneuver: the method comprises the steps of adopting variable speed yielding and/or steering yielding, speed, direction, danger level, danger direction and alarm mode.
The alarm unit 5 is controlled by the single chip microcomputer 4, which is connected to the serial communication interface of the system decision processing unit 25 and drives an external loudspeaker and/or alarm lamp.
The intelligent all-weather active safety early warning system for ship running can be fixedly installed on a bridge, a port, a pontoon wharf, a dangerous river section, a restricted area, a lock gate or a ship. When the system is fixedly installed at key locations such as ports, wharfs, bridges and restricted areas on oceans and rivers, moving ships passing the key area are detected and identified by infrared video image processing, tracked and positioned, and their future direction of motion is judged; running ships are monitored in real time. When a ship enters the early-warning area, the computer evaluates the situation; if the ship's future direction of motion points at a pier or restricted area, or threatens important equipment and facilities in the key port and wharf areas, the monitoring staff are reminded and the ship's driver is warned by broadcast, so that accidents are avoided.
The method comprises the following steps:
referring to fig. 1, 2, 3 and 4, the early warning method of the intelligent all-weather active safety early warning system for ship driving comprises the following steps:
a. starting system
Starting the intelligent all-weather active safety early warning system for ship running, and inputting information and control instructions by a keyboard 26; the intelligent all-weather active safety early warning system for ship running can be fixedly arranged at different positions, the size of an early warning area is determined according to actual conditions, the early warning area is set through a keyboard 26, and the early warning area is sent to a system decision processing unit 25 through a serial communication interface;
b. target sampling and digital signal processing
Infrared video image target sampling and digital signal processing: driven by the holder 112, the thermal infrared imager 111 scans the surroundings at the specified time interval and angular step, photographing the surrounding water-surface environment. The recorded image is converted into digital signals by the video acquisition card 113 and sent to the programmable logic device in the infrared video image digital signal processing unit 21 for timing conversion and bus control; the control-line signals and the image information are then stored in the image memory of the digital signal processing unit, which returns confirmation information to the programmable logic device. The processing and analyzing device in the program memory fetches the image from the image memory, processes and analyzes it, and sends the result to the controller of the digital signal processing unit; after receiving the information, the controller sends a confirmation signal to the video capture card 113. Finally, the bearing, speed and acceleration information between the system's installation position and the surrounding targets (ships, bridges, piers and reefs) is sent to the system decision unit 25 through the serial communication interface;
c. the system decision unit 25 combines the information obtained in step a with the information from step b, performs active safety early-warning decision analysis and calculation to obtain the final avoidance scheme, and then carries out active safety early warning simultaneously through the display 3 and the acousto-optic warning terminal 5;
wherein:
the display 3 displays images of a plurality of marked objects around the installation site of the system; and related text information: azimuth angles, speeds and accelerations between the installation position of the system and a plurality of target ships, bridges, piers and reefs around the system; and textual description of the recommended avoidance scheme: comprises adopting variable speed yielding and/or steering yielding, speed and direction;
intelligent collision-avoidance active safety early warning is carried out on the basis of this information. If there is danger, the danger level is judged; sound and light of different frequencies convey the danger level and danger direction through the acousto-optic alarm terminal 5, and the single chip microcomputer 4, connected to the serial communication interface of the system decision processing unit 25, controls an external loudspeaker or alarm lamp to raise the alarm.
Step b further includes image registration and stitching: the infrared image sequence is automatically stitched into a panoramic view, which is sent to the system decision processing unit 25 for storage. The large-field-of-view image registration and stitching device estimates inter-frame transformation parameters by a global motion estimation method to stitch the sequence automatically. For motion-parameter estimation it uses pyramid hierarchical block-matching motion-vector estimation, which effectively speeds up the program, and adds an elimination operation for abnormal blocks, which greatly improves the accuracy of the global motion estimation. The method generates the large-field-of-view image of the video sequence accurately and quickly, enlarging the field of view of the video surveillance. The stored panorama can be used for later analysis and evaluation and for accident-cause investigation.
The image processing and analysis in step b are carried out according to the following steps:
First, infrared image preprocessing is performed, including image denoising, image enhancement and sharpening, and image correction. An infrared image is the superposition of a real scene image, imaging noise and imaging interference. Let f(x, y) denote the infrared image acquired by the imaging system; the infrared scene image containing the target can then be expressed as:
f(x,y)=fT(x,y)+fB(x,y)+n(x,y)+n1(x,y)
fT(x, y) is the target gray value; fB(x, y) is the background image; n(x, y) denotes imaging noise and n1(x, y) imaging interference. The background image fB(x, y) typically has a long correlation length and occupies the low-frequency part of the spatial spectrum of the scene image f(x, y). At the same time, because of the non-uniform thermal distribution of the scene and the sensor, fB(x, y) is a non-stationary process whose local gray values may vary significantly; in addition, fB(x, y) contains high-frequency components in part of the spatial frequency domain, distributed mainly along the edges of the homogeneous regions of the background image.
Imaging noise n (x, y) is introduced in the imaging process and is a few small disturbances superimposed on random positions of the infrared image; imaging disturbance n1(x, y) is the result of the infrared imaging photosensor response non-uniformity or dummy which forms some erroneous image data points at random or fixed locations in the infrared image. Thus, the imaging disturbance n1(x, y) are shown as isolated points in the image with pixel gray values much larger or smaller than the median value of its surrounding neighborhood, and the purpose of infrared image denoising is to estimate the real scene image from the image f.
Two different algorithm combinations are used to extract the sky-water line. First the quality of the original image is evaluated, and the evaluation result decides whether the image is preprocessed; then the two algorithm combinations extract the sky-water line in parallel. The first combination comprises, in order: iterative threshold segmentation of the image, Roberts gradient-operator edge detection, thinning, and Hough-transform extraction of the first candidate line. The second combination comprises, in order: Roberts gradient-operator edge detection and binarization, thinning, and Hough-transform extraction of the second candidate line. After both candidate lines have been extracted, the one closer to the lower edge of the image is taken as the final sky-water line.
Image quality evaluation: the mean square error (MSE) is used to evaluate image quality and to decide whether to preprocess the image.
if MSE > K then perform image preprocessing
if MSE ≤ K then do not perform image preprocessing
Take K = 25.
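The text does not state what the MSE is computed against; one plausible reading is the MSE between the raw frame and a low-pass (3×3 mean-filtered) version of itself, so that heavy noise and clutter yield a large MSE, gated with K = 25. A sketch under that assumption (names are illustrative):

```python
def mse_against_smoothed(img):
    """Mean-square error between the frame and its 3x3 mean-filtered
    version; an assumed interpretation of the quality measure MSE.
    img is a 2-D list of gray values."""
    h, w = len(img), len(img[0])
    total = 0.0
    for y in range(h):
        for x in range(w):
            s = n = 0
            for dy in (-1, 0, 1):       # 3x3 neighbourhood mean,
                for dx in (-1, 0, 1):   # clipped at the image border
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        s += img[yy][xx]
                        n += 1
            total += (img[y][x] - s / n) ** 2
    return total / (h * w)

def needs_preprocessing(img, K=25):
    """Apply the gate from the text: preprocess only when MSE > K."""
    return mse_against_smoothed(img) > K
```

A perfectly flat frame gives MSE = 0 and skips preprocessing; a frame dominated by pixel-scale clutter gives a large MSE and triggers it.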
First-stage image preprocessing: in the sky and land region above the sky-water line in the infrared image, targets such as bridges, buildings on shore and continuous rocks generally occupy the highest gray levels of the image, and the gradient between them and their surroundings is often larger than the gradient on either side of the sky-water line. Especially when such targets form a nearly horizontal continuous linear distribution in the image, existing algorithms fail to extract the sky-water line correctly. The following method removes this high-brightness target interference. Let f(x, y) be the gray value of a pixel, M × N the total number of pixels in the image, and R the proportion of high-brightness pixels to be suppressed; let fM×N×R(x, y) denote the gray value of the (M × N × R)-th pixel when the gray values are sorted from high to low, r the lowest gray value among the eight neighbors of pixel (x, y), and g(x, y) the gray value of the pixel after preprocessing. Then
if f(x, y) ≥ fM×N×R(x, y) then g(x, y) = r
if f(x, y) < fM×N×R(x, y) then g(x, y) = f(x, y)
This step only lowers the brightness of some high-brightness pixels, so it does not affect accurate extraction of the sky-water line. R is taken as 0.05.
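The first-stage rule above can be sketched directly: the threshold is the gray value of the (M×N×R)-th brightest pixel, and every pixel at or above it is replaced by the minimum of its eight neighbours (function name is illustrative):

```python
def suppress_bright_targets(img, R=0.05):
    """First-stage preprocessing: replace the brightest M*N*R pixels by
    the minimum gray value r among their eight neighbours, per the rule
    above. img is a 2-D list of gray values; a new image is returned."""
    h, w = len(img), len(img[0])
    flat = sorted((img[y][x] for y in range(h) for x in range(w)), reverse=True)
    thresh = flat[max(0, int(h * w * R) - 1)]  # gray value of the (M*N*R)-th brightest pixel
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] >= thresh:
                # minimum over the eight neighbours, read from the
                # original image, clipped at the image border
                neigh = [img[yy][xx]
                         for yy in range(max(0, y - 1), min(h, y + 2))
                         for xx in range(max(0, x - 1), min(w, x + 2))
                         if (yy, xx) != (y, x)]
                out[y][x] = min(neigh)
    return out
```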
Second-stage image preprocessing: in the water-surface region below the sky-water line in the infrared image, strong water-wave interference is another main cause of extraction failure. Strong water waves occupy many of the water-surface pixels, and their gray values are distributed near the mean of the whole image. They are removed as follows: compute the image mean fmean; with h(x, y) the gray value of the pixel after the second-stage preprocessing, then
if f(x,y)>fmean then h(x,y)=f(x,y)-fmean
if f(x,y)≤fmean then h(x,y)=0
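The second-stage rule is a mean subtraction with clipping at zero; a minimal sketch (function name is illustrative):

```python
def suppress_waves(img):
    """Second-stage preprocessing: subtract the image mean from pixels
    above it and zero out the rest, per the rule above."""
    h, w = len(img), len(img[0])
    fmean = sum(sum(row) for row in img) / (h * w)
    return [[(v - fmean) if v > fmean else 0 for v in row] for row in img]
```

Wave pixels near the mean are zeroed, while pixels well above the mean (targets, the bright sky region) keep their contrast relative to the mean.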
After the second-stage preprocessing, most of the strong water-wave interference is suppressed. As in the previous step, only the brightness of some water-wave interference pixels is lowered, so accurate extraction of the sky-water line is not affected.
Image iterative threshold segmentation: the whole image of the waterborne scene is regarded as two regions, the water surface below the sky-water line and the sky and land above it. When the MSE of the original image is ≤ K, interference is small, and iterative threshold segmentation applied directly to the image separates the two regions effectively and reliably; when the MSE is > K, most of the interference is removed by preprocessing, after which iterative threshold segmentation again separates the two regions effectively and reliably.
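The iterative threshold segmentation can be read as the classic isodata-style procedure: start from the global mean, then repeatedly set the threshold to the midpoint of the two class means until it stabilises. The stopping tolerance below is an assumption:

```python
def iterative_threshold(img, eps=0.5):
    """Isodata-style iterative threshold selection: initialise with the
    global mean, then iterate t = (mean(low) + mean(high)) / 2 until
    the change falls below eps."""
    pixels = [v for row in img for v in row]
    t = sum(pixels) / len(pixels)
    while True:
        low = [v for v in pixels if v <= t]
        high = [v for v in pixels if v > t]
        if not low or not high:        # degenerate (single-class) image
            return t
        t_new = (sum(low) / len(low) + sum(high) / len(high)) / 2
        if abs(t_new - t) < eps:
            return t_new
        t = t_new
```

For a clearly bimodal scene (dark water, bright sky and land) the threshold settles midway between the two class means, splitting the image into the two regions described above.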
Roberts gradient operator edge detection: the Roberts gradient is computed separately for each pixel of the image.
Binarization: the gradient image is binarized with an edge-threshold strategy. In an image, the non-edge points occupy a certain proportion of the total number of pixels; denote this scale factor Ratio. Image points are accumulated from the lowest gradient level of the gradient-value histogram, and when the accumulated count reaches Ratio of the total pixel count, the corresponding gradient value is taken as the segmentation threshold. Ratio is 0.95.
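The Roberts gradient and the histogram-based edge-threshold binarization can be sketched as follows; the exact Roberts formulation used here (the two cross differences summed as absolute values) is a common variant and an assumption:

```python
def roberts_gradient(img):
    """Roberts cross gradient magnitude per pixel:
    |f(x,y) - f(x+1,y+1)| + |f(x+1,y) - f(x,y+1)|."""
    h, w = len(img), len(img[0])
    g = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            g[y][x] = (abs(img[y][x] - img[y + 1][x + 1])
                       + abs(img[y + 1][x] - img[y][x + 1]))
    return g

def binarize_by_ratio(grad, ratio=0.95):
    """Edge-threshold strategy: accumulate the gradient histogram from
    the low end; once ratio of all pixels is covered, the current
    gradient value is the threshold, and only larger values are edges."""
    flat = sorted(v for row in grad for v in row)
    t = flat[min(len(flat) - 1, int(len(flat) * ratio))]
    return [[1 if v > t else 0 for v in row] for row in grad]
```

On a horizontal step image the gradient is nonzero only along the boundary row, and the ratio threshold keeps exactly those edge pixels.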
Thinning: in the binary image, pixels may be connected into blobs, which degrades the extraction accuracy and real-time performance of the Hough transform, so the image must be thinned. The principle of thinning is to reduce each line to single-pixel width in the vertical direction.
Extracting the waterline by Hough transformation:
A straight line is extracted by the Hough transform, which maps the image space to a parameter space; the basic idea is the duality of points and lines. The transform uses the polar equation of a line, ρ = x cos θ + y sin θ, so that the points on a straight line in image space are represented by sinusoids in parameter space. The parameter space is discretized into an accumulator array; each point (x, y) in the image is mapped to a series of accumulators in parameter space, each of which is incremented by 1. If the image space contains a straight line, a local maximum appears in one of the corresponding accumulators in parameter space. By detecting this local maximum, the parameter pair (ρ, θ) of the line is determined and the line is detected.
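A minimal accumulator-based Hough transform over ρ = x cos θ + y sin θ, returning the best-voted line; the discretization choices (180 angle bins, integer-rounded ρ) are illustrative:

```python
import math

def hough_line(points, n_theta=180):
    """Accumulate rho = x*cos(theta) + y*sin(theta) for each edge point
    over n_theta discretized angles and return the (rho, theta) pair
    with the highest vote count, plus the count itself."""
    acc = {}
    for x, y in points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, i)] = acc.get((rho, i), 0) + 1
    (rho, i), votes = max(acc.items(), key=lambda kv: kv[1])
    return rho, math.pi * i / n_theta, votes
```

For a horizontal edge (a typical sky-water line), the winning cell is θ ≈ π/2 with ρ equal to the line's row coordinate, voted by every point on the line.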
Assessing the reliability of the sky-water line is an uncertainty-reduction problem in which several pieces of evidence must support the same fact, so evidence theory is used to judge the reliability of the extracted line.
(1) Whether the waterline is located in a possible waterline area;
When the system is installed, the sky-water line must lie within a known sub-area. If the extracted line lies in this sub-area, its reliability M1 is 1; otherwise M1 is 0:

M1 = 1, y1 ≤ y ≤ y2
M1 = 0, y < y1 or y > y2

where y1 and y2 are the row coordinates of the midpoints of the highest and lowest possible sky-water lines, and y is the row coordinate of the midpoint of the extracted line.
(2) Obtaining the reliability of the sky-water line through the contrast between the sub-areas above and below it;
since the gray level of the area above the sky-water line is generally higher than that of the area below it, the higher the contrast between the upper and lower areas, the higher the reliability of the sky-water line. It is calculated by the formula:
M2 = [ Σ_{h1<x<h2} Σ_{w1<y<w2} f(x, y) − Σ_{h1+Δh<x<h2+Δh} Σ_{w1<y<w2} f(x, y) ] / Σ_{h1<x<h2} Σ_{w1<y<w2} f(x, y)
(3) Confidence M3 of the evidence associating the middle pixel points of the sky-water lines extracted from consecutive frames:
M3 = 1 / |y(t) − y(t−1)|, and M3 = 1 when y(t) = y(t−1)
where y(t) and y(t−1) are the row coordinates of the middle pixel points of the sky-water lines extracted from the current frame and the previous frame; the larger the difference between them, the lower the reliability of the sky-water line.
(4) Obtaining the composite confidence M of the sky-water line using the Dempster-Shafer (D-S) combination rule: M = M1·M2·M3.
Let Mt be a properly selected threshold. When M > Mt, the sky-water line extracted from this frame is considered correct; otherwise, the above criterion cannot confirm its correctness. In that case, the position of the sky-water line of the previous frame and the contrast between its upper and lower areas are saved, and identification continues with the next frame.
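As an illustrative sketch of how the three evidence values combine (the function name, the example pixel sums, and the threshold value are hypothetical; the patent only specifies M = M1·M2·M3 and a comparison against a chosen threshold Mt):

```python
def waterline_confidence(y, y1, y2, upper_sum, lower_sum, y_prev):
    """Combine the three evidence values into M = M1 * M2 * M3."""
    m1 = 1.0 if y1 <= y <= y2 else 0.0                  # (1) line inside the feasible band
    m2 = (upper_sum - lower_sum) / upper_sum            # (2) contrast of region above vs. below
    m3 = 1.0 if y == y_prev else 1.0 / abs(y - y_prev)  # (3) inter-frame consistency
    return m1 * m2 * m3

# hypothetical numbers: line at row 120, feasible band [100, 150], previous frame row 121
M = waterline_confidence(y=120, y1=100, y2=150,
                         upper_sum=4000.0, lower_sum=1000.0, y_prev=121)
M_t = 0.5            # an assumed threshold
accepted = M > M_t   # this frame's sky-water line is kept only if M exceeds the threshold
```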
After the sky waterline is correctly detected, an image area in a certain range above and below the sky waterline is used as a region of interest (ROI).
The image quality of the ROI is then evaluated; when the evaluation result is 1, an infrared target detection algorithm based on a single-frame image is adopted; when the evaluation result is 0, an infrared target detection algorithm based on an image sequence is adopted;
in the wavelet analysis method, two-dimensional wavelet transforms are used to realize frequency selection and multi-scale decomposition, suppressing background noise and enhancing the target: the low-frequency and high-frequency parts of the original image are separated, multi-resolution analysis is then performed on each low-frequency and high-frequency component, target features are extracted, and target detection is carried out;
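As one concrete example of this frequency separation, a single level of the 2-D Haar transform (the patent does not name a wavelet basis, so Haar is assumed for simplicity) splits an image into a low-frequency approximation band and three high-frequency detail bands:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar transform: LL approximation plus
    horizontal (LH), vertical (HL) and diagonal (HH) detail bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

# a flat background with one bright 2x2 "target": the target survives in the LL band
img = np.full((8, 8), 10.0)
img[4:6, 4:6] = 50.0
LL, LH, HL, HH = haar_dwt2(img)
```

The bright patch aligns with a 2x2 block here, so it appears intact in LL while the detail bands stay zero; in real imagery the detail bands carry edges and fine texture used for feature extraction.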
in the fractal method, exploiting the fact that the fractal features of artificial targets such as ships and piers vary sharply with scale compared with the natural background, a multiscale fractal feature image is extracted from the ROI sub-image after image enhancement by a fuzzy filtering method; finally, target detection is performed on the multi-scale fractal features using a probability relaxation method. The fractal model is a mathematical model suited to describing objects with complex, irregular shapes; its basic principle is to exploit the difference between natural scenes and artificial targets in fractal dimension. Generally, the fractal dimension is a linear estimate computed from the logarithms of the measurement value and the measurement scale at each scale; that is, the fractal dimension is assumed constant over each scale range, which matches an ideal fractal model. However, most natural scenes exhibit fractal features only approximately and only within a certain scale range, and real images are affected by imaging noise, quantization errors and so on, so natural scenes often cannot be described by a standard fractal dimension, and distinguishing natural background from artificial targets via the standard fractal dimension does not achieve the ideal effect. Target detection can instead exploit the fractal feature difference between target and background without accurately computing fractal parameters: the background and the target vary differently with scale, so target detection is performed using this difference in multi-scale variation.
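The underlying scale-dependence idea can be illustrated with plain box counting (the patent's actual pipeline uses fuzzy filtering and probability relaxation; this toy only shows that an area-like artificial region and a line-like structure yield different log-log slopes across scales):

```python
import math

def box_count(points, scale):
    """Number of scale x scale boxes needed to cover the point set."""
    return len({(x // scale, y // scale) for x, y in points})

def fractal_dimension(points, scales=(1, 2, 4)):
    """Least-squares slope of log N(s) versus log(1/s) over the given scales."""
    xs = [math.log(1.0 / s) for s in scales]
    ys = [math.log(box_count(points, s)) for s in scales]
    n = len(scales)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# an area-like region (dimension ~2) versus a straight edge (dimension ~1)
square = [(x, y) for x in range(16) for y in range(16)]
edge = [(x, 0) for x in range(16)]
d_square = fractal_dimension(square)
d_edge = fractal_dimension(edge)
```

When the log-log relation is not linear across scales, as the text notes for real scenes, the per-scale deviations themselves become the multiscale feature rather than a single dimension value.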
In the mathematical morphology method, median filtering is first performed on the ROI sub-image, and the pixel with the maximum brightness value in the filtered image is taken as the marker image; top-hat transformation is performed on the original image, and morphological reconstruction is carried out with the image after iterative threshold segmentation as the mask image, realizing infrared ship target detection. Morphological reconstruction: the idea of reconstruction is to approximate the mask image by repeatedly dilating the marker image, thereby restoring part or all of the mask image depending on the choice of the marker image. Its characteristic is that the region of interest in the mask image can be extracted through the selection of the marker image.
The reconstruction of the mask image g from the marker image f is defined by the following iterative procedure:
1. Initialize h1 to the marker image f;
2. Create a 3x3 structuring element B with every element equal to 1;
3. Repeat h_{k+1} = (h_k ⊕ B) ∩ g until h_{k+1} = h_k.
Note: the marker f must be a subset of g, and the reconstructed image is a subset of the mask image g.
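The iteration h_{k+1} = (h_k ⊕ B) ∩ g can be sketched directly on small binary arrays (a toy pure-Python version; a real implementation would use optimized morphology routines):

```python
def dilate(img):
    """Binary dilation with a 3x3 all-ones structuring element B."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if any(img[i + di][j + dj]
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)
                   if 0 <= i + di < h and 0 <= j + dj < w):
                out[i][j] = 1
    return out

def reconstruct(marker, mask):
    """Iterate h_{k+1} = (h_k dilated by B) AND mask until stable."""
    h = marker
    while True:
        nxt = [[d & m for d, m in zip(dr, mr)]
               for dr, mr in zip(dilate(h), mask)]
        if nxt == h:
            return h
        h = nxt

# the mask has two components; the marker touches only the left one,
# so reconstruction recovers only that component
mask = [[1, 1, 0, 1, 1],
        [1, 1, 0, 1, 1],
        [0, 0, 0, 0, 0]]
marker = [[1, 0, 0, 0, 0],
          [0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0]]
rec = reconstruct(marker, mask)
```

This matches the note above: the marker is a subset of the mask, and the result is the subset of the mask reachable from the marker.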
Finally, evidence reasoning is used to combine the target detection results from the different methods: the evidence is synthesized with the Dempster evidence combination rule to obtain an infrared ship target detection result with a high confidence level. Evidence theory can adopt a belief function instead of probability as its measure, requiring no prior probability or conditional probability density; it describes uncertain information with interval estimates instead of point estimates, and shows great flexibility in distinguishing the unknown from the uncertain and in accurately reflecting the gathered evidence.
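A minimal sketch of Dempster's combination rule over the two-hypothesis frame {target, background} (the detector names and mass values are invented for illustration):

```python
T = frozenset({"target"})
B = frozenset({"background"})
THETA = T | B   # the whole frame: "unknown"

def dempster_combine(m1, m2):
    """Dempster's rule: m(A) is proportional to the sum of m1(B)*m2(C)
    over all pairs with B intersect C = A; conflict mass is renormalized away."""
    combined, conflict = {}, 0.0
    for b, pb in m1.items():
        for c, pc in m2.items():
            a = b & c
            if a:
                combined[a] = combined.get(a, 0.0) + pb * pc
            else:
                conflict += pb * pc       # mass falling on the empty set
    k = 1.0 - conflict                    # normalization constant
    return {a: v / k for a, v in combined.items()}

# two detectors each weakly favor "target"; combining them strengthens the belief
m_wavelet = {T: 0.6, THETA: 0.4}
m_fractal = {T: 0.5, THETA: 0.5}
m = dempster_combine(m_wavelet, m_fractal)
```

Here two independent 0.6 and 0.5 beliefs combine to 0.8, which is the sense in which the fused detection result gains confidence over any single method.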
The image processing and analysis in step b further comprises the steps of:
intelligent tracking of multiple maneuvering infrared ship targets is realized by applying artificial neural network and fuzzy inference techniques. Compared with a traditional stochastic adaptive system, an artificial neural network acquires strong judgment and recognition capability through learning and training and can independently find solutions to problems; used for multi-maneuvering-target tracking, it gives the system good adaptability, self-organizing learning, association and fault-tolerance capabilities, so that target detection, parameter estimation, target feature extraction and recognition, system modeling and the like can still be performed under uncertain data and environments, especially under heavy noise and interference. Fuzzy inference is a reasoning process that imitates human thinking about real things; in multi-target tracking, situations such as targets that "may or may not be associated" or that "may or may not belong to a class" often arise. Fuzzy inference is used to perform threat assessment on the tracked targets, judging target types and speeds and thereby inferring their threat levels, so that collision-avoidance schemes can be accurately provided to ship operators, giving a favorable guarantee of safety.
Traditional methods find it difficult to meet the engineering requirements; fuzzy inference simplifies these problems so as to meet the real-time requirement of target tracking.
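A toy fuzzy-inference sketch of threat assessment from distance and closing speed (the rule base, the membership shapes, and the 500 m / 10 m/s breakpoints are all assumptions; the patent does not publish its rule base):

```python
def threat_level(distance_m, closing_speed_ms):
    """Two illustrative rules:
       IF distance is NEAR AND closing speed is FAST THEN threat is HIGH
       IF distance is FAR  OR  closing speed is SLOW THEN threat is LOW"""
    near = max(0.0, 1.0 - distance_m / 500.0)            # NEAR: 1 at 0 m, 0 beyond 500 m
    fast = min(1.0, max(0.0, closing_speed_ms / 10.0))   # FAST: 1 at 10 m/s and above
    high = min(near, fast)                               # fuzzy AND = min
    low = max(1.0 - near, 1.0 - fast)                    # fuzzy OR of FAR, SLOW = max
    # defuzzify as the relative strength of the HIGH rule (an assumed 0..1 scale)
    return high / (high + low) if (high + low) > 0 else 0.0

t_close = threat_level(100.0, 8.0)   # a near, fast-closing target
t_far = threat_level(450.0, 1.0)     # a distant, slow target
```

The point is only that graded memberships replace hard thresholds, so a target that "may or may not" be threatening receives an intermediate score rather than a binary label.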
The image processing and analysis in step b further comprises the steps of:
adaptive and learning capabilities are added to a classical pattern recognition algorithm, and the knowledge base of artificial intelligence technology is fused, using the context between successive images, to recognize infrared ship targets. Target features are extracted within the segmented region: position features, shape features, size features, radiation features and features extracted by wavelet analysis, which are fed to the input of an RBF neural network to recognize the infrared ship target. The RBF neural network designed by the invention has a three-layer structure: the input layer takes 8 input parameters, the hidden layer contains 12 nodes, and the output layer outputs the type of the target.
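The forward pass of such an RBF network can be sketched as follows (the weights and centers are random and untrained here, and the 3 output classes are an assumption; the patent specifies only 8 inputs, 12 hidden nodes, and a type output):

```python
import math
import random

def rbf_forward(x, centers, widths, weights):
    """RBF forward pass: Gaussian hidden units, linear output layer."""
    h = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * s ** 2))
         for c, s in zip(centers, widths)]
    return [sum(w * hj for w, hj in zip(row, h)) for row in weights]

random.seed(0)
n_in, n_hidden, n_out = 8, 12, 3   # 8 features, 12 hidden nodes, 3 assumed target classes
centers = [[random.random() for _ in range(n_in)] for _ in range(n_hidden)]
widths = [1.0] * n_hidden
weights = [[random.random() for _ in range(n_hidden)] for _ in range(n_out)]

scores = rbf_forward([0.5] * n_in, centers, widths, weights)
predicted_class = scores.index(max(scores))
```

In practice the centers would come from clustering the training features and the output weights from least-squares fitting; the argmax over output scores gives the recognized target type.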
In step c, according to the information input in step a, combined with the distances, azimuth angles, speeds and accelerations obtained in step b between the installation position of the system and the surrounding target ships, bridges, piers and reefs, a fuzzy expert system method is adopted to synthesize the early warning decision elements and their degrees of influence into mutually independent, orthogonal principal components; the ship collision-avoidance risk degree is finally determined, and the active early warning decision is made on that basis. If the system is installed on a ship, the model, size, control performance parameters, load, heading and real-time speed of the ship obtained in step a are likewise combined with the distances, azimuth angles, speeds and accelerations obtained in step b between the installation position and the surrounding target ships, bridges, piers and reefs, and the same fuzzy expert synthesis is applied to determine the collision-avoidance risk degree and make the active early warning decision.
The active safety early warning system is a complex system consisting of three elements: human, ship and environment. Factors affecting these three elements therefore have an important influence on the active safety early warning decision. On the basis of the collision-avoidance information acquired by the ship's navigation equipment, the ship collision-avoidance risk degree is determined with a fuzzy expert system method, and the active early warning decision is made on that basis.
During the running of the ship, the early warning decision elements obtained from the sensors are x1, x2, x3, ..., xn. Each element influences the early warning decision to a different degree, and the elements also influence each other.
Let the degree of influence of the i-th element on the early warning decision be wi (1 ≤ i ≤ n), and let the n extracted samples be X: x1^t, x2^t, ..., xn^t (1 ≤ t ≤ n). The n characteristic indexes are now synthesized into n mutually orthogonal, independent principal components y1, y2, ..., yn, written in matrix form as
Y=C·X (2)
Wherein
    | C11 ... C1n |        | y1 |
C = | ........... |,   Y = | .. |
    | Cn1 ... Cnn |        | yn |
A fuzzy expert system can process uncertain data and propositions (values may be taken anywhere in [0, 1]) and uses fuzzy sets, fuzzy numbers, fuzzy relations and other fuzzy techniques to represent and process the uncertainty and inaccuracy of knowledge. Because there are very many uncertain factors during ship running, adopting a fuzzy expert system for the active safety early warning decision enhances the practicality of the system.
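One standard way to obtain mutually orthogonal, independent components Y = C·X is eigen-decomposition of the sample covariance, i.e. classical principal component analysis (shown here as an illustration, since the patent does not spell out how C is computed):

```python
import numpy as np

def principal_components(X):
    """Rows of C are eigenvectors of the sample covariance of X;
    the components of Y = C @ X_centered are mutually uncorrelated."""
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / (X.shape[1] - 1)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]        # strongest component first
    C = vecs[:, order].T
    return C, C @ Xc

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 200))             # 3 decision elements, 200 samples
X[1] += 2.0 * X[0]                        # make the inputs correlated
C, Y = principal_components(X)
cov_Y = np.cov(Y)                         # should be (numerically) diagonal
```

The correlated sensor elements x1, ..., xn thus become orthogonal components y1, ..., yn, and the leading components carry most of the decision-relevant variance.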
The active safety early warning system consists of three main parts: information collection, data processing, and decision making.
Example 2: the system is mounted on a ship
The intelligent all-weather active safety early warning system for ship driving is fixedly installed in front of the ship's cab. It detects and identifies the target ships, bridges, piers and reefs around the ship on which it is installed, tracks and positions them, and monitors them in real time. On the basis of the obtained collision-avoidance information, the computer automatically judges whether the surrounding target ships, bridges, piers and reefs pose a threat to the ship carrying the system, determines the ship collision-avoidance risk degree with a fuzzy expert system method, and makes an active early warning decision on that basis.
Of course, depending on the site conditions, the system may instead be fixedly installed at the front of the ship's top deck, or above the cantilever arm of a roll-on/roll-off ship. When surrounding target ships, bridges, piers or reefs threaten the ship equipped with the system, the system automatically alarms through a combination of humanized modes such as video, sound, light and electricity, warning the ship's drivers so as to avoid accidents. This greatly improves the operators' perception of the surrounding navigation environment, improves the ship's active safety guarantee under poor visibility, assists operators in decision-making, reduces operating errors, greatly improves the success rate of active collision avoidance, protects the safety of life and property, reduces or avoids serious water pollution and environmental accidents, and ensures the safety of navigation and transport.
Mounting system
The installation should ensure that the field of view formed as the pan-tilt 112 drives the thermal infrared imager 111 completely covers the key areas and waters to be monitored without occlusion; the vertical distance between the installation position and the horizontal plane should be more than 5 meters, and the imaging of the fully covered key areas and waters should occupy more than half of the whole image area.
The system part is as follows:
referring to fig. 1 and 2, the intelligent all-weather active safety early warning system for ship driving is composed of a target sampling unit 1, a system central processing unit 2, a display unit 3, a single chip microcomputer 4 and a safety early warning unit 5;
the target sampling unit 1 consists of an outdoor intelligent high-speed holder 112 with a decoder and a thermal infrared imager 111 with a video acquisition card 113, the thermal infrared imager 111 is fixed on the outdoor intelligent high-speed holder 112, the outdoor intelligent high-speed holder 112 is connected with the system central processing unit 2 through the decoder, and the thermal infrared imager 111 is connected with the system central processing unit 2 through the video acquisition card 113;
the central processing system 2 comprises a keyboard 26, an infrared video image digital signal processing unit 21 and a system decision processing unit 25;
wherein,
the keyboard 26 is used for inputting information and control instructions to the system decision processing unit 25;
the infrared video image digital signal processing unit 21 is used for receiving the digital image signals of the target sampling unit 1, calculating the position, the azimuth angle, the speed and the acceleration of the target by utilizing a processing and analyzing device in a memory according to the digital image signals, and sending the calculation result to the system decision processing unit 25 through a serial communication interface for use;
the system decision processing unit 25 is used for performing active safety early warning decision analysis and calculation on the information from the digital signal processing unit in combination with the information input by the keyboard 26 information input unit to obtain a final avoidance scheme, and then performing active safety early warning simultaneously through the display unit 3 and the warning unit 5;
the display unit 3 is used for displaying images marked on targets around the installation position of the system, azimuth angles, speeds and accelerations between the targets and the installation position of the system, and displaying recommended avoidance schemes and danger levels;
the singlechip 4 is used for receiving the decision result of the system decision processing unit 25 and controlling the alarm mode of the alarm unit 5;
and the alarm unit 5 is used for receiving the control signal of the singlechip 4 and using sound and light modes with different frequencies to express the prompt information of the danger level and the danger direction.
The system adopts an integrated design and realizes real-time detection of the distance, direction, speed and acceleration between the system installation position and the surrounding moving ship, bridge, pier and reef targets. Through the intelligent analysis and processing of the central processing unit, the danger early warning decision is made automatically, and alarms are raised automatically in humanized forms such as intuitive video, text, sound, light and electricity, finally achieving all-weather active safety early warning for ships.
The infrared video image digital signal processing unit 21 is 2 floating point digital signal processors TMS320C6713, and 2 TMS320C6713 are connected with the video acquisition card 113 of the thermal infrared imager 111; the floating-point digital signal processor TMS320C6713 includes an image memory, a programmable logic device, a program memory and a controller, wherein the program memory is provided with a processing and analyzing device. The TMS320C6713 has the characteristics of small volume, low cost, high stability, good real-time performance and the like, can simultaneously realize 4-path video acquisition and processing, and simultaneously, the TMS320C6713 has extremely strong processing capability, high flexibility and programmability and can well meet the requirements of target detection, tracking and identification algorithms.
A large-field-of-view image registration and stitching device is arranged in a memory in the infrared video image digital signal processing unit 21; it automatically stitches the infrared sequence images into a panoramic view and sends the result to the system decision processing unit for storage. The device realizes automatic panoramic stitching of sequence images by estimating inter-frame transformation parameters with a global motion estimation method. During motion parameter estimation, pyramid-layered block-matching motion vector estimation is adopted, which effectively improves the running speed of the program, and an elimination operation for abnormal blocks is added, which greatly improves the precision of the global motion estimation. In addition, when the light intensity differs between images, a light-difference balancing operation is adopted, achieving a better stitching effect. The method can accurately and quickly generate the large-field-of-view image of the video sequence, enlarging the field of view of the video monitoring. The automatically stitched panoramic view is sent to the system decision processing unit 25 for storage, where it can be used for post-event analysis and evaluation and for accident cause analysis.
The system decision processing unit 25 is a PC computer. The common PC computer has the advantages of low price, strong function and convenient interface, can meet the requirement of decision calculation of the system, is convenient to be connected with each part of the system, and can fully store the infrared video image collected by the target sampling unit and the panoramic view automatically spliced by the image registration and splicing device due to the large capacity of the hard disk of the computer.
The display unit 3 consists of 2 displays, display 31 and display 32. Display 31 shows images of the surrounding marked targets together with related text information: the distance, azimuth angle, speed and acceleration between the installation position of the intelligent all-weather active safety early warning system and the surrounding ship, bridge, pier and reef targets. Display 32 shows a textual description of the recommended avoidance scheme: whether to yield by changing speed and/or by steering, the speed, the direction, the danger level, the danger direction and the alarm mode. With this two-display arrangement, the obtained real-time panoramic image is used to effectively detect, identify and track meeting ships (multiple targets), bridges (piers), water-surface reefs and other targets in the area ahead of the running ship. The water surface area of the stitched panoramic image and the detected targets are colored to render and mark the targets, and both the stitched panoramic image and the color-rendered, target-marked image are shown on display 31; the marked images let the user operate the system intuitively, while the text description makes it convenient for the user to evaluate the system's performance and meets different user requirements.
The alarm unit 5 is a singlechip 4 connected with the serial communication interface of the system decision processing unit 25 and used for controlling an external loudspeaker and/or an alarm lamp. The method comprehensively uses video, sound, light, electricity and other modes which accord with humanized characteristics to automatically alarm, improves the sensing capability of ship operating personnel on the navigation environment, and assists collision avoidance decisions, thereby improving the success rate of ship collision avoidance.
The target sampling unit 1 is also provided with a radar 121. A visible light image sensor 122 with a video capture card 123 is arranged in front of the screen of the radar 121; the visible light image sensor 122 is connected through the video capture card 123 to a visible light video image digital signal processing unit 22 in the system central processing unit 2, and the visible light video image digital signal processing unit 22 is 2 floating-point digital signal processors TMS320C6713. The TMS320C6713 comprises an image memory, a programmable logic unit, a program memory and a controller, and a processing and analyzing device is arranged in the program memory. The system central processing unit 2 is further provided with an information fusion unit 23, which fuses the information from the infrared video image digital signal processing unit 21 and the visible light video image digital signal processing unit 22 to obtain the distance, azimuth angle, speed and acceleration of the final target, marks the target in the image, and sends the calculation result to the system decision processing unit 25 through the serial communication interface; the information fusion unit 23 is a high-speed real-time digital signal processor ADSP21060. In this way the original ship radar navigation system is not damaged, the use of the radar 121 is not affected, and no electric circuit is changed; only the visible light sensor is added to the original radar 121, which brings great convenience to acquiring multi-target information from the radar 121.
The ADSP21060 integrates a 4 Mbit dual-port static memory and has a dedicated peripheral I/O bus, effectively integrating the main functions of a digital signal processing system on one chip, making a single-chip system easy to form and reducing the circuit board size. The on-chip high-speed instruction cache allows instructions to execute in a pipeline, ensuring each instruction completes in a single cycle (25 ns); the peak floating-point rate reaches 120 MFLOPS (million floating-point operations per second), and the sustained floating-point rate is 80 MFLOPS. The peripheral I/O bus controller provides six sets of high-speed links and two sets of synchronous serial ports, and many ADSP21060 chips can form a loosely coupled parallel processing system through the links. In addition, the processor contains 3 interrupt pins, 4 flag pins and MS0-MS3 chip-select pins, so its interface to other peripherals is simple. In short, the huge address space, powerful addressing modes, 48-bit very long instruction word and floating-point capability of the ADSP21060 fully meet the real-time requirements of the fusion device's software algorithms.
The display unit 3 also displays images of a plurality of marked targets around the ship; and related text information: the model and the size of the ship, the control performance parameters of the ship, the load, the azimuth of the ship, the real-time ship speed of the ship, the distance between the ship and a plurality of surrounding ships, bridges, piers and reef targets, the azimuth, the speed and the acceleration.
The method comprises the following steps:
referring to fig. 1, 2, 3 and 4, the early warning method of the intelligent all-weather active safety early warning system for ship driving comprises the following steps:
a. starting system
Starting the intelligent all-weather active safety early warning system for ship running, and inputting information and control instructions by a keyboard 26; the keyboard 26 inputs the ship model, the ship size, the ship control performance parameters and the load information of the ship, and simultaneously sends the ship direction and the real-time ship speed information in the ship-borne GPS24 to the system decision processing unit 25 through the serial communication interface;
b. target sampling and digital signal processing
The infrared video image target sampling and digital signal processing unit works as follows. The thermal infrared imager 111 is driven by the pan-tilt 112 to scan the surroundings at the specified time interval and angle step, so that the thermal infrared imager 111 photographs the surrounding water surface environment. The recorded image is converted into digital signals by the video acquisition card 113 and sent to the programmable logic device in the infrared video image digital signal processing unit 21 for timing conversion and bus control; the control line signals and the image information are sent to the image memory in the digital signal processing unit for storage. After obtaining the information, the image memory sends a confirmation to the programmable logic device in the digital signal processing unit. The processing and analyzing device in the program memory of the digital signal processing unit extracts the image from the image memory for processing and analysis, and the result is sent to the controller of the digital signal processing unit. After the controller obtains the information, a confirmation signal is sent to the video acquisition card 113, and the finally determined azimuth angle, speed and acceleration information between the installation position of the system and the surrounding targets (ships, bridges, piers and reefs) is sent to the system decision unit 25 through the serial communication interface;
c. the system decision unit 25 obtains the information from a, and in combination with the information from b, the system decision unit 25 performs active safety early warning decision analysis and calculation to obtain a final avoidance scheme, and then performs active safety early warning simultaneously through the display 3 and the acousto-optic warning terminal 5;
wherein:
the display 3 displays images of a plurality of marked objects around the installation site of the system; and related text information: the distance, azimuth angle, speed and acceleration between the installation position of the system and a plurality of target ships, bridges, piers and reefs around the system are calculated; if the system is installed on a ship, the model and the size of the ship, the control performance parameters, the load, the azimuth of the ship, the real-time ship speed of the ship and the text description of a recommended avoidance scheme are also displayed: comprises adopting variable speed yielding and/or steering yielding, speed and direction;
Intelligent collision-avoidance active safety early warning is carried out according to the above information. If danger exists, the danger level is judged; the acousto-optic alarm terminal 5 expresses the danger level and danger direction with sound and light of different frequencies, and the single chip microcomputer 4 connected to the serial communication interface of the system decision processing unit 25 controls the external loudspeaker or alarm lamp to give the alarm.
And step b, image registration and stitching are further included, the sequence images are automatically stitched into a panoramic view, and the automatically stitched panoramic view is sent to the system decision processing unit 25 for storage. The large-field-of-view image registration and splicing device realizes automatic splicing of panoramic views of sequence images by estimating interframe transformation parameters by using a global motion estimation method. During motion parameter estimation, pyramid layered block matching motion vector estimation is adopted, the running speed of a program is effectively improved, and elimination operation on abnormal blocks is added, so that the precision of global motion estimation is greatly improved. The method can accurately and quickly generate the large-view-field image of the video sequence, so that the view field range of video monitoring is enlarged. The automatically spliced panoramic view is sent to the system decision processing unit 25 for storage, and can be used for post analysis and evaluation and accident reason analysis.
The image processing and analysis in step b are carried out according to the following steps:
firstly, infrared image preprocessing is performed, including image denoising, image enhancement and sharpening, image correction and motion background correction. An infrared image is the superposition of a real scene image, imaging noise and imaging interference. Let f(x, y) denote the infrared image acquired by the imaging system; the infrared scene image containing the target can then be expressed as:
f(x, y) = fT(x, y) + fB(x, y) + n(x, y) + n1(x, y)
where fT(x, y) is the target gray value, fB(x, y) is the background image, n(x, y) is the imaging noise, and n1(x, y) is the imaging interference. The background image fB(x, y) typically has a long correlation length and occupies the low-frequency information in the spatial frequency of the scene image f(x, y). At the same time, because of the non-uniformity of the thermal distribution inside the scene and the sensor, fB(x, y) is a non-stationary process whose local gray values may vary significantly; in addition, fB(x, y) also contains high-frequency components in part of the spatial frequency domain, mainly distributed at the edges of the homogeneous regions of the background image.
The imaging noise n(x, y) is introduced during imaging and consists of small disturbances superimposed at random positions of the infrared image. The imaging interference n1(x, y) results from the non-uniform response, or failure, of the infrared imaging photosensor, which forms erroneous image data points at random or fixed locations. The interference n1(x, y) therefore appears in the image as isolated points whose pixel gray values are much larger or smaller than the median of their surrounding neighborhood. The purpose of denoising the infrared image is to estimate the real scene image from the acquired image f.
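A minimal sketch of removing such isolated interference points (the function name and the threshold of 50 gray levels are assumptions; the patent does not prescribe this exact filter): pixels whose gray value differs strongly from the median of their eight neighbors are treated as interference and replaced by that median:

```python
import numpy as np

def remove_isolated_points(img, thresh=50):
    """Replace pixels that differ from the median of their 8-neighbourhood
    by more than `thresh` (isolated imaging-interference points)."""
    out = img.copy().astype(int)
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nb = img[y-1:y+2, x-1:x+2].astype(int).ravel()
            med = int(np.median(np.delete(nb, 4)))  # median of the 8 neighbours
            if abs(int(img[y, x]) - med) > thresh:
                out[y, x] = med
    return out.astype(img.dtype)
```

Genuine scene structure, which varies smoothly across a neighborhood, is left untouched; only single-pixel outliers are altered.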
Motion background correction is needed because the image sensor is sometimes mounted on a moving platform, and even on a stationary platform a disturbance may cause sensor jitter, which produces background jitter. This is handled by a global motion parameter estimation technique: the motion parameters of the background are first estimated and then used to correct the background, registering the multi-frame images in the same coordinate system.
The image region is then segmented directly, the characteristics of bridges, piers and reefs are extracted, and the targets are effectively tracked and identified;
two different algorithm combinations are adopted to extract the sky-water line. The first combination comprises, in sequence: image iterative threshold segmentation, Roberts gradient operator edge detection, thinning, and Hough-transform extraction of a first sky-water line. The second combination comprises, in sequence: Roberts gradient operator edge detection, binarization, thinning, and Hough-transform extraction of a second sky-water line. Of the two candidate sky-water lines, the one closer to the lower end of the image is taken as the final extraction; its credibility is then judged, and the image area within a certain range above and below it is taken as the region of interest (ROI);
based on the positional relation between the sky-water line and the ship target, the ship target image is generally located near the sky-water line; this is determined by mid-range plane imaging. The target will not lie in the sky area completely off the line, nor in other areas such as land or canyons. Therefore, once the sky-water line is correctly detected, the image area within a certain range above and below it is taken as the region of interest (ROI). Subsequent detection, tracking and identification of the infrared ship target then operate on a greatly reduced image range and largely avoid interference from high-radiation areas such as clouds, water waves, land or canyons, so the computational load of the various algorithms is greatly reduced and their real-time requirement is guaranteed.
The extraction process of the sky-water line adopted by the invention is as follows. First, the image quality of the original image is evaluated, and the evaluation result determines whether preprocessing is needed; then the two different algorithm combinations are applied simultaneously. The first combination comprises, in sequence: image iterative threshold segmentation, Roberts gradient operator edge detection, thinning, and Hough-transform extraction of a first line; the second comprises, in sequence: Roberts gradient operator edge detection, binarization, thinning, and Hough-transform extraction of a second sky-water line. After both extractions are finished, the line closer to the lower end of the image is taken as the final sky-water line.
Image quality evaluation: the mean square error (MSE) is used to evaluate the image quality and decide whether to preprocess the image:
if MSE > K then image preprocessing is performed;
if MSE ≤ K then no image preprocessing is performed.
K is taken as 25.
First-stage image preprocessing: in the sky and ground area above the sky-water line in the infrared image, targets such as bridges, ground buildings and continuous rocks generally occupy the highest gray levels of the image, and the gradient between them and their surroundings is often greater than the gradient on both sides of the sky-water line; especially when these targets form a nearly horizontal continuous linear distribution in the image, existing algorithms fail to extract the sky-water line correctly. The following method eliminates such high-brightness target interference. Let f(x, y) be the gray value of a pixel, M×N the total number of pixels, and R the proportion of high-brightness pixels to be eliminated; let fM×N×R(x, y) denote the gray value of the (M×N×R)-th pixel when the gray values are sorted from high to low, let r be the lowest gray value among the eight neighbors of pixel (x, y), and let g(x, y) be the gray value of the pixel after preprocessing; then
if f(x,y)≥fM×N×R(x,y)then g(x,y)=r
if f(x,y)<fM×N×R(x,y)then g(x,y)=f(x,y)
In this step only the brightness of some high-brightness pixels is reduced, so accurate extraction of the sky-water line is not affected. R is taken as 0.05.
Second-stage image preprocessing: in the water surface area below the sky-water line in the infrared image, strong water-wave interference is the other main factor preventing extraction of the line. The strong water-wave interference occupies many pixels of the water surface area, and its gray values are distributed near the mean of the whole image, so it is removed as follows: compute the image mean fmean, and let h(x, y) be the gray value of the pixel after the second-stage preprocessing; then
if f(x,y)>fmean then h(x,y)=f(x,y)-fmean
if f(x,y)≤fmean then h(x,y)=0
After the second-stage preprocessing, most of the strong water-wave interference is suppressed. As in the first stage, only the brightness of some interference pixels is reduced, so accurate extraction of the sky-water line is not affected.
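The two preprocessing stages above can be sketched as follows (hypothetical function names; handling of the image border is simplified to leaving border pixels unchanged):

```python
import numpy as np

def suppress_bright_targets(img, R=0.05):
    """Stage 1: pixels at or above the gray value of the (M*N*R)-th
    brightest pixel are set to the minimum r of their 8 neighbours."""
    f = img.astype(int)
    k = max(1, int(f.size * R))
    thresh = np.sort(f.ravel())[-k]        # value of the (M*N*R)-th brightest pixel
    out = f.copy()
    h, w = f.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if f[y, x] >= thresh:
                nb = np.delete(f[y-1:y+2, x-1:x+2].ravel(), 4)
                out[y, x] = nb.min()       # r: lowest of the eight neighbours
    return out

def suppress_water_waves(img):
    """Stage 2: subtract the image mean; pixels at or below it become 0."""
    f = img.astype(int)
    fmean = int(f.mean())
    return np.where(f > fmean, f - fmean, 0)
```

Both stages only lower brightness values, consistent with the statement that accurate extraction of the sky-water line is unaffected.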
Image iterative threshold segmentation: the whole image in a water-borne environment is regarded as composed of two regions, the water surface region below the sky-water line and the sky and ground region above it. When the MSE of the original image is no greater than K, the interference is small, and iterative threshold segmentation applied directly to the image yields an effective and reliable segmentation of the two regions; when the MSE is greater than K, most of the interference is first removed by the preprocessing above, after which iterative threshold segmentation again yields an effective and reliable segmentation.
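A minimal sketch of the iterative threshold segmentation referred to above (the classic isodata iteration; the stopping tolerance is an assumption):

```python
import numpy as np

def iterative_threshold(img, eps=0.5):
    """Iterative (isodata) threshold: start from the global mean, then
    repeatedly set T to the midpoint of the two class means until stable."""
    f = img.astype(float)
    T = f.mean()
    while True:
        low, high = f[f <= T], f[f > T]
        if low.size == 0 or high.size == 0:
            return T
        T_new = (low.mean() + high.mean()) / 2.0
        if abs(T_new - T) < eps:
            return T_new
        T = T_new
```

For the two-region water/sky scene described above, the converged T falls between the two region means, separating the water surface from the sky and ground area.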
Roberts gradient operator edge detection: the Roberts gradient is separately determined for each pixel point in the image.
Binarization: the gradient image is binarized with an edge-threshold strategy. In an image, non-edge points occupy a certain proportion of the total number of pixels, denoted Ratio. Image points are accumulated from the low-gradient grades of the gradient-value histogram; when the accumulated count reaches Ratio of the total number of pixels, the corresponding gradient value is the segmentation threshold. Ratio is taken as 0.95.
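The Roberts gradient and the Ratio-based edge threshold can be sketched as follows (hypothetical names; the Roberts operator is written in its common cross-difference form):

```python
import numpy as np

def roberts_gradient(img):
    """Roberts cross gradient: |f(x,y)-f(x+1,y+1)| + |f(x+1,y)-f(x,y+1)|."""
    f = img.astype(int)
    g = np.zeros_like(f)
    g[:-1, :-1] = (np.abs(f[:-1, :-1] - f[1:, 1:]) +
                   np.abs(f[1:, :-1] - f[:-1, 1:]))
    return g

def binarize_by_ratio(grad, ratio=0.95):
    """Edge threshold: accumulate the gradient histogram from low values;
    the value reached at `ratio` of all pixels is the threshold."""
    thresh = np.sort(grad.ravel())[int(grad.size * ratio)]
    return (grad > thresh).astype(np.uint8)
```

The appropriate ratio depends on how many pixels are true edges; for a real infrared frame the text's value of 0.95 is used.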
Thinning: in the binary image, pixels may be connected into blocks, which affects the extraction accuracy and real-time performance of the Hough transform, so the image is thinned. The principle of thinning is to reduce each line segment to a width of one pixel along the vertical direction.
Extraction of the sky-water line by Hough transform:
a straight line is extracted with the Hough transform, which maps the image space into a parameter space; its basic idea is the duality of points and lines. Using the polar equation of a straight line, ρ = x cos θ + y sin θ, the Hough transform represents the points of a straight line in image space as a sinusoid in parameter space. Here the parameter space is discretized into an accumulator array; each point (x, y) in the image is mapped into a series of accumulators of the parameter space, and each corresponding accumulator value is incremented by 1. If the image space contains a straight line, a local maximum appears in one of the corresponding accumulators in the parameter space; by detecting this local maximum, the parameter pair (ρ, θ) of the line is determined, and the line is detected.
The credibility of the sky-water line is an uncertainty-reduction problem in which several pieces of evidence must support the same fact; evidence theory is therefore used to judge the credibility of the extracted sky-water line.
(1) Whether the sky-water line lies in the possible sky-water line area:
when the system is installed on a ship, the sky-water line is bound to lie within a certain sub-area. If the extracted line lies within this sub-area, its credibility M1 is 1; otherwise M1 is 0:
M1 = 1, if y1 ≤ y ≤ y2
M1 = 0, if y < y1 or y > y2
where y1 and y2 are the row coordinates of the middle pixel of the highest and lowest possible sky-water lines, respectively, and y is the row coordinate of the middle pixel of the extracted sky-water line.
(2) The credibility of the sky-water line obtained from the contrast of the sub-areas above and below it:
since the gray level of the area above the sky-water line is generally higher than that of the area below it, the higher the contrast of the upper and lower sub-areas, the higher the credibility of the line. It is calculated by the formula:
M2 = [ Σ(h1&lt;x&lt;h2) Σ(w1&lt;y&lt;w2) f(x, y) − Σ(h1+Δh&lt;x&lt;h2+Δh) Σ(w1&lt;y&lt;w2) f(x, y) ] / [ Σ(h1&lt;x&lt;h2) Σ(w1&lt;y&lt;w2) f(x, y) ]
(3) The credibility M3 of the evidence associating the middle pixel points of the sky-water lines extracted in consecutive frames:
M3 = 1 / |y(t) − y(t−1)|, with M3 = 1 when y(t) = y(t−1)
where y(t) and y(t−1) are the row coordinates of the middle pixel of the sky-water line extracted in the current and previous frames; the larger the difference between them, the lower the credibility of the sky-water line.
(4) The comprehensive credibility M of the sky-water line is obtained with the D-S combination rule: M = M1·M2·M3.
Let Mt be a properly chosen threshold. When M > Mt, the sky-water line extracted from this frame is considered correct; otherwise the above criterion cannot confirm its correctness. In that case the position of the previous frame's sky-water line and the contrast of its upper and lower areas are saved, and identification continues with the sky-water line of the next frame.
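The combination of the three evidence terms into M = M1·M2·M3 can be sketched as follows (names are hypothetical; M2 is assumed to be computed separately from the contrast formula above and passed in):

```python
def sky_line_credibility(y, y1, y2, contrast_M2, y_prev):
    """Combine the three evidence terms of the text into M = M1*M2*M3."""
    M1 = 1.0 if y1 <= y <= y2 else 0.0                  # inside feasible band
    M3 = 1.0 if y == y_prev else 1.0 / abs(y - y_prev)  # inter-frame agreement
    return M1 * contrast_M2 * M3
```

The result is then compared with the threshold Mt to accept or reject the frame's extraction.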
After the sky waterline is correctly detected, an image area in a certain range above and below the sky waterline is used as a region of interest (ROI).
Evaluating the image quality of the ROI (region of interest) of the image, and adopting an infrared target detection algorithm based on a single-frame image when the evaluation result is 1; when the evaluation result is 0, adopting an infrared target detection algorithm based on an image sequence;
in the wavelet analysis method, the two-dimensional wavelet transform performs frequency selection and multi-scale decomposition, suppressing background noise and enhancing the target; the low-frequency and high-frequency parts of the original image are separated, multi-resolution analysis is performed on each low-frequency and high-frequency component, target features are extracted, and target detection is carried out;
in the fractal method, the fractal characteristics of artificial targets such as ships and piers vary sharply with scale compared with the natural background; a multi-scale fractal feature image is extracted from the ROI sub-image after image enhancement by a fuzzy filtering method, and target detection is finally performed on the multi-scale fractal features with a probability relaxation method. The fractal model is a mathematical model suited to describing objects with complex and irregular shapes; its basic principle is to exploit the difference between the natural scene and the artificial target in fractal dimension. Generally, the fractal dimension is obtained as a linear estimate of the logarithm of the measured value against the logarithm of the measuring scale, i.e., the fractal dimension is considered constant over the scale range, in accordance with an ideal fractal model. However, most natural scenes possess fractal features only approximately within a certain scale range, and real images are affected by imaging noise, quantization error and the like, so natural scenes often cannot be described by a standard fractal dimension, and distinguishing the natural background from the artificial target by the standard fractal dimension alone does not achieve the ideal effect. Instead, target detection exploits the fractal feature difference between target and background without accurately computing fractal parameters: the background and the target vary differently with scale, and this difference in multi-scale variation is used for target detection.
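As a simple illustration of estimating a fractal dimension by linear regression of log measure against log scale (box counting on a binary set; the patent's multi-scale fractal feature is richer than this sketch, and the names are hypothetical):

```python
import numpy as np

def box_counting_dimension(binary, scales=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary set by fitting the slope
    of log(box count) against log(1/scale)."""
    counts = []
    h, w = binary.shape
    for s in scales:
        # count boxes of side s containing at least one foreground pixel
        n = sum(binary[y:y+s, x:x+s].any()
                for y in range(0, h, s) for x in range(0, w, s))
        counts.append(n)
    slope = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)[0]
    return slope
```

A filled region gives a dimension near 2 and a thin line near 1; artificial targets deviate from the near-constant slope that an ideal natural fractal background would show across scales.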
In the mathematical morphology method, the ROI sub-image is first median-filtered, and the pixels with the maximum brightness value in the filtered image are taken as the marker image; top-hat transformation is performed on the original image, and morphological reconstruction is carried out with the image after iterative threshold segmentation as the mask image, realizing infrared ship target detection. Morphological reconstruction: the idea is to approximate the mask image by repeatedly dilating the marker image, thereby restoring part or all of the mask image depending on the choice of marker; its characteristic is that the region of interest in the mask image can be extracted through the selection of the marker image.
The reconstruction of g (the mask image) from f (the marker image) is defined by the following iterative procedure:
1. initialize h1 to the marker image f;
2. create a 3×3 structuring element B in which every element is 1;
3. repeat h(k+1) = (hk ⊕ B) ∩ g until h(k+1) = hk.
Note: the marker f must be a subset of g, and the reconstructed image is a subset of the mask image g.
Finally, the target detection results of the different methods are combined by evidential reasoning, synthesizing the evidence with the Dempster combination rule to obtain an infrared ship target detection result of high confidence. Evidence theory adopts a belief function instead of probability as the measure, requires no prior probabilities or conditional probability densities, describes uncertain information by interval estimation rather than point estimation, and shows great flexibility in distinguishing the unknown from the uncertain and in accurately reflecting the gathered evidence.
The image processing and analysis in step b further comprises the steps of:
intelligent multi-maneuvering infrared ship target tracking is realized by applying artificial neural network and fuzzy inference techniques. Using an artificial neural network for multi-maneuvering target tracking gives the system good self-adaptation, self-organized learning, association and fault-tolerance capabilities; through learning and training it acquires strong judgment and recognition capabilities and can independently find solutions to problems, in contrast to a traditional stochastic adaptive system. Its fault tolerance is very strong: target detection, parameter estimation, target feature extraction and identification, system modeling and the like remain possible under uncertain data and environments, particularly under heavy noise and interference. Fuzzy inference imitates human reasoning about real things; in multi-target tracking, situations such as "possibly associated" or "possibly not associated" between targets, and "possibly belonging to a class" or "possibly not belonging to a class", often arise. Fuzzy inference performs threat assessment on the tracked targets and judges the target types, speeds and the like, deducing the threat levels of the targets, so that a collision-avoidance scheme can be accurately provided to the ship operators, giving a favorable guarantee.
The traditional methods are difficult to meet the engineering requirements, and the problems can be simplified by using fuzzy reasoning so as to meet the real-time requirement in target tracking.
The image processing and analysis in step b further comprises the steps of:
self-adaptation and learning capabilities are added to a classical pattern recognition algorithm, and the context between successive images is fused with the knowledge base of artificial intelligence technology to recognize the infrared ship target. Target features are extracted within the segmented region: position features, shape features, size features, radiation features and features extracted by wavelet analysis, which serve as the inputs of an RBF neural network for infrared ship target recognition. The RBF neural network designed by the invention has a three-layer structure: the input layer comprises 8 input parameters, the hidden layer comprises 12 nodes, and the output layer outputs the type of the target.
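A forward-pass sketch of such an RBF network (8 inputs, 12 Gaussian hidden nodes, as stated above; the number of output classes, the weights and the centers below are placeholders, since the patent gives no trained values):

```python
import numpy as np

def rbf_forward(x, centers, widths, W, b):
    """Three-layer RBF forward pass: Gaussian hidden activations
    h_j = exp(-||x - c_j||^2 / (2 s_j^2)), then a linear output layer."""
    d2 = ((centers - x) ** 2).sum(axis=1)   # squared distance to each centre
    h = np.exp(-d2 / (2.0 * widths ** 2))   # 12 hidden activations
    return W @ h + b                        # one score per target type

# placeholder parameters: 8 input features, 12 hidden nodes, 3 assumed classes
rng = np.random.default_rng(1)
centers = rng.normal(size=(12, 8))
widths = np.ones(12)
W = rng.normal(size=(3, 12))
b = np.zeros(3)
scores = rbf_forward(rng.normal(size=8), centers, widths, W, b)
```

In practice the centers are chosen from training data (e.g. by clustering) and W is fitted by least squares; the class with the largest score is the recognized target type.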
The step b also comprises the following steps:
b1, visible light video image target sampling and digital signal processing: the visible light image sensor 122 arranged in front of the screen of the radar 121 photographs the screen to obtain the radar 121 screen image. The captured image is converted into a digital signal by the video acquisition card 123 and sent to the programmable logic device in the visible light video image digital signal processing unit 22 for timing conversion and bus control; the control line signals and the image information are sent respectively to the image memory in the digital signal processing unit for storage. After receiving the information, the image memory returns confirmation information through the programmable logic device in the digital signal processing unit. The processing and analysis program in the program memory of the digital signal processing unit extracts the image from the image memory, processes and analyzes it, and sends the result to the controller of the digital signal processing unit; after receiving the information, the controller sends a confirmation signal to the video acquisition card 123. The resulting distance, azimuth angle, speed and acceleration information between the installation site of the system and the surrounding target ships, bridges, piers and reefs is then sent through the serial communication interface to the information fusion unit 23 in the central processing system 2 for information fusion;
b2, information fusion: the information fusion unit 23 obtains information from the infrared video image digital signal processing unit 21 and the visible light video image digital signal processing unit 22, and returns confirmation information to each of them in the corresponding step b. The unit 23 fuses the azimuth angles, speeds and accelerations between the installation site of the system and the surrounding target ships, bridges, piers and reefs obtained from the infrared video image digital signal processing unit 21 in step b with the distances, azimuth angles, speeds and accelerations obtained from the visible light video image digital signal processing unit 22, obtaining as the final fusion result accurate information on the distance, azimuth angle, speed and acceleration of each target ship, bridge, pier and reef. The targets are marked at the same time, and the marked objects are sent through the serial communication interface to the system decision unit 25.
According to the information input in step a, combined with the distances, azimuth angles, speeds and accelerations between the installation position of the system and the surrounding target ships, bridges, piers and reefs obtained in step b, the fuzzy expert system method synthesizes the early-warning decision elements and their degrees of influence into mutually independent and orthogonal principal components, finally determines the ship collision-avoidance risk degree, and makes the active early-warning decision on that basis. If the system is installed on a ship, the model, size, control performance parameters, load, heading and real-time speed of the ship obtained in step a are additionally combined with the distances, azimuth angles, speeds and accelerations obtained in step b, and the same fuzzy expert system synthesis is applied to determine the collision-avoidance risk degree and make the active early-warning decision.
The active safety early warning system is a complex system consisting of three elements: human, ship and environment. Factors influencing these three elements therefore have an important influence on the active safety early-warning decision. On the basis of the collision-avoidance information acquired by the ship's navigation equipment, the fuzzy expert system method determines the ship collision-avoidance risk degree, and the active early-warning decision is made on that basis.
During the running of the ship, the early-warning decision elements obtained by the sensors are x1, x2, x3, ..., xn. Each element influences the early-warning decision to a different degree, and the elements also influence one another.
Let the degree of influence of the i-th element on the early-warning decision be wi (1 ≤ i ≤ n), and let the n extracted samples be X: x1^t, x2^t, ..., xn^t (1 ≤ t ≤ n). The n characteristic indexes are now synthesized into n mutually orthogonal, independent principal components y1, y2, ..., yn, written in matrix form as
Y=C·X (2)
where
C = | C11 ... C1n |
    | ...  ...  ... |
    | Cn1 ... Cnn |
and Y = [y1 ... yn]^T.
The fuzzy expert system can process uncertain data and propositions, i.e., truth values may be taken anywhere in [0, 1], and fuzzy techniques such as fuzzy sets, fuzzy numbers and fuzzy relations are adopted to express and process the uncertainty and inaccuracy of knowledge. Because there are very many uncertain factors during the running of a ship, adopting a fuzzy expert system for the active safety early-warning decision enhances the practicality of the system.
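The synthesis of the decision elements into mutually orthogonal, independent principal components in equation (2) corresponds to principal component analysis; a sketch under that reading (hypothetical names; rows of C are eigenvectors of the sample covariance of the decision elements):

```python
import numpy as np

def principal_components(X):
    """Rows of C are eigenvectors of the covariance of the decision
    elements (rows of X), so Y = C X has uncorrelated, orthogonal rows."""
    Xc = X - X.mean(axis=1, keepdims=True)       # centre each element
    cov = Xc @ Xc.T / (X.shape[1] - 1)           # sample covariance
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]               # largest variance first
    C = vecs[:, order].T
    return C, C @ Xc                             # transform matrix and Y
```

The leading components then carry most of the variance of the early-warning decision elements, and the risk degree is assessed on these independent components.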
The active safety early warning system consists of three main parts: information collection, data processing, and decision making.
B2, according to the azimuth angles, speeds and accelerations between the intelligent all-weather active safety early-warning system for ship running and the target ships, bridges, piers and reefs ahead obtained by the infrared video image digital signal processing unit 21 in step b, and the distances, azimuth angles, speeds and accelerations obtained by the visible light video image digital signal processing unit 22 in step b1, the high-precision angle measurement of the thermal infrared imager 111 and the high-precision distance measurement of the radar 121 complement each other, and an accurate estimate of the target position is given by information fusion. The fusion of the radar 121 and the thermal infrared imager 111 adopts a centralized processing method at the feature layer: the target centroid in the infrared image is first extracted; the redundant angle measurements of the thermal infrared imager 111 are then compressed by least-squares estimation into pseudo angle measurements aligned in time with the radar 121; the pseudo angle measurements are fused with the azimuth measurements of the radar 121 to obtain a synchronous data fusion estimate; finally the data obtained by fusing the radar 121 and the thermal infrared imager 111 update the target state of the filter. The decision-layer fusion adopts a distributed processing method: the radar 121 and the thermal infrared imager 111 each first establish tracks of the targets, and the two sets of tracks are then associated and fused.
The radar 121, as an active sensor, can at all times measure and provide complete position information of a target, and thus plays an important role in target detection and tracking. However, because the radar 121 radiates high-power electromagnetic waves into the air during operation, it is susceptible to electronic interference, and its angle measurement accuracy is low.
The thermal infrared imager 111 radiates no energy into the air; it detects and locates by receiving the heat energy radiated by the target, so its anti-jamming capability is strong. It also offers high angle-measurement precision and strong target identification capability, but cannot measure distance.
By combining the high-precision distance measurement of the radar 121 with the high-precision angle measurement of the thermal infrared imager 111, an accurate estimate of the target position is obtained through information complementarity and information fusion, improving the tracking and identification of the target.
(1) Sensor measurement model
The thermal infrared imager 111 measures the azimuth angle and the elevation angle of the brightness center of the target, and assuming that the brightness center of the target coincides with the centroid, the measurement model is as follows:
$$\theta_I(k) = \theta(k) + \upsilon_{\theta_I}(k),$$
$$\phi_I(k) = \phi(k) + \upsilon_{\phi_I}(k).$$
In the formulas, $\theta_I(k)$ and $\phi_I(k)$ are the infrared angle measurements, $\theta(k)$ and $\phi(k)$ are the actual angles, and $\upsilon_{\theta_I}(k)$, $\upsilon_{\phi_I}(k)$ are zero-mean white Gaussian angle measurement noises. The target state vector is selected as the position, velocity and acceleration in the inertial frame, i.e.
$$X(k) = [\,x(k)\;\; \dot{x}(k)\;\; \ddot{x}(k)\;\; y(k)\;\; \dot{y}(k)\;\; \ddot{y}(k)\;\; z(k)\;\; \dot{z}(k)\;\; \ddot{z}(k)\,]^{T}.$$

Then there is
$$\begin{bmatrix} \theta_I(k) \\ \phi_I(k) \end{bmatrix} = \begin{bmatrix} \arctan[z(k)/x(k)] \\ \arctan\!\big[y(k)\big/\sqrt{x^2(k)+z^2(k)}\,\big] \end{bmatrix} + \begin{bmatrix} \upsilon_{\theta_I}(k) \\ \upsilon_{\phi_I}(k) \end{bmatrix}.$$
The radar 121 directly measures the distance and the azimuth and elevation angles of the target; its measurement model is
$$r_R(k) = r(k) + \upsilon_{r_R}(k),$$
$$\theta_R(k) = \theta(k) + \upsilon_{\theta_R}(k),$$
$$\phi_R(k) = \phi(k) + \upsilon_{\phi_R}(k).$$
In the formulas, $r_R(k)$, $\theta_R(k)$ and $\phi_R(k)$ are the radar 121 measurements, $r(k)$, $\theta(k)$ and $\phi(k)$ are the actual values, and $\upsilon_{r_R}(k)$, $\upsilon_{\theta_R}(k)$, $\upsilon_{\phi_R}(k)$ are zero-mean white Gaussian measurement noises. With the target state vector selected as above, there is
$$\begin{bmatrix} r_R(k) \\ \theta_R(k) \\ \phi_R(k) \end{bmatrix} = \begin{bmatrix} \sqrt{x^2(k)+y^2(k)+z^2(k)} \\ \arctan[z(k)/x(k)] \\ \arctan\!\big[y(k)\big/\sqrt{x^2(k)+z^2(k)}\,\big] \end{bmatrix} + \begin{bmatrix} \upsilon_{r_R}(k) \\ \upsilon_{\theta_R}(k) \\ \upsilon_{\phi_R}(k) \end{bmatrix}.$$
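The two measurement models can be transcribed directly from the formulas above. This Python fragment (illustrative; the noise levels are made-up defaults, not values from the patent) draws infrared and radar measurements from the 9-element state vector $[x\ \dot x\ \ddot x\ y\ \dot y\ \ddot y\ z\ \dot z\ \ddot z]^T$:

```python
import math
import random

def ir_measurement(state, sigma_theta=0.001):
    """Infrared model: azimuth and elevation of the target centroid plus
    zero-mean white Gaussian angle noise (small: the imager's strength)."""
    x, y, z = state[0], state[3], state[6]   # positions within the state vector
    theta = math.atan2(z, x) + random.gauss(0.0, sigma_theta)
    phi = math.atan2(y, math.hypot(x, z)) + random.gauss(0.0, sigma_theta)
    return theta, phi

def radar_measurement(state, sigma_r=5.0, sigma_ang=0.01):
    """Radar model: range plus the same two angles, with larger angle noise
    than the imager (the radar's weak point)."""
    x, y, z = state[0], state[3], state[6]
    r = math.sqrt(x * x + y * y + z * z) + random.gauss(0.0, sigma_r)
    theta = math.atan2(z, x) + random.gauss(0.0, sigma_ang)
    phi = math.atan2(y, math.hypot(x, z)) + random.gauss(0.0, sigma_ang)
    return r, theta, phi
```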
(2) Structural model and algorithm of radar and infrared image signal fusion and decision system
The feature layer fusion serves the following function: the feature information from infrared imaging target identification and tracking assists the radar's target identification and tracking, which raises the system's target detection probability and lowers its false alarm probability. The decision layer fusion serves the following function: when the target is far away, the servo control system of the thermal infrared imager 111 is guided by the tracking decision information of the radar 121 module so that the target falls within the field of view of the thermal infrared imager 111; once the target is close, the thermal infrared imager 111 can identify and track it through imaging analysis. This compensates for the short operating range of the thermal infrared imager 111 while exploiting the high accuracy of its tracking decision information at close range. When one sensor module loses or degrades its target tracking capability because of interference or the like, its tracking can be corrected from the tracking decision information of the other sensor module, which improves the anti-interference performance of the system and the reliability of the whole target recognition and tracking system; even if one sensor loses its recognition and tracking capability through a software or hardware fault, the fusion center decision controller can still track the target correctly from the recognition and tracking decision signal of the other sensor.
Because the data acquisition rate of the thermal infrared imager 111 is significantly higher than that of the radar 121, the feature layer fusion adopts a centralized processing method. The basic idea of fusing the radar 121 and the thermal infrared imager 111 is as follows: first extract the target centroid in the infrared image; then compress the redundant angle measurements of the thermal infrared imager 111 by least squares estimation to generate a pseudo angle measurement aligned in time with the radar 121 measurement; then fuse the pseudo angle measurement with the azimuth angle measurement of the radar 121 to obtain a synchronous data fusion estimate; finally use the data obtained from fusing the radar 121 and the thermal infrared imager 111 to update the target state of the filter.
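A minimal sketch of the pseudo-angle step: the high-rate imager angles collected inside one radar interval are compressed by a least-squares line fit and evaluated at the radar timestamp. The function name and the assumption that the angle varies linearly over one radar interval are illustrative, not taken from the patent.

```python
def pseudo_angle(ir_times, ir_angles, t_radar):
    """Compress redundant high-rate IR angle samples into one pseudo
    measurement aligned with the radar time: least-squares fit of
    theta(t) ~= a + b*t, evaluated at t_radar.  Needs at least two
    samples at distinct times."""
    n = len(ir_times)
    st = sum(ir_times)
    sa = sum(ir_angles)
    stt = sum(t * t for t in ir_times)
    sta = sum(t * a for t, a in zip(ir_times, ir_angles))
    b = (n * sta - st * sa) / (n * stt - st * st)   # slope
    a = (sa - b * st) / n                           # intercept
    return a + b * t_radar
```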
For the fusion of radar and infrared data at the decision layer, a distributed processing method is adopted: the radar and the infrared imager each establish a track for the target, and the two tracks are then associated and fused.
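Once the two tracks are associated, the fusion step can be illustrated, in the scalar case and neglecting the cross-covariance between the radar and infrared tracks (a simplification made here for brevity), as a variance-weighted convex combination:

```python
def fuse_tracks(x1, p1, x2, p2):
    """Fuse two independent track estimates (state x, variance p):
    each estimate is weighted by the inverse of its variance, and the
    fused variance is smaller than either input."""
    x = (p2 * x1 + p1 * x2) / (p1 + p2)
    p = p1 * p2 / (p1 + p2)
    return x, p
```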

Claims (16)

1. An intelligent all-weather active safety early warning system for ship running, characterized in that: the system is composed of a target sampling unit (1), a system central processing unit (2), a display unit (3), a single-chip microcomputer (4) and a safety early warning unit (5);
the target sampling unit (1) consists of an outdoor intelligent high-speed holder (112) with a decoder and a thermal infrared imager (111) with a video acquisition card (113), the thermal infrared imager (111) is fixed on the outdoor intelligent high-speed holder (112), the outdoor intelligent high-speed holder (112) is connected with the system central processing unit (2) through the decoder, and the thermal infrared imager (111) is connected with the system central processing unit (2) through the video acquisition card (113);
the central processing system (2) comprises a keyboard (26), an infrared video image digital signal processing unit (21) and a system decision processing unit (25);
wherein,
the keyboard (26) is used for inputting information and control instructions to the system decision processing unit (25);
the infrared video image digital signal processing unit (21) is used for receiving the digital image signals of the target sampling unit (1), calculating the position, the azimuth angle, the speed and the acceleration of a target by utilizing a processing and analyzing device in a memory according to the digital image signals, and sending the calculation result to the system decision processing unit (25) through a serial communication interface;
the system decision processing unit (25) is used for carrying out active safety early warning decision analysis and calculation on the information from the digital signal processing unit, in combination with the information input from the keyboard (26), to obtain a final avoidance scheme, and then carrying out active safety early warning simultaneously through the display unit (3) and the safety early warning unit (5);
the display unit (3) is used for displaying images of the surroundings of the system's installation site in which the targets are marked, together with the azimuth angle, speed and acceleration between each target and the installation site, and for displaying the recommended avoidance scheme and the danger level;
the single chip microcomputer (4) is used for receiving a decision result of the system decision processing unit (25) and controlling an alarm mode of the safety early warning unit (5);
and the safety early warning unit (5) is used for receiving the control signal of the singlechip (4) and using sound and light modes with different frequencies to express the prompt information of the danger level and the danger direction.
2. The intelligent all-weather active safety precaution system of claim 1, characterized by: the infrared video image digital signal processing unit (21) is 2 floating point digital signal processors TMS320C6713, and 2 TMS320C6713 are connected with a video acquisition card (113) of the thermal infrared imager (111); the floating-point digital signal processor TMS320C6713 includes an image memory, a programmable logic device, a program memory and a controller, wherein the program memory is provided with a processing and analyzing device.
3. The intelligent all-weather active safety precaution system of claim 1, characterized by: and a large-view-field image registration and splicing device is arranged in a memory in the infrared video image digital signal processing unit (21), the infrared sequence images are automatically spliced into a panoramic view, and the automatically spliced panoramic view is sent to a system decision processing unit (25) for storage.
4. The intelligent all-weather active safety precaution system of claim 1, characterized by: the system decision processing unit (25) is a PC computer.
5. The intelligent all-weather active safety precaution system of claim 1, characterized by: the display unit (3) comprises 2 displays, a display (31) and a display (32), wherein the display (31) displays infrared video images of a plurality of marked targets around; and related text information: the distance, azimuth angle, speed and acceleration between the installation position of the intelligent all-weather active safety early warning system for ship running and a plurality of ships, bridges, piers and reef targets around; a display (32) displays a textual description of the recommended avoidance maneuver: the method comprises the steps of adopting variable speed yielding and/or steering yielding, speed, direction, danger level, danger direction and alarm mode.
6. The intelligent all-weather active safety precaution system of claim 1, characterized by: the safety early warning unit (5) controls an external loudspeaker and/or an external alarm lamp through a singlechip (4) connected with a serial communication interface of the system decision processing unit (25).
7. The intelligent all-weather active safety precaution system of claim 1, characterized by: the target sampling unit (1) is also provided with a radar (121), a visible light image sensor (122) with a video acquisition card (123) is arranged in front of the screen of the radar (121), the visible light image sensor (122) is connected with a visible light video image digital signal processing unit (22) in a system central processing unit (2) through the video acquisition card (123), and the visible light video image digital signal processing unit (22) is 2 floating point digital signal processors TMS320C 6713; the TMS320C6713 comprises an image memory, a programmable logic unit, a program memory and a controller, wherein a processing and analyzing device is arranged in the program memory; the central processing system (2) is also provided with an information fusion unit (23), the information fusion unit (23) is used for carrying out information fusion on various information respectively from the infrared video image digital signal processing unit (21) and the visible light video image digital signal processing unit (22) to obtain the distance, the azimuth angle, the speed and the acceleration of a final target, simultaneously marking the target in the image and sending a calculation result to a system decision processing unit (25) through a serial communication interface; the information fusion unit (23) is a high-speed real-time digital signal processor ADSP 21060.
8. The intelligent all-weather active safety precaution system of claim 1, characterized by: the intelligent all-weather active safety early warning system for ship running is arranged on a bridge, a port, a wharf, a wharfboat, a dangerous river section, a restricted area, a gate or a ship; when the intelligent all-weather active safety early warning system for ship running is arranged on a ship, the display unit (3) also displays images of a plurality of marked targets around the ship; and related text information: the model and the size of the ship, the control performance parameters of the ship, the load, the azimuth of the ship, the real-time ship speed of the ship, the distance between the ship and a plurality of surrounding ships, bridges, piers and reef targets, the azimuth, the speed and the acceleration.
9. The early warning method for realizing the intelligent all-weather active safety early warning system for ship driving according to claim 1 is characterized by comprising the following steps:
a. starting system
Starting the intelligent all-weather active safety early warning system for ship running, and inputting information and control instructions by a keyboard (26); if the intelligent all-weather active safety early warning system for ship running is installed on a ship, the ship model, the ship size, the ship control performance parameters and the load information of the ship are input through a keyboard (26), and the ship direction and the real-time ship speed information in a ship-borne GPS (24) of the ship are simultaneously sent to a system decision processing unit (25) through a serial communication interface;
b. target sampling and digital signal processing
The infrared video image target sampling and digital signal processing unit: the thermal infrared imager (111) is driven by the holder (112) to scan the surroundings at a specified time interval and angle step, so that the thermal infrared imager (111) photographs the surrounding water surface environment; the captured image is converted into a digital signal by the video acquisition card (113) and sent to the programmable logic device in the infrared video image digital signal processing unit (21) for timing conversion and bus control; the control line signal and the image information are sent to the image memory in the digital signal processing unit for storage, and after receiving the information the image memory returns confirmation information to the programmable logic device in the digital signal processing unit; the processing and analyzing device in the program memory of the digital signal processing unit extracts the image from the image memory for processing and analysis and sends the result to the controller of the digital signal processing unit; after the controller receives the information, it sends a confirmation signal to the video acquisition card (113), and the azimuth angle, speed and acceleration information between the installation position of the system and the several surrounding targets, namely ships, bridges, piers and reefs, is finally determined and sent to the system decision processing unit (25) through the serial communication interface;
c. the system decision processing unit (25) obtains the information from a, and in combination with the information from b, the system decision processing unit (25) performs active safety early warning decision analysis and calculation to obtain a final avoidance scheme, and then performs active safety early warning simultaneously through the display (3) and the acousto-optic safety early warning unit (5);
wherein:
the display (3) displays images of a plurality of marked objects around the installation site of the system; and related text information: azimuth angles, speeds and accelerations between the installation position of the system and a plurality of target ships, bridges, piers and reefs around the system; if the system is installed on a ship, the model and the size of the ship, the control performance parameters, the load, the azimuth of the ship, the real-time ship speed of the ship and the text description of a recommended avoidance scheme are also displayed: comprises adopting variable speed yielding and/or steering yielding, speed and direction;
and intelligent collision-avoidance active safety early warning is carried out according to this information; if danger exists, the danger level is judged, prompt information for the danger level and danger direction is expressed through the acousto-optic safety early warning unit (5) using sound and light modes with different frequencies, and the external loudspeaker or alarm lamp is controlled to give an alarm through the single-chip microcomputer (4) connected to the serial communication interface of the system decision processing unit (25).
10. The intelligent all-weather active safety precaution method for ship driving according to claim 9, characterized by: and step b, image registration and splicing are further included, the infrared sequence images are automatically spliced into a panoramic view, and the automatically spliced panoramic view is sent to a system decision processing unit (25) to be stored.
11. The intelligent all-weather active safety precaution method for ship driving according to claim 9, characterized in that the image processing and analysis in step b is performed according to the following steps:
firstly, performing infrared image preprocessing, including image denoising, image enhancement and sharpening, image correction and motion background correction;
directly segmenting an image region, then extracting the characteristics of a bridge, a pier and a reef, and effectively tracking and identifying;
two different algorithm combinations are used to extract the water-sky line (horizon); the first candidate water-sky line is extracted by, in order: image iterative threshold segmentation, Roberts gradient operator edge detection, thinning, and Hough transform extraction; the second candidate water-sky line is extracted by, in order: Roberts gradient operator edge detection, binarization, thinning, and Hough transform extraction; of the two candidates, the one closer to the lower end of the image is taken as the final water-sky line, its credibility is judged, and the image region within a certain range above and below it is taken as the ROI (region of interest);
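A rough sketch of the second extraction chain (Roberts edge detection, binarization, Hough transform) is given below; the thinning step is omitted and the thresholds are illustrative, so this is not the patent's implementation.

```python
import numpy as np

def roberts_edges(img, thresh):
    """Roberts gradient magnitude (|diagonal difference| sum) followed by
    binarization against a fixed threshold."""
    gx = img[:-1, :-1].astype(float) - img[1:, 1:]
    gy = img[:-1, 1:].astype(float) - img[1:, :-1]
    return (np.abs(gx) + np.abs(gy)) > thresh

def hough_best_line(edges, n_theta=180):
    """Return (rho, theta) of the strongest line among edge pixels, using
    the standard rho = x*cos(theta) + y*sin(theta) accumulator."""
    ys, xs = np.nonzero(edges)
    h, w = edges.shape
    diag = int(np.hypot(h, w)) + 1
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1     # one vote per (rho, theta) cell
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return r - diag, thetas[t]
```

A near-horizontal water-sky line shows up as a peak near theta = π/2, with rho giving its row in the image.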
evaluating the image quality of the ROI (region of interest) of the image, and adopting an infrared target detection algorithm based on a single-frame image when the evaluation result is 1; when the evaluation result is 0, adopting an infrared target detection algorithm based on an image sequence;
frequency selection and multi-scale decomposition are realized with the two-dimensional wavelet transform of the wavelet analysis method to suppress background noise and enhance the target: the low-frequency and high-frequency parts of the original image are separated, multi-resolution analysis is performed on each low-frequency and high-frequency component, target features are extracted, and target detection is performed; in the fractal method, exploiting the fact that the fractal features of artificial targets such as ships and piers vary far more sharply with scale than the natural background, a multi-scale fractal feature image is extracted from the ROI sub-image after image enhancement by fuzzy filtering, and target detection is then performed on the multi-scale fractal features with a probability relaxation method; in the mathematical morphology method, the ROI sub-image is first median filtered, and the pixel with the maximum brightness value in the filtered image is taken as the marker image; top-hat transformation is performed on the original image, and morphological reconstruction with the iteratively threshold-segmented image as the mask image realizes infrared ship target detection; finally, the target detection results from the different methods are combined by evidence reasoning, synthesizing the evidence with the Dempster evidence combination rule to obtain an infrared ship target detection result identified with high confidence.
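The final evidence-combination step corresponds to Dempster's rule of combination. A generic sketch over a frame of discernment such as {ship, clutter} follows (the hypothesis set is assumed for illustration; the patent does not enumerate it):

```python
def dempster_combine(m1, m2):
    """Dempster's rule: m1 and m2 map frozenset hypotheses to basic
    probability masses.  Products with empty intersection go to the
    conflict mass K, and the rest are renormalized by 1 - K."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}
```

Combining the masses produced by the wavelet, fractal and morphology detectors in this way concentrates belief on the hypothesis they agree on, which is the "high confidence" detection the step describes.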
12. The intelligent all-weather active safety precaution method for ship driving according to claim 9, characterized by: the image processing and analysis in step b further comprises the steps of:
intelligent multi-maneuvering infrared ship target tracking is realized by applying artificial neural network and fuzzy reasoning technology; using an artificial neural network for multi-maneuvering target tracking gives the system good adaptivity, self-organizing learning, association and fault tolerance, and through learning and training it gains strong judgment and recognition capability and can find solutions to problems independently; fuzzy reasoning is applied to assess the threat posed by the tracked targets, judging each target's type and speed to infer its threat level.
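The fuzzy threat assessment might look roughly like this toy rule base; the membership functions, thresholds and linguistic terms are invented for illustration, since the patent does not specify them:

```python
def threat_level(distance_m, closing_speed_ms):
    """Toy fuzzy inference: grade membership in 'near' and 'fast', fire
    rules with min (AND), and pick the strongest output term."""
    near = max(0.0, min(1.0, (1000.0 - distance_m) / 1000.0))
    fast = max(0.0, min(1.0, closing_speed_ms / 10.0))
    high = min(near, fast)                                   # near AND fast
    medium = max(min(near, 1 - fast), min(1 - near, fast))   # one of the two
    low = min(1 - near, 1 - fast)                            # neither
    return max([("high", high), ("medium", medium), ("low", low)],
               key=lambda p: p[1])[0]
```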
13. The intelligent all-weather active safety precaution method for ship driving according to claim 9, characterized by: the image processing and analysis in step b further comprises the steps of:
adaptive and learning capability is added to the classical pattern recognition algorithm, and the knowledge base of artificial intelligence technology, together with the frame-to-frame context of the images, is used to recognize the infrared ship target; target features are extracted region by region, namely position features, shape features, size features, radiation features, and features extracted by wavelet analysis, and these features are fed to the input of an RBF neural network to recognize the infrared ship target.
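The RBF classification stage amounts to a Gaussian hidden layer over the extracted feature vector followed by a linear readout. A minimal forward pass, with centers, widths and output weights assumed already trained (nothing here reflects the patent's actual network):

```python
import math

def rbf_forward(x, centers, widths, weights):
    """Minimal RBF network forward pass: Gaussian hidden units over the
    feature vector x, then a linear output layer (one weight row per
    output class)."""
    hidden = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                       / (2.0 * s * s))
              for c, s in zip(centers, widths)]
    return [sum(w[j] * hidden[j] for j in range(len(hidden))) for w in weights]
```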
14. The intelligent all-weather active safety precaution method for ship driving according to claim 9, characterized by: the step b also comprises the following steps:
b1, visible light video image target sampling and digital signal processing unit: the visible light image sensor (122) arranged in front of the screen of the radar (121) is used for shooting the screen of the radar (121) to obtain the screen image of the radar (121), the shot image is converted into a digital signal by a video acquisition card (123) and is sent to a programmable logic device in a visible light video image digital signal processing unit (22) for time sequence conversion and bus control, a control line signal and image information are respectively sent to an image memory in the digital signal processing unit for storage, the image memory obtains information and then sends the information to the programmable logic device in the digital signal processing unit for confirmation information, a processing and analyzing device in a program memory in the digital signal processing unit is used for extracting the image from the image memory for processing and analyzing, the processed and analyzed result is sent to a controller of the digital signal processing unit, and after the controller obtains the information, a confirmation signal is sent to a video acquisition card (123), and the final installation position of the system and a plurality of targets around the system are determined: distance, azimuth angle, speed and acceleration information among ships, bridges, piers and reefs are sent to an information fusion unit (23) in the central processing system (2) through a serial communication interface for information fusion;
b2, information fusion: the information fusion unit (23) receives the information from the infrared video image digital signal processing unit (21) and the visible light video image digital signal processing unit (22), and returns confirmation information to each of them in the corresponding step b; the information fusion unit (23) fuses the azimuth angle, speed and acceleration between the installation position of the system and the several surrounding target ships, bridges, piers and reefs obtained by the infrared video image digital signal processing unit (21) in step b with the distance, azimuth angle, speed and acceleration of the same targets obtained by the visible light video image digital signal processing unit (22), obtaining as the final fusion result accurate information on the distance, azimuth angle, speed and acceleration between the installation position of the system and the several surrounding target ships, bridges, piers and reefs; it marks the targets in the image and sends the result to the system decision processing unit (25) through the serial communication interface.
15. The intelligent all-weather active safety precaution method for ship driving according to claim 9, characterized by: according to the information input in the step a, combining the distances, azimuth angles, speeds and accelerations between the installation position of the system and a plurality of target ships, bridges, piers and reefs around the system obtained in the step b; adopting a fuzzy expert system method to synthesize the early warning decision elements and the influence degrees thereof into mutually independent and orthogonal principal components, finally determining the ship collision avoidance risk degree, and carrying out active early warning decision on the basis of the mutually independent and orthogonal principal components; if the system is installed on a ship, according to the model number, the size, the control performance parameters, the load, the azimuth of the ship and the real-time ship speed of the ship obtained in the step a, combining the distances, the azimuth angles, the speeds and the accelerations between the installation position of the system and a plurality of target ships, bridges, piers and reefs around the installation position of the system obtained in the step b; and (3) integrating various early warning decision elements and the influence degrees thereof into mutually independent and orthogonal main components by adopting a fuzzy expert system method, finally determining the ship collision avoidance risk degree, and carrying out active early warning decision on the basis of the main components.
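A deliberately simplified sketch of synthesizing the early-warning decision elements into a single risk degree and mapping it to a danger grade; the element set, weights and thresholds are illustrative and stand in for the patent's fuzzy expert system and principal-component synthesis, which are not specified numerically:

```python
def collision_risk(factors, weights):
    """Aggregate normalized decision elements (each in [0, 1]) into one
    risk degree by weighted averaging.  Keys of `factors` and `weights`
    must match."""
    total = sum(weights.values())
    return sum(weights[k] * factors[k] for k in factors) / total

def danger_grade(risk):
    """Map the risk degree to the discrete danger levels that the safety
    early warning unit signals with different sound/light frequencies."""
    return "high" if risk > 0.7 else "medium" if risk > 0.4 else "low"
```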
16. The intelligent all-weather active safety precaution method for ship driving according to claim 14, characterized by: b2, according to the azimuth angle, speed and acceleration between the intelligent all-weather active safety early warning system for ship running and the several target ships, bridges, piers and reefs ahead obtained by the infrared video image digital signal processing unit (21) in step b, and the distance, azimuth angle, speed and acceleration of the same targets obtained by the visible light video image digital signal processing unit (22) in step b1, the high-precision angle measurement of the thermal infrared imager (111) and the high-precision distance measurement of the radar (121) are used, information complementation is exploited, and an accurate estimate of the target position is given through information fusion; the fusion of the radar (121) and the thermal infrared imager (111) at the feature layer adopts a centralized processing method: the target centroid in the infrared image is first extracted, the redundant angle measurements of the thermal infrared imager (111) are then compressed by least squares estimation to generate a pseudo angle measurement aligned in time with the radar (121), the pseudo angle measurement is fused with the azimuth angle measurement of the radar (121) to obtain a synchronous data fusion estimate, and finally the data obtained by fusing the radar (121) and the thermal infrared imager (111) are used to update the target state of the filter; the decision layer fusion adopts a distributed processing method: the radar (121) and the thermal infrared imager (111) each establish a track for the target, and the radar (121) and thermal infrared imager (111) tracks are then associated and fused.
CN2008100692312A 2008-01-10 2008-01-10 Intelligent all-weather actively safety early warning system and early warning method thereof for ship running Expired - Fee Related CN101214851B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008100692312A CN101214851B (en) 2008-01-10 2008-01-10 Intelligent all-weather actively safety early warning system and early warning method thereof for ship running

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008100692312A CN101214851B (en) 2008-01-10 2008-01-10 Intelligent all-weather actively safety early warning system and early warning method thereof for ship running

Publications (2)

Publication Number Publication Date
CN101214851A true CN101214851A (en) 2008-07-09
CN101214851B CN101214851B (en) 2010-12-01

Family

ID=39621379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008100692312A Expired - Fee Related CN101214851B (en) 2008-01-10 2008-01-10 Intelligent all-weather actively safety early warning system and early warning method thereof for ship running

Country Status (1)

Country Link
CN (1) CN101214851B (en)

Cited By (109)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102414715A (en) * 2009-04-23 2012-04-11 丰田自动车株式会社 Object detection device
CN102414715B (en) * 2009-04-23 2014-03-12 丰田自动车株式会社 Object detection device
CN102844722A (en) * 2010-04-14 2012-12-26 胡斯华纳有限公司 Robotic garden tool following wires at distance using multiple signals
CN101834989B (en) * 2010-05-25 2012-05-30 广州科易光电技术有限公司 Helicopter electric power inspection real-time data acquisition and storage system
CN101834989A (en) * 2010-05-25 2010-09-15 广州科易光电技术有限公司 Real-time data acquisition and storage system of helicopter in electric inspection process
CN101986170A (en) * 2010-10-25 2011-03-16 安徽中超信息系统有限公司 Bridge collision prevention radar alarming management system
US11029686B2 (en) 2010-11-19 2021-06-08 Maid Ip Holdings P/L Automatic location placement system
US11556130B2 (en) 2010-11-19 2023-01-17 Maid Ip Holdings Pty/Ltd Automatic location placement system
US11853064B2 (en) 2010-11-19 2023-12-26 Maid Ip Holdings Pty/Ltd Automatic location placement system
US11768492B2 (en) 2010-11-19 2023-09-26 MAI IP Holdings Pty/Ltd Automatic location placement system
US11774971B2 (en) 2010-11-19 2023-10-03 Maid Ip Holdings P/L Automatic location placement system
CN102393375A (en) * 2011-08-24 2012-03-28 北京广微积电科技有限公司 Passive gas imaging system
CN102510482A (en) * 2011-11-29 2012-06-20 蔡棽 Image splicing reconstruction and overall monitoring method for improving visibility and visual distance
CN102708706A (en) * 2011-12-07 2012-10-03 上海市城市建设设计研究院 Pre-warning method for preventing object from impacting bridge
CN102522003A (en) * 2011-12-14 2012-06-27 天津七一二通信广播有限公司 ATS integrated machine having data recording and playback functions
CN102426454A (en) * 2011-12-15 2012-04-25 上海市城市建设设计研究院 Ship self-correcting device for preventing ship from impacting bridge
CN103176185A (en) * 2011-12-26 2013-06-26 上海汽车集团股份有限公司 Method and system for detecting road barrier
CN103176185B (en) * 2011-12-26 2015-01-21 上海汽车集团股份有限公司 Method and system for detecting road barrier
CN102980586A (en) * 2012-11-16 2013-03-20 北京小米科技有限责任公司 Navigation terminal and navigation method using the same
CN103139482A (en) * 2013-03-08 2013-06-05 上海海事大学 Marine peril search and rescue machine vision system
CN103139482B (en) * 2013-03-08 2015-08-12 上海海事大学 Vision Builder for Automated Inspection is searched and rescued in the perils of the sea
CN104215963A (en) * 2013-05-31 2014-12-17 上海仪电电子股份有限公司 Marine navigation radar enhancing infrared and visible light
CN103440786A (en) * 2013-07-26 2013-12-11 浙江海洋学院 Ship collision avoidance early-warning device and intelligent early-warning method thereof
CN103440786B (en) * 2013-07-26 2016-01-20 浙江海洋学院 A kind of intelligent early-warning method of ship collision prevention prior-warning device
CN104092980A (en) * 2014-06-30 2014-10-08 华南理工大学 A low-cost active near-infrared night vision system and its working method
CN107209993B (en) * 2014-07-03 2020-08-04 通用汽车环球科技运作有限责任公司 Vehicle cognitive radar method and system
CN107209993A (en) * 2014-07-03 2017-09-26 通用汽车环球科技运作有限责任公司 Vehicle cognition radar method and system
CN104103198A (en) * 2014-07-15 2014-10-15 无锡北斗星通信息科技有限公司 Ship periphery target detection alarm system
CN104648627A (en) * 2014-07-15 2015-05-27 李�荣 Early warning method for ship
CN104700660A (en) * 2014-07-15 2015-06-10 王晓东 Peripheral target detection alarm system for ship
CN104408974A (en) * 2014-07-15 2015-03-11 王晓东 Ship peripheral target detection alarm system
CN104637347B (en) * 2014-07-15 2016-09-28 江苏韩通赢吉重工有限公司 Boats and ships peripheral object detection warning system
CN104494794A (en) * 2014-07-15 2015-04-08 李�荣 Ship prewarning method
CN104700660B (en) * 2014-07-15 2016-09-07 南通亿硕新材料科技有限公司 Boats and ships peripheral object detection warning system
CN104648627B (en) * 2014-07-15 2017-01-25 罗普特(厦门)科技集团有限公司 Early warning method for ship
CN104648628B (en) * 2014-07-15 2018-11-30 泰兴市东城水处理工程有限公司 A kind of ship method for early warning
CN104648628A (en) * 2014-07-15 2015-05-27 李�荣 Early warning method for ship
CN104071311A (en) * 2014-07-15 2014-10-01 无锡北斗星通信息科技有限公司 Ship early warning method
CN104637347A (en) * 2014-07-15 2015-05-20 王晓东 Detection and alarm system for peripheral target of ship
CN104596902A (en) * 2014-07-28 2015-05-06 白薇 Ship gas control method
CN104200466B (en) * 2014-08-20 2017-05-31 深圳市中控生物识别技术有限公司 A kind of method for early warning and video camera
CN104200466A (en) * 2014-08-20 2014-12-10 深圳市中控生物识别技术有限公司 Early warning method and camera
CN104359478A (en) * 2014-11-27 2015-02-18 哈尔滨金都太阳能科技有限公司 Electronic track plotter
CN104680037A (en) * 2015-03-27 2015-06-03 华北水利水电大学 Method for monitoring and evaluating waterway transport security
CN105141887A (en) * 2015-07-06 2015-12-09 国家电网公司 Submarine cable area video alarming method based on thermal imaging
CN105551013A (en) * 2015-11-03 2016-05-04 西安电子科技大学 SAR image sequence registering method based on movement platform parameters
CN105551013B (en) * 2015-11-03 2018-09-25 西安电子科技大学 SAR image sequence method for registering based on motion platform parameter
CN105447956A (en) * 2015-11-06 2016-03-30 东方通信股份有限公司 Spliced banknote detection method
CN105744229B (en) * 2016-02-25 2019-01-15 江苏科技大学 The automatic mooring system of unmanned boat and its working method for looking around fusion based on infrared panorama
CN105744229A (en) * 2016-02-25 2016-07-06 江苏科技大学 Unmanned ship automatic anchoring system and working method thereof based on integration of infrared and panoramic technologies
CN113928526A (en) * 2016-03-29 2022-01-14 B·泰尔斯 Automatic positioning and placement system
CN108698681B (en) * 2016-03-29 2022-08-09 梅德Ip控股有限公司 Automatic positioning and placing system
CN108698681A (en) * 2016-03-29 2018-10-23 B·泰尔斯 Automatic Positioning and Placement System
CN105923135A (en) * 2016-04-18 2016-09-07 太仓弘杉环保科技有限公司 Intelligent rudder with self-correcting function and operating method thereof
CN105923135B (en) * 2016-04-18 2018-10-19 太仓弘杉环保科技有限公司 A kind of intelligent rudder for ship and its working method with self-checking function
CN105872470A (en) * 2016-04-25 2016-08-17 科盾科技股份有限公司 Shipborne safe navigation photoelectric auxiliary system
CN105812747A (en) * 2016-04-25 2016-07-27 科盾科技股份有限公司 Shipborne safety navigation electro-optical aided system
CN105761491A (en) * 2016-04-26 2016-07-13 安徽大学 Detection and early warning system for traffic accident
CN105807266A (en) * 2016-05-19 2016-07-27 中国人民解放军军械工程学院 Compression method for early-warning radar track data transmission
CN106101590A (en) * 2016-06-23 2016-11-09 上海无线电设备研究所 The detection of radar video complex data and processing system and detection and processing method
CN106101590B (en) * 2016-06-23 2019-07-19 上海无线电设备研究所 The detection of radar video complex data and processing system and detection and processing method
CN107613244A (en) * 2016-07-08 2018-01-19 杭州海康威视数字技术股份有限公司 A kind of navigation channel monitoring objective acquisition methods and device
WO2018006659A1 (en) * 2016-07-08 2018-01-11 杭州海康威视数字技术股份有限公司 Method and apparatus for acquiring channel monitoring target
KR101781759B1 (en) 2016-11-15 2017-09-25 주식회사 리영에스엔디 Ship slope detection system using detection radar
CN106971630A (en) * 2017-03-27 2017-07-21 中公智联(北京)科技有限公司 Navigation bridge pier anticollision monitoring early-warning system
CN107659614A (en) * 2017-08-28 2018-02-02 安徽四创电子股份有限公司 A kind of more base station type waters surveillance control systems and its monitoring control method
CN108470079A (en) * 2017-10-26 2018-08-31 北京特种工程设计研究院 Space launching site relates to core operation radiation safety assessment emulation mode
CN108470079B (en) * 2017-10-26 2023-04-07 北京特种工程设计研究院 Simulation method for radiation safety evaluation of nuclear operation of space launching field
CN108227606A (en) * 2018-01-29 2018-06-29 李颖 A ship security intelligent management system based on multi-source perception
CN108288252A (en) * 2018-02-13 2018-07-17 北京旷视科技有限公司 Image batch processing method, device and electronic equipment
CN108288252B (en) * 2018-02-13 2022-03-25 北京旷视科技有限公司 Image batch processing method and device and electronic equipment
CN108573482A (en) * 2018-03-22 2018-09-25 苏海英 Warn rifle trigger-type computer operation platform
CN109190667A (en) * 2018-07-31 2019-01-11 中国电子科技集团公司第二十九研究所 A kind of Object Threat Evaluation method, model and model building method based on electronic reconnaissance signal
CN109444897A (en) * 2018-09-13 2019-03-08 中国船舶重工集团公司第七〇五研究所 A kind of more gusts of Data Associations based on multiple features
CN109308702A (en) * 2018-09-14 2019-02-05 南京理工技术转移中心有限公司 A kind of real-time recognition positioning method of target
US20210357655A1 (en) * 2018-10-04 2021-11-18 Seadronix Corp. Ship and harbor monitoring device and method
US12057018B2 (en) * 2018-10-04 2024-08-06 Seadronix Corp. Device and method for monitoring vessel and harbor
EP3862997A4 (en) * 2018-10-04 2022-08-10 Seadronix Corp. DEVICE AND METHOD FOR CONTROLLING A SHIP AND PORT
CN109191916A (en) * 2018-10-11 2019-01-11 苏州大学 A kind of ship collision early warning system based on image
CN109523834A (en) * 2018-12-24 2019-03-26 云南北方驰宏光电有限公司 Safety of ship DAS (Driver Assistant System)
CN110033431A (en) * 2019-02-26 2019-07-19 北方工业大学 Non-contact detection device and detection method for detecting corrosion area on surface of steel bridge
CN110033431B (en) * 2019-02-26 2021-04-27 北方工业大学 Non-contact detection device and detection method for detecting corrosion area on surface of steel bridge
CN109831655A (en) * 2019-03-21 2019-05-31 苏州大学 Ship environment perception and early warning system based on multi-cam data fusion
CN110110964A (en) * 2019-04-04 2019-08-09 深圳市云恩科技有限公司 A kind of ship and ferry supervisory systems based on deep learning
CN110703746A (en) * 2019-09-17 2020-01-17 钟子骅 Inland river boats and ships autopilot
CN110789662A (en) * 2019-11-07 2020-02-14 陈丽丽 Yacht
CN111025284A (en) * 2019-11-19 2020-04-17 宁波盛域海洋电子科技有限公司 Collision prevention system combining marine radar system and infrared photoelectric scanning
CN111309201A (en) * 2020-01-19 2020-06-19 青岛海狮网络科技有限公司 Multi-window display method for autonomous collision avoidance of autonomously driven ship
CN111174765A (en) * 2020-02-24 2020-05-19 北京航天飞行控制中心 Planet vehicle target detection control method and device based on visual guidance
CN111645821A (en) * 2020-06-11 2020-09-11 上海船舶研究设计院(中国船舶工业集团公司第六0四研究院) Ship safety control system, control method and ship
CN111722305A (en) * 2020-07-03 2020-09-29 河海大学 Early warning method and early warning device for ensuring marine construction safety
CN115917262A (en) * 2020-07-15 2023-04-04 舍弗勒技术股份两合公司 Method and detection system for detecting angular position
CN111791997A (en) * 2020-07-15 2020-10-20 广东海洋大学 A ship-oriented intelligent marine ship distress warning system
CN111791997B (en) * 2020-07-15 2024-05-03 广东海洋大学 Intelligent marine ship distress early warning system for ship
CN112100917B (en) * 2020-09-14 2023-12-22 中国船级社 Expert countermeasure system-based intelligent ship collision avoidance simulation test system and method
CN112100917A (en) * 2020-09-14 2020-12-18 中国船级社 Intelligent ship collision avoidance simulation test system and method based on expert confrontation system
CN114619443A (en) * 2020-12-14 2022-06-14 苏州大学 Robot working space setting method and robot active safety system
CN112634658A (en) * 2020-12-18 2021-04-09 武汉欣海远航科技研发有限公司 Acousto-optic early warning method and system for safety supervision of offshore wind farm
CN112835024A (en) * 2021-01-07 2021-05-25 南京晓庄学院 An Underwater Object Tracking Method Using Doppler Principle
CN112918632A (en) * 2021-03-09 2021-06-08 武汉理工大学 Ship design method based on intelligent reasoning
CN113291427A (en) * 2021-06-10 2021-08-24 丁鸿雨 Ocean engineering ship is with preventing striking reef system based on thing networking
CN114125228A (en) * 2021-11-23 2022-03-01 智慧航海(青岛)科技有限公司 Wide dynamic image processing method of marine 360-degree panoramic image system
CN114954827A (en) * 2022-07-01 2022-08-30 中国舰船研究设计中心 A high-reliability cab top comprehensive information display device
CN115410419A (en) * 2022-08-23 2022-11-29 交通运输部天津水运工程科学研究所 Ship mooring early warning method and system, electronic device and storage medium
CN115410419B (en) * 2022-08-23 2024-02-02 交通运输部天津水运工程科学研究所 Ship mooring early warning method, system, electronic equipment and storage medium
CN115222758B (en) * 2022-09-21 2023-01-10 北京九章星图科技有限公司 Real-time detection method for ship moving target of low-resolution wide-area sequence remote sensing image
CN115222758A (en) * 2022-09-21 2022-10-21 北京九章星图科技有限公司 Low-resolution wide-area sequence remote sensing image ship moving target real-time detection algorithm
CN116088542B (en) * 2023-04-12 2023-08-18 中国水产科学研究院南海水产研究所 Fishing boat operation safety early warning method and system based on remote sensing technology
CN116088542A (en) * 2023-04-12 2023-05-09 中国水产科学研究院南海水产研究所 Fishing boat operation safety early warning method and system based on remote sensing technology

Also Published As

Publication number Publication date
CN101214851B (en) 2010-12-01

Similar Documents

Publication Publication Date Title
CN101214851B (en) Intelligent all-weather actively safety early warning system and early warning method thereof for ship running
Chen et al. Ship detection from coastal surveillance videos via an ensemble Canny-Gaussian-morphology framework
US12198418B2 (en) System and method for measuring the distance to an object in water
CN102081801B (en) Multi-feature adaptive fused ship tracking and track detecting method
CN112560671B (en) Ship detection method based on rotating convolutional neural network
CN110443201B (en) Target identification method based on multi-source image joint shape analysis and multi-attribute fusion
CN109409283A (en) A kind of method, system and the storage medium of surface vessel tracking and monitoring
CN113050121A (en) Ship navigation system and ship navigation method
CN111163290B (en) A method for detecting and tracking ships sailing at night
CN111160293A (en) Small target ship detection method and system based on characteristic pyramid network
Zhang et al. A warning framework for avoiding vessel‐bridge and vessel‐vessel collisions based on generative adversarial and dual‐task networks
CN113933828B (en) An adaptive multi-scale target detection method and system for unmanned boat environment
Lalasa et al. Maritime Security-Illegal Fishing Detection Using Deep Learning
Shi et al. Obstacle type recognition in visual images via dilated convolutional neural network for unmanned surface vehicles
CN113484864B (en) Unmanned ship-oriented navigation radar and photoelectric pod collaborative environment sensing method
CN111105419B (en) Vehicle and ship detection method and device based on polarized SAR image
Xu et al. Hydrographic data inspection and disaster monitoring using shipborne radar small range images with electronic navigation chart
CN118570475A (en) Sea level segmentation method based on deep learning
Amabdiyil et al. Marine vessel detection comparing GPRS and satellite images for security applications
Li et al. TKP-net: A three keypoint detection network for ships using SAR imagery
Li et al. A sea–sky–line detection method for long wave infrared image based on improved Swin Transformer
CN116189001A (en) A Ship Detection Method Based on Scattering Characteristics Sensing in Full Polarization SAR
Zhu et al. Saliency detection for underwater moving object with sonar based on motion estimation and multi-trajectory analysis
Wang et al. Radar target tracking coordinated control with ptz cameras for monitoring nearshore buoys
van den Broek et al. Discriminating small extended targets at sea from clutter and other classes of boats in infrared and visual light imagery

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20101201

Termination date: 20160110