CN113505699A - Ship detection method based on RetinaNet algorithm - Google Patents
Ship detection method based on RetinaNet algorithm
- Publication number
- CN113505699A CN113505699A CN202110781771.9A CN202110781771A CN113505699A CN 113505699 A CN113505699 A CN 113505699A CN 202110781771 A CN202110781771 A CN 202110781771A CN 113505699 A CN113505699 A CN 113505699A
- Authority
- CN
- China
- Prior art keywords
- ship
- image
- ship target
- model
- method based
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Biophysics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Probability & Statistics with Applications (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention belongs to the technical field of ship detection and specifically discloses a ship detection method based on the RetinaNet algorithm, which comprises the following steps: S1: establishing a ship target detection model based on the RetinaNet algorithm; S2: acquiring an image to be detected of a ship target; S3: inputting the image to be detected of the ship target into the ship target detection model to obtain a ship target detection result image. The invention solves the problems of difficult ship target extraction, weak generalization capability of detection models and limited detection precision in the prior art.
Description
Technical Field
The invention belongs to the technical field of ship detection, and particularly relates to a ship detection method based on a RetinaNet algorithm.
Background
Marine transportation is the dominant mode of transport in international logistics, accounting for over 2/3 of total global freight volume. However, with the rapid growth of shipping traffic, maritime violations occur from time to time: ship accidents, piracy, illegal fishing, drug trafficking, illegal cargo transportation and other environmentally damaging events seriously disrupt the order of marine transportation and affect China's economy and security, forcing many organizations to monitor the sea more closely. Effective maritime detection and identification therefore has important theoretical significance and application value.
The problems of the prior art are as follows:
1) Ship detection differs from common problems such as face detection and vehicle detection: the background of ship images is complex, with various kinds of interference including illumination, cloud and fog, water-surface fluctuation and small ship targets, which makes ship target extraction difficult, lengthens processing time, and can even cause a large number of missed and false detections. In addition, data sets are scarce and samples are unbalanced, so the generalization capability of many detection models is weak, making current ship detection work very challenging.
2) In the prior art, target detection is mostly performed with manually extracted features, obtained by means such as image processing and statistical analysis; a suitable model is then constructed from these features, or the target is detected and identified through model integration. High-level semantic information is therefore difficult to obtain, and detection precision is limited.
Disclosure of Invention
The present invention aims to solve at least one of the above technical problems to a certain extent.
Therefore, the invention aims to provide a ship detection method based on a RetinaNet algorithm, which is used for solving the problems of difficulty in extracting a ship target, poor generalization capability of a detection model and limited detection precision in the prior art.
The technical scheme adopted by the invention is as follows:
a ship detection method based on RetinaNet algorithm comprises the following steps:
s1: establishing a ship target detection model based on a RetinaNet algorithm;
s2: acquiring an image to be detected of a ship target;
s3: and inputting the image to be detected of the ship target into a ship target detection model to obtain a ship target detection result image.
Further, the specific method of step S1 is:
s1-1: acquiring an initial ship image dataset based on the satellite ship image dataset, and preprocessing the initial ship image dataset to obtain a preprocessed ship image dataset;
s1-2: obtaining a preselection frame of each sample in the preprocessed ship image data set to obtain a final ship image data set with the preselection frame;
s1-3: establishing a RetinaShip model based on a RetinaNet algorithm;
s1-4: and inputting the final ship image data set into a RetinaShip model for training to obtain a ship target detection model.
Further, in step S1-1, the preprocessing includes image format processing and data enhancement processing performed in sequence.
Further, the data enhancement processing includes geometric transformation processing, optical transformation processing, noise addition processing, and normalization processing performed on the image.
Further, in step S1-2, obtaining a pre-selected frame of each sample in the pre-processed ship image dataset by using a K-Means clustering method;
the types of samples include simple positive samples, difficult positive samples, simple negative samples, and difficult negative samples.
Further, in step S1-3, the RetinaShip model includes a residual network ResNet, a feature pyramid network FPN, an SSH (Single Stage Headless) module group, and a classification regression sub-network group, which are connected in sequence.
Furthermore, the SSH module group comprises a plurality of SSH modules which are arranged in parallel, the classification regression sub-network group comprises a plurality of classification regression sub-networks which are arranged in parallel, the plurality of SSH modules are connected with the classification regression sub-networks in a one-to-one correspondence mode, and each classification regression sub-network comprises a boundary frame regression sub-network and a target classification sub-network which are arranged in parallel.
Further, the specific method of step S1-4 is:
s1-4-1: performing balance processing on all samples in the final ship image data set, namely obtaining the intersection-over-union (IoU) value of every negative sample in the final ship image data set, and sorting and screening all negative samples according to their IoU values to obtain screened negative samples;
s1-4-2: inputting the screened negative sample and other samples into a RetinaShip model for training to obtain an initial ship target detection model;
s1-4-3: and optimizing the initial ship target detection model by using the focal loss function to obtain an optimal ship target detection model.
Further, the focal loss function is formulated as:
l_cls = -α_t (1 - p_t)^γ log(p_t)
where l_cls is the focal loss; p_t is the probability that the sample is positive; α_t is the weight for different samples; γ is the focusing parameter.
Further, the specific method of step S3 is:
s3-1: obtaining a preselection frame of an image to be detected of a ship target;
s3-2: inputting the ship target to-be-detected image with the pre-selection frame into a ship target detection model for detection to obtain a ship target detection result;
s3-3: and if the ship target detection result is that the current preselection frame contains the ship target, adjusting the corresponding preselection frame to obtain a prediction frame, and outputting a ship target detection result image with the prediction frame.
The invention has the beneficial effects that:
1) In the method, the ship image data set is obtained from a satellite ship image data set covering a variety of complex environments, which improves the richness of the data set; the ship image data set is then enhanced, which improves the generalization capability of the model.
2) The RetinaShip model is established based on the RetinaNet algorithm; the neural network automatically extracts image features and detects and identifies the target, which improves detection precision.
Other advantageous effects of the present invention will be described in detail in the detailed description.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a ship detection method based on a RetinaNet algorithm.
Fig. 2 is an example of a satellite vessel image dataset.
FIG. 3 is an example of a pre-processed ship image.
Fig. 4 is an example of a final vessel image with preselected boxes.
FIG. 5 is a flow chart of a K-Means clustering method.
Fig. 6 is a sample species diagram.
Fig. 7 is a schematic structural diagram of the RetinaShip model.
Fig. 8 is an example of a ship target detection result image.
FIG. 9 is a graph of model analysis.
Fig. 10 is a structure diagram of an SSH module.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. Functional details disclosed herein are merely illustrative of example embodiments of the invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments of the invention. When the terms "comprises," "comprising," "includes," and/or "including" are used herein, they specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or the figures may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
It should be understood that specific details are provided in the following description to facilitate a thorough understanding of example embodiments. However, it will be understood by those of ordinary skill in the art that the example embodiments may be practiced without these specific details. For example, systems may be shown in block diagrams in order not to obscure the examples in unnecessary detail. In other instances, well-known processes, structures and techniques may be shown without unnecessary detail in order to avoid obscuring example embodiments.
Example 1
A ship detection method based on RetinaNet algorithm is shown in figure 1, and comprises the following steps:
s1: a ship target detection model is established based on a RetinaNet algorithm, and the specific method comprises the following steps:
s1-1: acquiring an initial ship image dataset based on the satellite ship image datasets shown in (a) to (d) of fig. 2, and preprocessing the initial ship image dataset to obtain a preprocessed ship image dataset, wherein the preprocessed ship images are shown in (a) to (h) of fig. 3;
the satellite and ship image data set comprises 192556 JPEG images extracted from a satellite, the resolution is within 1.5m, and the picture size is 768 x 768; the images comprise mail ships, commercial ships and fishing ships with various shapes and sizes, a large number of images do not comprise ships, some images can comprise a plurality of ships, the ships in the images are different in size and can be obtained at different shooting places (open sea and wharf) and under different weathers (dark night, rainy days and foggy days), the types of the images are rich, and the images are very suitable for ship detection and analysis;
the preprocessing comprises image format processing and data enhancement processing which are sequentially carried out, wherein the data enhancement processing comprises geometric transformation processing, optical transformation processing, noise addition processing and normalization processing which are carried out on the image;
the geometric transformation processing enriches the positions, scales and so on of the objects appearing in the image so that the model satisfies translation invariance and scale invariance; typical operations include translation, flipping, scaling and cropping. In satellite images the orientation of a ship, and in particular the direction of the bow, can be arbitrary, so horizontal and vertical flipping expands the data set well; during training, each iteration has a certain probability of flipping the image horizontally or vertically or rotating it by 90 degrees;
the optical transformation processing is used to add images under different illumination and scene conditions; typical operations include random perturbation of brightness, contrast, hue and saturation, and transformation between channel color gamuts;
the noise addition processing adds a certain amount of disturbance, such as Gaussian noise, to the original image, so that the model is robust to the noise it may encounter, improving its generalization capability;
after the normalization of the image is completed, the image is cropped and scaled to a fixed size;
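As a concrete illustration of the preprocessing described above, the following is a minimal NumPy sketch of the data enhancement pipeline (random flips and 90-degree rotation, brightness/contrast perturbation, Gaussian noise, normalization and resizing to a fixed size); the probabilities, perturbation ranges, noise level and the 768 × 768 target size are assumptions chosen for illustration rather than values fixed by this embodiment.

```python
import numpy as np
import cv2  # used here only for resizing


def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Geometric, optical, noise and normalization steps applied to one H x W x 3 uint8 image.

    NOTE: in a real pipeline the ground-truth frames must be flipped/rotated
    consistently with the image; that bookkeeping is omitted here.
    """
    # Geometric transformation: random horizontal/vertical flip and 90-degree rotation
    if rng.random() < 0.5:
        image = np.fliplr(image)
    if rng.random() < 0.5:
        image = np.flipud(image)
    if rng.random() < 0.5:
        image = np.rot90(image)

    img = image.astype(np.float32)

    # Optical transformation: random contrast (multiplicative) and brightness (additive) perturbation
    img = np.clip(img * rng.uniform(0.8, 1.2) + rng.uniform(-20.0, 20.0), 0.0, 255.0)

    # Noise addition: Gaussian noise for robustness
    img = np.clip(img + rng.normal(0.0, 5.0, img.shape), 0.0, 255.0)

    # Normalization to [0, 1], then scaling to a fixed input size
    img /= 255.0
    return cv2.resize(img, (768, 768), interpolation=cv2.INTER_LINEAR)


# Usage: out = augment(image, np.random.default_rng(0))
```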
s1-2: obtaining a pre-selection frame of each sample in the preprocessed ship image data set by using a K-Means clustering method to obtain a final ship image data set with the pre-selection frame, wherein the final ship image with the pre-selection frame is shown in FIG. 4;
K-Means clustering replaces manual design and automatically generates a group of anchor pre-selection frames better suited to the data set. K-Means is a typical clustering algorithm: K initial objects are selected as seeds, every other object is assigned to the category whose center is closest, the mean of each category is then taken as the new cluster center, and the process is repeated until no category changes any more, as shown in fig. 5 (a code sketch is given after the steps below). The specific method is as follows:
a-1: selecting K frame values (w, h) of the preprocessed ship image as an initial clustering center, wherein the (w, h) is the width and the height of a real frame of the normalized ship image;
a-2: calculating the distance from each real box to each cluster center, and then assigning the bounding box to the closest class, where the distance is IoU distance in this embodiment, and the formula is:
d(box,centroid)=1-IoU(box,centroid)
in the formula, d(box, centroid) is the distance from the real frame to the cluster center; IoU(box, centroid) is the intersection-over-union of the real frame and the cluster center;
a-3: updating the clustering center of each category, and taking the average value of all frames in the category as a new clustering center;
a-4: repeating the steps A-2 to A-3 until the cluster center is not changed or the categories of all the borders are not changed;
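Steps A-1 to A-4 can be summarized in the following sketch, which clusters the normalized (w, h) pairs of the real frames using the d = 1 - IoU distance; the function names, the random initialization and the iteration cap are illustrative assumptions. The returned average IoU is the quality measure referred to below.

```python
import numpy as np


def wh_iou(boxes: np.ndarray, centers: np.ndarray) -> np.ndarray:
    """IoU between (w, h) pairs, treating the frames as if they shared one corner."""
    inter = np.minimum(boxes[:, None, 0], centers[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], centers[None, :, 1])
    area_b = boxes[:, 0] * boxes[:, 1]
    area_c = centers[:, 0] * centers[:, 1]
    return inter / (area_b[:, None] + area_c[None, :] - inter)


def kmeans_preselect_frames(boxes: np.ndarray, k: int = 9, iters: int = 300, seed: int = 0):
    """Cluster ground-truth (w, h) pairs with the d = 1 - IoU distance (steps A-1 to A-4)."""
    boxes = boxes.astype(np.float64)
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), size=k, replace=False)]   # A-1: initial cluster centers
    assign = np.full(len(boxes), -1)
    for _ in range(iters):
        dist = 1.0 - wh_iou(boxes, centers)                          # A-2: IoU distance to each center
        new_assign = dist.argmin(axis=1)                             # assign each frame to the closest center
        if np.array_equal(new_assign, assign):                       # A-4: stop when nothing changes
            break
        assign = new_assign
        for j in range(k):                                           # A-3: new center = mean of its frames
            if np.any(assign == j):
                centers[j] = boxes[assign == j].mean(axis=0)
    avg_iou = wh_iou(boxes, centers).max(axis=1).mean()              # quality measure: average IoU
    return centers, avg_iou
```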
each sample has 9 pre-selection frames, and the size of a pre-selection frame is determined by the down-sampling stride of the image; for example, after three down-samplings the image is scaled to 1/8 of its original size, so the stride is 8 and 8 × 8 is used as the base scale of that layer's pre-selection frames; on this basis the base scale is multiplied by the scaling factors 2^(0/3), 2^(1/3) and 2^(2/3), and three pre-selection frames with different aspect ratios are generated for each scale, yielding 9 pre-selection frames of different sizes and aspect ratios in total;
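For the scale construction just described, a minimal sketch of generating the 9 pre-selection frame sizes of one feature layer from its stride is given below; the aspect ratios {0.5, 1, 2} are an assumption for illustration, since this embodiment fixes only the three scaling factors 2^(0/3), 2^(1/3) and 2^(2/3).

```python
import numpy as np


def layer_preselect_sizes(stride: int) -> np.ndarray:
    """Return the 9 (w, h) pre-selection frame sizes of a feature layer with the given stride.

    The base scale equals the stride (e.g. 8 x 8 after three 2x down-samplings),
    multiplied by 2^(0/3), 2^(1/3) and 2^(2/3) and combined with three aspect ratios.
    """
    scales = [2.0 ** (i / 3.0) for i in range(3)]   # 2^(0/3), 2^(1/3), 2^(2/3)
    ratios = [0.5, 1.0, 2.0]                        # assumed width-to-height ratios
    sizes = []
    for s in scales:
        base = stride * s                           # side of the square base frame
        for r in ratios:
            sizes.append((base * np.sqrt(r), base / np.sqrt(r)))  # keep area, change ratio
    return np.array(sizes)                          # shape (9, 2)


# Usage: layer_preselect_sizes(8) gives the 9 sizes of the stride-8 layer.
```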
multiple groups of pre-selection frames are obtained through the K-Means algorithm, and the group with the highest score is selected as the final pre-selection frames of the whole data set; the quality of the pre-selection frames is measured by the average IoU between the real frames and the pre-selection frames, and the pre-selection frames obtained through K-Means clustering reach an accuracy of 68.6%;
besides the imbalance between positive and negative samples, ship target detection also suffers from a serious imbalance between hard and easy samples; according to how easy a sample is to learn and its degree of overlap with the labels, the sample types include simple positive samples, hard positive samples, simple negative samples and hard negative samples, as shown in fig. 6;
s1-3: establishing a RetinaShip model shown in FIG. 7 based on a RetinaNet algorithm, wherein the RetinaShip model comprises a residual error network ResNet, a feature pyramid network FPN, a safety shield SSH module group and a classification regression sub-network group which are sequentially connected, the SSH module group comprises 3 SSH modules which are arranged in parallel, the classification regression sub-network group comprises 3 classification regression sub-networks which are arranged in parallel, the SSH modules are connected with the classification regression sub-networks in a one-to-one correspondence manner, and each classification regression sub-network comprises a boundary frame regression sub-network and a target classification sub-network which are arranged in parallel;
the RetinaShip model uses an FPN structure: a feature pyramid is built on the three basic feature layers output by the backbone network, 1×1 convolutions adjust the number of channels, up-sampling is then used to fuse the features, and finally three effective feature layers are output for training and prediction; to further extract features, the RetinaShip model uses the SSH module shown in fig. 10 to enlarge the receptive field; SSH extracts features with three parallel convolution branches: a single 3×3 convolution, and stacks of two and three 3×3 convolutions that replace 5×5 and 7×7 convolutions respectively; the branch outputs are then concatenated along the channel dimension;
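A minimal PyTorch sketch of the SSH module described above follows: one 3×3 branch plus stacked 3×3 branches standing in for 5×5 and 7×7 convolutions, concatenated along the channel dimension; the half/quarter/quarter channel split and the shared first 3×3 of the two stacked branches follow the common SSH design and are assumptions here.

```python
import torch
import torch.nn as nn


def conv_bn_relu(in_ch: int, out_ch: int) -> nn.Sequential:
    """3x3 convolution + batch norm + ReLU that keeps the spatial size."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class SSH(nn.Module):
    """SSH context module: 3x3, (3x3 -> 3x3) and (3x3 -> 3x3 -> 3x3) branches, concatenated."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        assert out_ch % 4 == 0
        self.branch3 = conv_bn_relu(in_ch, out_ch // 2)         # single 3x3 branch
        self.shared = conv_bn_relu(in_ch, out_ch // 4)          # first 3x3 shared by the two stacks
        self.branch5 = conv_bn_relu(out_ch // 4, out_ch // 4)   # two stacked 3x3 ~ one 5x5
        self.branch7 = nn.Sequential(                           # three stacked 3x3 ~ one 7x7
            conv_bn_relu(out_ch // 4, out_ch // 4),
            conv_bn_relu(out_ch // 4, out_ch // 4),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shared = self.shared(x)
        return torch.cat([self.branch3(x), self.branch5(shared), self.branch7(shared)], dim=1)
```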
s1-4: inputting the final ship image data set into a RetinaShip model for training to obtain a ship target detection model, wherein the specific method comprises the following steps:
s1-4-1: performing balance processing on all samples in the final ship image data set, namely obtaining the intersection-over-union (IoU) value of every negative sample in the final ship image data set, and sorting and screening all negative samples according to their IoU values to obtain screened negative samples;
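This embodiment does not fix how many negative samples are retained after sorting; the sketch below assumes the common choice of keeping the negatives with the highest IoU values (the hard negatives closest to being positive) at an assumed 3:1 negative-to-positive ratio.

```python
import numpy as np


def screen_negatives(neg_iou: np.ndarray, num_pos: int, ratio: int = 3) -> np.ndarray:
    """Sort negative samples by IoU and keep the hardest ones to balance the data.

    `neg_iou` holds, for each negative pre-selection frame, its highest IoU with any
    real frame; frames with a larger IoU are harder negatives. The frames are sorted
    by IoU in descending order and only the top `ratio * num_pos` indices are kept.
    """
    keep = min(len(neg_iou), ratio * num_pos)
    order = np.argsort(-neg_iou)      # descending IoU: hardest negatives first
    return order[:keep]               # indices of the screened negative samples
```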
s1-4-2: inputting the screened negative sample and other samples into a RetinaShip model for training to obtain an initial ship target detection model;
the RetinaShip model finally outputs feature maps at three scales; after each feature layer and each pre-selection frame is processed, two outputs are obtained: the 4 adjustment parameters of each pre-selection frame and the confidences of the corresponding K categories;
the output of the RetinaShip model consists of two parts, classification prediction and regression prediction; the classification prediction judges whether a pre-selection frame contains a ship target, for which RetinaNet uses softmax; here a 1×1 convolution adjusts the number of channels to 2A, i.e. twice the number of prior frames, which is used to judge the probability that each pre-selection frame contains a ship; the regression prediction adjusts the pre-selection frame to obtain a prediction frame, which requires four parameters, so a 1×1 convolution adjusts the number of channels to 4A, containing the 4 adjustment parameters of each pre-selection frame;
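A minimal PyTorch sketch of the two prediction heads just described, each a 1×1 convolution mapping a feature layer to 2A (classification) or 4A (regression) channels, with A = 9 pre-selection frames per location; applying softmax over the two classification channels of each frame follows the description above, while the layer names are illustrative.

```python
import torch
import torch.nn as nn


class ClassRegHead(nn.Module):
    """Classification head (2A channels) and regression head (4A channels) on one feature layer."""

    def __init__(self, in_ch: int, num_anchors: int = 9):
        super().__init__()
        self.num_anchors = num_anchors
        self.cls = nn.Conv2d(in_ch, 2 * num_anchors, kernel_size=1)  # ship / no-ship per frame
        self.reg = nn.Conv2d(in_ch, 4 * num_anchors, kernel_size=1)  # 4 adjustment parameters per frame

    def forward(self, x: torch.Tensor):
        n, _, h, w = x.shape
        cls = self.cls(x).permute(0, 2, 3, 1).reshape(n, h * w * self.num_anchors, 2)
        reg = self.reg(x).permute(0, 2, 3, 1).reshape(n, h * w * self.num_anchors, 4)
        cls = torch.softmax(cls, dim=-1)  # probability that each pre-selection frame contains a ship
        return cls, reg
```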
s1-4-3: optimizing the initial ship target detection model by using the focal loss function to obtain an optimal ship target detection model;
a loss function can be established from the matching with the real frames; the loss function comprises two parts, a classification loss and a regression loss; to address the imbalance in the number of samples in the classification sub-network, the classification loss is calculated with the focal loss;
the formula for the focus loss function is:
lcls=-αt(1-pt)γlog(pt)
where lcls is the focus loss function; p is a radical oftProbability of being a positive sample; alpha is alphatWeights for different samples; gamma is a sample label;
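A minimal sketch of the focal loss formula above; γ = 2 is the commonly used default and is an assumption here, since this embodiment does not fix its value.

```python
import torch


def focal_loss(pt: torch.Tensor, alpha_t: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """l_cls = -alpha_t * (1 - pt)^gamma * log(pt), averaged over the samples.

    `pt` is the predicted probability of the true class of each sample and `alpha_t`
    the per-sample weight (commonly alpha for positives and 1 - alpha for negatives).
    """
    pt = pt.clamp(min=1e-7)  # numerical stability for log
    return (-alpha_t * (1.0 - pt) ** gamma * torch.log(pt)).mean()
```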
the regression loss was calculated using the Smooth L1 loss function, and the formula is:
wherein lbox is the Smooth L1 loss value;as a Smooth L1 loss function; t is tx、ty、tw、thAdjusting parameters of a prediction frame obtained by model learning relative to a prior frame at four positions; x, y, w and h are respectively four position indication quantities of the prediction frame;
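A minimal sketch of the Smooth L1 regression loss as reconstructed above; the elementwise definition with threshold 1 and the averaging over matched frames are standard choices assumed here.

```python
import torch


def smooth_l1(diff: torch.Tensor) -> torch.Tensor:
    """Elementwise Smooth L1: 0.5 * x^2 for |x| < 1, |x| - 0.5 otherwise."""
    abs_diff = diff.abs()
    return torch.where(abs_diff < 1.0, 0.5 * abs_diff ** 2, abs_diff - 0.5)


def box_regression_loss(t_pred: torch.Tensor, t_target: torch.Tensor) -> torch.Tensor:
    """Sum the Smooth L1 loss over the four adjustment parameters (t_x, t_y, t_w, t_h).

    `t_pred` and `t_target` have shape (N, 4): the predicted and target adjustment
    parameters of the N matched pre-selection frames; the result is averaged over frames.
    """
    return smooth_l1(t_pred - t_target).sum(dim=1).mean()
```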
with the loss function, the final ship target detection model is established; its parameters are optimized through back-propagation, and the final ship target detection model parameters are obtained and used for ship detection;
s2: acquiring an image to be detected of a ship target, and preprocessing the ship target detection image;
the preprocessing comprises image format processing and data enhancement processing which are sequentially carried out, wherein the data enhancement processing comprises geometric transformation processing, optical transformation processing, noise addition processing and normalization processing which are carried out on the image;
s3: inputting an image to be detected of a ship target into a ship target detection model to obtain a ship target detection result image, wherein the specific method comprises the following steps:
s3-1: obtaining a preselection frame of an image to be detected of a ship target;
s3-2: inputting the ship target to-be-detected image with the pre-selection frame into a ship target detection model for detection to obtain a ship target detection result;
ship target detection is performed inside the pre-selection frame to obtain a ship target detection result, namely that the current pre-selection frame either contains a ship target or does not contain a ship target;
s3-3: if the ship target detection result indicates that the current preselection frame contains the ship target, adjusting the corresponding preselection frame according to 4 adjustment parameters of each preselection frame to obtain a prediction frame, and outputting ship target detection images shown in (a) to (e) of fig. 8;
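The decoding that turns the 4 adjustment parameters into a prediction frame is not spelled out in this embodiment; the sketch below uses the parametrization commonly paired with RetinaNet-style detectors (center offsets scaled by the prior size, exponential width/height) as an assumption.

```python
import numpy as np


def decode(priors: np.ndarray, deltas: np.ndarray) -> np.ndarray:
    """Apply (t_x, t_y, t_w, t_h) to prior frames given as (cx, cy, w, h).

    Returns prediction frames as (x1, y1, x2, y2); the offset convention is the usual
    Faster R-CNN / RetinaNet one and is assumed here for illustration.
    """
    cx = priors[:, 0] + deltas[:, 0] * priors[:, 2]
    cy = priors[:, 1] + deltas[:, 1] * priors[:, 3]
    w = priors[:, 2] * np.exp(deltas[:, 2])
    h = priors[:, 3] * np.exp(deltas[:, 3])
    return np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
```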
as shown in (a) and (b) of fig. 9, with a confidence threshold of 0.5 the precision reaches 93.33%, the recall reaches 88.87%, the average precision reaches 93.28%, and the detection speed reaches 15 FPS; the experimental results show that the RetinaShip model suppresses the generation of negative samples well, so the false detection rate is greatly reduced, and the pre-selection frames generated by K-Means clustering greatly improve the confidence scores and the accuracy of target localization; when the IoU threshold is higher, the improvement in the detection performance of the RetinaShip model is even more obvious; overall, the RetinaShip model used in this scheme reaches an average precision of 93.28% in ship target detection.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and they may alternatively be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, or fabricated separately as individual integrated circuit modules, or fabricated as a single integrated circuit module from multiple modules or steps. Thus, the present invention is not limited to any specific combination of hardware and software.
The embodiments described above are merely illustrative; units described as separate components may or may not be physically separate, and a component displayed as a unit may or may not be a physical unit, i.e. it may be located in one place or distributed over a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment, which can be understood and implemented by those skilled in the art without inventive effort.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: modifications of the technical solutions described in the embodiments or equivalent replacements of some technical features may still be made. And such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
The present invention is not limited to the above-described alternative embodiments, and anyone may obtain various other forms of products in light of the present invention. The above detailed description should not be taken as limiting the scope of the invention, which is defined by the claims; the description is to be interpreted accordingly.
Claims (10)
1. A ship detection method based on RetinaNet algorithm is characterized in that: the method comprises the following steps:
s1: establishing a ship target detection model based on a RetinaNet algorithm;
s2: acquiring an image to be detected of a ship target;
s3: and inputting the image to be detected of the ship target into a ship target detection model to obtain a ship target detection result image.
2. The ship detection method based on RetinaNet algorithm according to claim 1, wherein: the specific method of step S1 is as follows:
s1-1: acquiring an initial ship image dataset based on the satellite ship image dataset, and preprocessing the initial ship image dataset to obtain a preprocessed ship image dataset;
s1-2: obtaining a preselection frame of each sample in the preprocessed ship image data set to obtain a final ship image data set with the preselection frame;
s1-3: establishing a RetinaShip model based on a RetinaNet algorithm;
s1-4: and inputting the final ship image data set into a RetinaShip model for training to obtain a ship target detection model.
3. The ship detection method based on RetinaNet algorithm according to claim 2, characterized in that: in step S1-1, the preprocessing includes image format processing and data enhancement processing performed in sequence.
4. The ship detection method based on RetinaNet algorithm according to claim 3, wherein: the data enhancement processing comprises geometric transformation processing, optical transformation processing, noise increasing processing and normalization processing which are carried out on the image.
5. The ship detection method based on RetinaNet algorithm according to claim 2, characterized in that: in the step S1-2, a pre-selection frame of each sample in the preprocessed ship image data set is obtained by using a K-Means clustering method;
the types of the samples comprise simple positive samples, difficult positive samples, simple negative samples and difficult negative samples.
6. The ship detection method based on RetinaNet algorithm according to claim 2, characterized in that: in the step S1-3, the RetinaShip model includes a residual network ResNet, a feature pyramid network FPN, an SSH (Single Stage Headless) module group, and a classification regression sub-network group, which are sequentially connected.
7. The ship detection method based on RetinaNet algorithm according to claim 6, wherein: the SSH module group comprises a plurality of SSH modules which are arranged in parallel, the classification regression sub-network group comprises a plurality of classification regression sub-networks which are arranged in parallel, the plurality of SSH modules are connected with the classification regression sub-networks in a one-to-one correspondence mode, and each classification regression sub-network comprises a boundary frame regression sub-network and a target classification sub-network which are arranged in parallel.
8. The ship detection method based on RetinaNet algorithm according to claim 2, characterized in that: the specific method of the step S1-4 comprises the following steps:
s1-4-1: performing balance processing on all samples in the final ship image data set, namely obtaining the intersection-over-union (IoU) value of every negative sample in the final ship image data set, and sorting and screening all negative samples according to their IoU values to obtain screened negative samples;
s1-4-2: inputting the screened negative sample and other samples into a RetinaShip model for training to obtain an initial ship target detection model;
s1-4-3: and optimizing the initial ship target detection model by using the focal loss function to obtain an optimal ship target detection model.
9. The ship detection method based on RetinaNet algorithm according to claim 8, wherein: the formula of the focal loss function is as follows:
l_cls = -α_t (1 - p_t)^γ log(p_t)
where l_cls is the focal loss; p_t is the probability that the sample is positive; α_t is the weight for different samples; γ is the focusing parameter.
10. The ship detection method based on RetinaNet algorithm according to claim 1, wherein: the specific method of step S3 is as follows:
s3-1: obtaining a preselection frame of an image to be detected of a ship target;
s3-2: inputting the ship target to-be-detected image with the pre-selection frame into a ship target detection model for detection to obtain a ship target detection result;
s3-3: and if the ship target detection result is that the current preselection frame contains the ship target, adjusting the corresponding preselection frame to obtain a prediction frame, and outputting a ship target detection result image with the prediction frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110781771.9A CN113505699A (en) | 2021-07-09 | 2021-07-09 | Ship detection method based on RetinaNet algorithm |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110781771.9A CN113505699A (en) | 2021-07-09 | 2021-07-09 | Ship detection method based on RetinaNet algorithm |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113505699A true CN113505699A (en) | 2021-10-15 |
Family
ID=78012688
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110781771.9A Pending CN113505699A (en) | 2021-07-09 | 2021-07-09 | Ship detection method based on RetinaNet algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113505699A (en) |
- 2021-07-09: CN202110781771.9A filed in China; published as CN113505699A (status: Pending)
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150202256A1 (en) * | 2012-07-06 | 2015-07-23 | Kyoto Prefectural Public University Corporation | Differentiation marker and differentiation control of eye cell |
US20200012283A1 (en) * | 2018-07-05 | 2020-01-09 | Vu Xuan Nguyen | System and method for autonomous maritime vessel security and safety |
CN110390691A (en) * | 2019-06-12 | 2019-10-29 | 合肥合工安驰智能科技有限公司 | A kind of ore scale measurement method and application system based on deep learning |
CN111368671A (en) * | 2020-02-26 | 2020-07-03 | 电子科技大学 | SAR image ship target detection and identification integrated method based on deep learning |
CN111597941A (en) * | 2020-05-08 | 2020-08-28 | 河海大学 | A target detection method for dam defect images |
CN111914935A (en) * | 2020-08-03 | 2020-11-10 | 哈尔滨工程大学 | Ship image target detection method based on deep learning |
CN112507818A (en) * | 2020-11-25 | 2021-03-16 | 奥比中光科技集团股份有限公司 | Illumination estimation method and system based on near-infrared image |
CN112464883A (en) * | 2020-12-11 | 2021-03-09 | 武汉工程大学 | Automatic detection and identification method and system for ship target in natural scene |
Non-Patent Citations (3)
Title |
---|
MAHYAR NAJIBI et al.: "SSH: Single Stage Headless Face Detector", arXiv *
TSUNG-YI LIN et al.: "Focal Loss for Dense Object Detection", arXiv *
JIA Xuqiang: "Research on ship target detection method based on deep learning", China Excellent Master's and Doctoral Dissertations Full-text Database (Master), Engineering Science and Technology II *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117523390A (en) * | 2023-11-07 | 2024-02-06 | 中国人民解放军战略支援部队航天工程大学 | A method and device for ship target detection in SAR images and its model construction |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103886285B (en) | Optical remote sensing image Ship Detection under priori geography information auxiliary | |
CN113486819A (en) | A Ship Target Detection Method Based on YOLOv4 Algorithm | |
CN108898065A (en) | Candidate regions quickly screen and the depth network Ship Target Detection method of dimension self-adaption | |
CN112348758B (en) | Optical remote sensing image data enhancement method and target identification method | |
CN114565824B (en) | Single-stage rotating ship detection method based on full convolution network | |
CN113628180A (en) | Semantic segmentation network-based remote sensing building detection method and system | |
CN116109942A (en) | A method for ship target detection in visible light remote sensing images | |
CN113538387B (en) | Multi-scale inspection image identification method and device based on deep convolutional neural network | |
CN112733686A (en) | Target object identification method and device used in image of cloud federation | |
CN117292269A (en) | Ship image information extraction method and system based on satellite remote sensing | |
Huang et al. | A correlation context-driven method for sea fog detection in meteorological satellite imagery | |
CN114140753A (en) | Method, device and system for marine ship identification | |
CN116363535A (en) | Ship detection method in unmanned aerial vehicle aerial image based on convolutional neural network | |
CN108764145A (en) | One kind is towards Dragon Wet Soil remote sensing images density peaks clustering method | |
CN114663743B (en) | Ship target re-identification method, terminal equipment and storage medium | |
CN110069987B (en) | Single-stage ship detection algorithm and device based on improved VGG network | |
CN113505699A (en) | Ship detection method based on RetinaNet algorithm | |
CN114898290A (en) | Real-time detection method and system for marine ship | |
Cai et al. | Obstacle detection of unmanned surface vessel based on faster RCNN | |
CN113205139A (en) | Unmanned ship water sphere detection method based on density clustering | |
CN112633158A (en) | Power transmission line corridor vehicle identification method, device, equipment and storage medium | |
CN109871731A (en) | The method, apparatus and computer storage medium of ship detecting | |
CN117893732A (en) | Gangue detection and sorting method based on PLC and deep learning | |
CN117037042A (en) | Automatic numbering method and related device for photovoltaic strings based on visual recognition model | |
CN116797941A (en) | Marine oil spill risk source rapid intelligent identification and classification method for high-resolution remote sensing image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 2021-10-15 |