
CN111462220B - Method, device, equipment and medium for extracting shadow area of object to be detected - Google Patents

Method, device, equipment and medium for extracting shadow area of object to be detected

Info

Publication number
CN111462220B
CN111462220B
Authority
CN
China
Prior art keywords
image
detected
preset
shadow
shadow area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010262682.9A
Other languages
Chinese (zh)
Other versions
CN111462220A (en)
Inventor
邹冲
朱超杰
侯鑫
汪飙
吴海山
殷磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd
Priority to CN202010262682.9A
Publication of CN111462220A
Application granted
Publication of CN111462220B
Legal status: Active


Classifications

    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G01B 11/2433: Measuring contours or curvatures by optical means, for measuring outlines by shadow casting
    • G01B 11/28: Measuring areas by optical means
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/136: Segmentation; Edge detection involving thresholding
    • G06T 2207/10024: Color image
    • G06T 2207/10032: Satellite or aerial image; Remote sensing
    • G06T 2207/20076: Probabilistic image processing
    • G06T 2207/20081: Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract


The present application discloses a method, device, equipment and medium for extracting the shadow area of an object to be detected, the method comprising: receiving a remote sensing image, obtaining an extension image of the object to be detected in the remote sensing image; performing a preset semantic segmentation process on the extension image to obtain a target initial shadow image of the object to be detected; performing a noise removal process on the target initial shadow image to obtain the shadow area of each object to be detected. The present application solves the technical problem that the effect of extracting the shadow area of an object to be detected is unstable in the prior art.

Description

Method, device, equipment and medium for extracting shadow area of object to be detected
Technical Field
The application relates to the technical field of artificial intelligence of financial science and technology (Fintech), in particular to a method, a device, equipment and a medium for extracting shadow areas of objects to be detected.
Background
With the continuous development of financial technology, and of internet finance in particular, more and more technologies are being applied in the finance field. At the same time, the finance industry places ever higher requirements on these technologies, one example being the extraction of the shadow area of an object to be detected.
The present era is an era of crude oil: knowing the crude oil reserve data of other countries in real time bears directly on national security. At present, crude oil reserves are estimated or predicted by monitoring the ports of various countries via satellite to obtain remote sensing images, from which the capacity of each large object to be detected in a port, such as a large oil tank, is calculated. Shadow area extraction is the most important link in calculating the capacity of each large oil tank in a port. At present, the shadow area of a large oil tank is usually extracted by color threshold segmentation in the RGB (Red, Green, Blue) color space, which makes the shadow extraction effect of the object to be detected unstable.
Disclosure of Invention
The application mainly aims to provide a method, a device, equipment and a medium for extracting the shadow area of an object to be detected, so as to solve the prior-art technical problem that the extraction of the shadow area of an object to be detected is unstable.
In order to achieve the above object, the present application provides a method for extracting a shadow area of an object to be detected, the method for extracting the shadow area of the object to be detected comprising:
Receiving a remote sensing image, and acquiring an exterior image of an object to be detected in the remote sensing image;
Performing preset semantic segmentation processing on the exterior image to obtain a target initial shadow image of the object to be detected;
And performing noise removal processing on the target initial shadow image to obtain the shadow area of each object to be detected.
Optionally, the step of performing preset semantic segmentation processing on the extension image to obtain the target initial shadow image of the object to be detected includes:
Inputting the extension image into a preset semantic segmentation network model to perform preset semantic segmentation processing on the extension image so as to obtain a target initial shadow image of the object to be detected;
The preset semantic segmentation network model is a model of a predicted shadow image obtained by training a preset basic model to be trained based on preset object image data to be detected with a preset shadow tag.
Optionally, before the step of inputting the extension image into a preset semantic segmentation network model to obtain the target initial shadow image of each object to be detected, the method includes:
Acquiring preset object image data to be detected, and performing iterative training on the preset basic model to be trained based on the preset object image data to be detected so as to update the preset prediction model to be trained through iterative training;
Judging whether the preset to-be-trained prediction model after iterative training is updated meets preset training completion conditions, and if so, obtaining the preset semantic segmentation network model.
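As an illustration of the iterative-training and completion-check steps above, the loop below sketches the control flow in Python. It is not the patent's implementation; the specific completion conditions (a loss tolerance and an iteration cap) and all names are illustrative assumptions.

```python
def train_until_done(update_step, max_iters=100, loss_tol=0.01):
    """Run training updates until a preset completion condition holds.

    update_step() performs one iterative-training update of the model and
    returns the current loss. Both completion conditions used here (loss
    below a tolerance, or an iteration cap) are illustrative assumptions,
    not the patent's actual criteria.
    """
    loss = float("inf")
    for it in range(1, max_iters + 1):
        loss = update_step()
        if loss <= loss_tol:
            return it, loss  # preset training-completion condition met
    return max_iters, loss

# A stand-in update_step whose loss shrinks on each call:
losses = iter([0.5, 0.1, 0.005])
result = train_until_done(lambda: next(losses))
```

Here `result` records at which iteration the completion condition was satisfied and the loss at that point.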
Optionally, the step of obtaining the preset object image data to be detected, iteratively training the preset basic model to be trained based on the preset object image data to be detected, so as to update the preset prediction model to be trained by iterative training includes:
Acquiring preset object image data to be detected, and inputting the preset object image data to be detected into the preset basic model to be trained to obtain predictive probability image data, wherein the preset basic model to be trained comprises a layer jump connecting layer;
Acquiring preset shadow tag data of the image data of the object to be detected, and comparing the preset shadow tag data with the predictive probability image data to obtain difference data;
And updating the preset prediction model to be trained according to the difference data iterative training.
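The comparison of the preset shadow tag data with the predictive probability image data to obtain "difference data" can be sketched as a pixel-wise loss computation. The sketch below uses binary cross-entropy, a common choice for per-pixel shadow/background prediction; the patent does not name the loss function, so this choice and all names are assumptions.

```python
import math

def difference_data(pred_prob, shadow_label, eps=1e-7):
    """Pixel-wise binary cross-entropy between a predicted shadow
    probability map and a binary shadow label: the 'difference data'
    that would drive the iterative model update."""
    total = 0.0
    count = 0
    for prob_row, label_row in zip(pred_prob, shadow_label):
        for p, y in zip(prob_row, label_row):
            p = min(max(p, eps), 1.0 - eps)  # clamp for numerical safety
            total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
            count += 1
    return total / count
```

A small difference value means the predictive probability image is close to the preset shadow tag.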
Optionally, the step of receiving the remote sensing image and obtaining an exterior image of the object to be detected in the remote sensing image includes:
receiving a remote sensing image, and acquiring an external rectangular frame image of an object to be detected in the remote sensing image so as to acquire the position information of the external rectangular frame image;
and cutting the image which comprises the external rectangular frame image and is a preset multiple of the external rectangular frame image from the remote sensing image by taking the external rectangular frame image as a center according to the position information of the external rectangular frame image, and setting the image which comprises the external rectangular frame image and is a preset multiple of the external rectangular frame image as the external image.
Optionally, the step of performing noise removal processing on the target initial shadow image to obtain the shadow area of each object to be detected includes:
Determining an exterior rectangular frame of each object to be detected according to the exterior image of each object to be detected;
determining an intersecting image intersecting with the extension rectangular frame in the initial shadow image, and removing the intersecting image to obtain a first processed image;
And acquiring a first preset shadow area threshold value, and extracting an image with an area larger than the first preset shadow area threshold value from the first processed image to obtain the shadow area of each object to be detected.
Optionally, the step of obtaining a first preset shadow area threshold, extracting an image with an area greater than the first preset shadow area threshold from the first processed image, and obtaining the shadow area of each object to be detected includes:
Acquiring a first preset shadow area threshold value, and extracting an image with an area larger than the first preset shadow area threshold value from the first processed image to obtain a second processed image;
Acquiring a second preset shadow area threshold value, extracting an image with an area larger than the second preset shadow area threshold value from the second processed image to obtain a third processed image, and extracting an image intersecting the external frame of the external frame image from the third processed image to obtain the shadow area of each object to be detected.
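The area-threshold filtering described above relies on measuring the area of each connected shadow region in the processed image. A minimal sketch of such region-area filtering (4-connected flood fill on a binary mask) is shown below; all names and the connectivity choice are illustrative assumptions, not the patent's implementation.

```python
from collections import deque

def filter_small_regions(mask, min_area):
    """Keep only connected shadow regions (4-connectivity) whose pixel
    count exceeds min_area; smaller regions are treated as noise."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    seen = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # flood-fill one region, collecting its pixels
                region, queue = [], deque([(i, j)])
                seen[i][j] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(region) > min_area:  # area threshold, as in the steps above
                    for y, x in region:
                        out[y][x] = 1
    return out
```

Applying the function twice with the first and second preset thresholds would mirror the two-stage filtering in the steps above.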
Optionally, the step of inputting the extension image into a preset semantic segmentation network model to perform preset semantic segmentation processing on the extension image to obtain the target initial shadow image of the object to be detected includes:
Preprocessing the exterior image to obtain a preprocessed image;
Inputting the preprocessed image into a preset semantic segmentation network model to perform preset semantic segmentation processing on the exterior image so as to obtain a target initial shadow image of the object to be detected.
The application also provides a device for extracting the shadow area of the object to be detected, which comprises:
the receiving module is used for receiving the remote sensing image and acquiring an exterior image of an object to be detected in the remote sensing image;
The first acquisition module is used for carrying out preset semantic segmentation processing on the extension image to obtain a target initial shadow image of the object to be detected;
And the noise removing module is used for performing noise removal processing on the target initial shadow image to obtain the shadow area of each object to be detected.
Optionally, the first acquisition module includes:
The semantic segmentation unit is used for inputting the extension image into a preset semantic segmentation network model so as to perform preset semantic segmentation processing on the extension image to obtain a target initial shadow image of the object to be detected;
The preset semantic segmentation network model is a model of a predicted shadow image obtained by training a preset basic model to be trained based on preset object image data to be detected with a preset shadow tag.
Optionally, the device for extracting the shadow area of the object to be detected includes:
the second acquisition module is used for acquiring preset object image data to be detected, and carrying out iterative training on the preset basic model to be trained based on the preset object image data to be detected so as to update the preset prediction model to be trained through iterative training;
The judging module is used for judging whether the preset to-be-trained prediction model after iterative training updating meets preset training completion conditions, and if so, obtaining the preset semantic segmentation network model.
Optionally, the second obtaining module includes:
The first acquisition unit is used for acquiring preset object image data to be detected, inputting the preset object image data to be detected into the preset basic model to be trained to obtain prediction probability image data, wherein the preset basic model to be trained comprises a layer jump connecting layer;
the second acquisition unit is used for acquiring preset shadow tag data of the image data of the object to be detected, and comparing the preset shadow tag data with the predictive probability image data to obtain difference data;
and the training unit is used for iteratively training and updating the preset prediction model to be trained according to the difference data.
Optionally, the receiving module includes:
the receiving unit is used for receiving the remote sensing image, and acquiring an external rectangular frame image of an object to be detected in the remote sensing image so as to acquire the position information of the external rectangular frame image;
And the extension unit is used for cutting, from the remote sensing image and according to the position information of the external rectangular frame image, an image that is centered on the external rectangular frame image, includes the external rectangular frame image, and is a preset multiple of the size of the external rectangular frame image, and for setting this image as the extension image.
Optionally, the noise removal module includes:
The first determining unit is used for determining an extension rectangular frame of each object to be detected according to the extension image of each object to be detected;
The second determining unit is used for determining an intersecting image intersecting the extension rectangular frame in the initial shadow image, and removing the intersecting image to obtain a first processed image;
And the third acquisition unit is used for acquiring a first preset shadow area threshold value and extracting an image with an area larger than the first preset shadow area threshold value from the first processed image, to obtain the shadow area of each object to be detected.
Optionally, the third obtaining unit includes:
a first obtaining subunit, configured to obtain a first preset shadow area threshold, extract an image with an area greater than the first preset shadow area threshold from the first processed image, and obtain a second processed image;
The second acquisition subunit is used for acquiring a second preset shadow area threshold value, extracting an image with an area larger than the second preset shadow area threshold value from the second processed image to obtain a third processed image, and extracting an image intersecting the external frame of the external frame image from the third processed image to obtain the shadow area of each object to be detected.
Optionally, the first acquisition module includes:
the preprocessing unit is used for preprocessing the extension image to obtain a preprocessed image;
The input unit is used for inputting the preprocessed image into a preset semantic segmentation network model so as to perform preset semantic segmentation processing on the exterior image, so as to obtain a target initial shadow image of the object to be detected.
The application also provides a device for extracting the shadow area of an object to be detected. The device is a physical device comprising a memory, a processor, and a program for the method of extracting the shadow area of an object to be detected, stored in the memory and runnable on the processor. When executed by the processor, the program implements the steps of the method described above.
The application also provides a medium on which a program for extracting the shadow area of an object to be detected is stored; when executed by a processor, the program implements the steps of the method for extracting the shadow area of an object to be detected described above.
The method comprises: receiving a remote sensing image and obtaining an extension image of an object to be detected in the remote sensing image; performing preset semantic segmentation processing on the extension image to obtain a target initial shadow image of the object to be detected; and performing noise removal processing on the target initial shadow image to obtain the shadow area of each object to be detected. In the application, after the extension image of the object to be detected is obtained from the remote sensing image, preset semantic segmentation processing is performed on it to obtain the target initial shadow image (the extraction accuracy of the shadow area is improved because the preset semantic segmentation processing is not influenced by natural factors), and after noise removal processing is performed on the target initial shadow image, the shadow area of each object to be detected is accurately obtained. Because the preset semantic segmentation network model is not influenced by the illumination under which the remote sensing image was acquired, an unstable shadow extraction effect is avoided.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flowchart of a method for extracting a shadow area of an object to be detected according to a first embodiment of the present application;
FIG. 2 is a flowchart of a method for extracting a shadow area of an object to be detected according to a second embodiment of the present application;
FIG. 3 is a schematic diagram of a device architecture of a hardware operating environment according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a first scene in the method for extracting shadow areas of objects to be detected according to the present application;
FIG. 5 is a schematic diagram of a second scenario illustrating a method for extracting a shadow area of an object to be detected according to the present application;
Fig. 6 is a schematic diagram of a third scenario in the method for extracting a shadow area of an object to be detected according to the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In a first embodiment of the method for extracting a shadow area of an object to be detected according to the present application, referring to fig. 1, the method for extracting a shadow area of an object to be detected includes:
step S10, receiving a remote sensing image, and acquiring an exterior image of an object to be detected in the remote sensing image;
step S20, carrying out preset semantic segmentation processing on the exterior image to obtain a target initial shadow image of the object to be detected;
step S30, performing noise removal processing on the target initial shadow image to obtain the shadow area of each object to be detected.
The method comprises the following specific steps:
step S10, receiving a remote sensing image, and acquiring an exterior image of an object to be detected in the remote sensing image;
In the present era, knowing in real time various reserve data of other countries, and crude oil reserve data in particular, bears directly on national security. In this embodiment, the object to be detected is described by taking an oil tank, especially a large oil tank, as an example (although it is not limited to an oil tank; the object to be detected may also be, for example, a container). At present, crude oil reserves are usually estimated or predicted by monitoring the crude oil ports of various countries via satellite to obtain remote sensing images, specifically port remote sensing images. In the process of calculating the capacity of each large oil tank in a port from these images, shadow area extraction is the most important link. Calculating the capacity of each large oil tank in a port can generally be divided into three steps: oil tank detection and positioning, oil tank shadow area extraction, and oil tank volume calculation, of which shadow area extraction is a key link. At present, the oil tank shadow area is often extracted by color threshold segmentation in the RGB (Red, Green, Blue) color space. This approach makes the shadow extraction heavily influenced by the illumination at the time the image was collected, so the extraction effect is unstable: when the oil tank shadow is represented in RGB color space, the thresholds calculated for images captured under different illumination vary too much (for example, a threshold calculated under ordinary light can differ excessively from, or fluctuate excessively against, one calculated at sunset), which makes the size of the extracted shadow area unstable and thereby degrades the extraction effect.
In this embodiment, a remote sensing image sent by a satellite is received, and the external frame image of each object to be detected is determined from the remote sensing image to obtain the position information of each external frame image. Specifically, if the object to be detected is a circular oil tank, the external rectangular frame image of each object to be detected is determined from the port remote sensing image; if the object to be detected is a container, the external container image of each object to be detected is determined from the remote sensing image. In this embodiment, the circular oil tank case is taken as the example. Specifically, the remote sensing image is analyzed by a preset Rotated-fast-R-CNN model (a trained model capable of accurately locating the position of the external frame image of an oil tank in a port remote sensing image) to determine the external frame image of each object to be detected and thereby obtain its position information. The position information of each external frame image can be represented as (c_x, c_y, w, h), where (c_x, c_y) is the central coordinate point of the external frame image, w is the width of the external frame, and h is its height.
In this embodiment, since the shadow area of the object to be detected needs to be extracted, an image including the shadow area needs to be extracted first, specifically, according to the position information of the circumscribed frame image, the circumscribed frame image is taken as the center, and the extension image including the shadow of the object to be detected is obtained.
Specifically, the step of receiving a remote sensing image and acquiring an exterior image of an object to be detected in the remote sensing image includes:
Step S11, receiving a remote sensing image, and acquiring an external rectangular frame image of an object to be detected in the remote sensing image so as to acquire the position information of the external rectangular frame image;
A remote sensing image sent by a satellite is received, and the external rectangular frame image of each object to be detected is determined from the remote sensing image to obtain its position information. Specifically, the remote sensing image is analyzed by the preset Rotated-fast-R-CNN model (a trained model capable of accurately locating the position of the external rectangular frame image of an oil tank in a remote sensing image) to determine the external rectangular frame image of each object to be detected and obtain its position information, which can be represented as (c_x, c_y, w, h), where (c_x, c_y) is the central coordinate point of the external rectangular frame image, w is the width of the external rectangular frame, and h is its height.
And step S12, cutting the image which comprises the external rectangular frame image and has the size of a preset multiple of the external rectangular frame image from the remote sensing image by taking the external rectangular frame image as a center according to the position information of the external rectangular frame image, and setting the image which comprises the external rectangular frame image and has the size of the preset multiple of the external rectangular frame image as the extension image.
Specifically, according to the position information of each circumscribed rectangular frame image, image extension is performed for each object to be detected in the remote sensing image. Because there are a plurality of objects to be detected in a remote sensing image, when a certain object to be detected is extended (taking its circumscribed rectangular frame image as the center, an image that includes the circumscribed rectangular frame image and is a preset multiple of its size is cut from the remote sensing image), the result may also include images or shadow images of other objects to be detected. The image that includes the circumscribed rectangular frame image and is a preset multiple of its size is set as the extension image. As shown in fig. 6, (a) is the circumscribed rectangular frame image before extension, and (b) is the extension image after extension.
In this embodiment, the image including the external rectangular frame image, whose size is a preset multiple of the external rectangular frame image, is cut from the remote sensing image with the external rectangular frame image as its center. The preset multiple may be between 1 and 2; in particular, the preset expansion multiple may be 1.8. It is chosen so that the cut image contains the complete shadow of the object to be detected (i.e., to avoid an incomplete shadow) while not taking in excessive surrounding image content, which would increase the processing load. After extension, the width of the extended rectangular frame is width = (1 + ratio) × w and its height is height = (1 + ratio) × h, where ratio is the difference between the preset multiple and 1, and the new rectangular frame can be represented as (new_x, new_y, new_w, new_h), as shown in fig. 4.
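The expansion formulas above can be written directly in code. The sketch below assumes the box is given as a center point with width and height, matching the (c_x, c_y, w, h) representation used earlier; the function name is illustrative.

```python
def expand_box(c_x, c_y, w, h, ratio=0.8):
    """Expand a circumscribed rectangle (center (c_x, c_y), size w x h)
    to (1 + ratio) times its size, keeping the same center.
    ratio=0.8 corresponds to the preset multiple of 1.8 mentioned above."""
    new_w = (1 + ratio) * w
    new_h = (1 + ratio) * h
    # top-left corner of the expanded crop window
    new_x = c_x - new_w / 2
    new_y = c_y - new_h / 2
    return new_x, new_y, new_w, new_h
```

The returned tuple corresponds to the (new_x, new_y, new_w, new_h) representation of the new rectangular frame; in practice the window would also be clipped to the remote sensing image bounds.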
It should be noted that, in order to process only the extension image, after the extension image is located it is cut out of the remote sensing image so that it can be input into the preset semantic segmentation network model.
Step S20, carrying out preset semantic segmentation processing on the exterior image to obtain a target initial shadow image of the object to be detected;
the extension image is subjected to preset semantic segmentation processing. In the field of image processing, "semantics" refers to the content of the image, and the preset semantic segmentation processing refers to classifying the pixels of the extension image by means of preset tag features or preset encoding features; the pixel classification may specifically be performed by machine learning or by a neural network.
Specifically, the step of performing preset semantic segmentation processing on the extension image to obtain the target initial shadow image of the object to be detected includes:
Step S21, inputting the extension image into a preset semantic segmentation network model to perform preset semantic segmentation processing on the extension image so as to obtain a target initial shadow image of the object to be detected;
The preset semantic segmentation network model is a model of a predicted shadow image obtained by training a preset basic model to be trained based on preset object image data to be detected with a preset shadow tag.
In this embodiment, it should be noted that the preset semantic segmentation network model is a model for accurately predicting an initial shadow image, obtained by training a preset basic model to be trained on preset object image data carrying preset shadow tags. Because the trained model can accurately predict initial shadow images, inputting the extension image into it and performing the preset semantic segmentation processing accurately yields the target initial shadow image of the object to be detected. It should be noted that the target initial shadow image is the initial shadow image of the object currently being processed, or the latest initial shadow image of the object most recently queued for processing. Because the object to be detected may be an oil tank or a container, in this embodiment the preset semantic segmentation network model is provided with a plurality of sub-models, such as a preset oil-tank semantic segmentation network sub-model and a preset container semantic segmentation network sub-model. Therefore, after the extension image is input into the preset semantic segmentation network model, the category of the extension image is first acquired, and the extension image is then input, according to its category, into the corresponding sub-model, such as the preset oil-tank semantic segmentation network sub-model.
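The per-category routing described above can be sketched as a simple dispatch table; the category names and the `predict` interface below are illustrative assumptions, not part of the method itself:

```python
class SegmentationModel:
    """Routes an extension image to the sub-model for its category."""

    def __init__(self, submodels):
        # e.g. {"oil_tank": <sub-model>, "container": <sub-model>}
        self.submodels = submodels

    def predict(self, extension_image, category):
        if category not in self.submodels:
            raise ValueError(f"no sub-model for category: {category}")
        return self.submodels[category](extension_image)

# Stub sub-models that tag their output with the model that ran.
model = SegmentationModel({
    "oil_tank": lambda img: ("oil_tank_mask", img),
    "container": lambda img: ("container_mask", img),
})
```

A call such as `model.predict(img, "oil_tank")` therefore reaches the oil-tank sub-model only, matching the category-based dispatch described above.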
Before the step of inputting the extension image into a preset semantic segmentation network model to obtain a target initial shadow image of each object to be detected, the method comprises the following steps:
Step A1, acquiring preset object image data to be detected, and carrying out iterative training on the preset basic model to be trained based on the preset object image data to be detected so as to update the preset prediction model to be trained through iterative training;
In this embodiment, the preset semantic segmentation network model is accurately obtained as follows. First, preset object image data to be detected is acquired; this data contains both each preset object image to be detected and the preset shadow tag corresponding to each image. That is, a mask image (carrying the preset shadow tag) corresponding to the original image of the preset object to be detected is obtained first, in which shadow areas are labelled "1" and background areas are labelled "0". The preset basic model to be trained is then iteratively trained on each preset object image in this data, so that the preset prediction model to be trained is updated through the iterative training. Specifically, the result of each training iteration (the prediction probability image data) is compared with the expected result, namely the preset shadow tag in the mask image, and the preset prediction model to be trained, in particular its network weight variables, is updated accordingly.
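A toy illustration of the mask convention above (shadow pixels labelled 1, background labelled 0) and of comparing a predicted mask against it; pixel accuracy here is only a stand-in for whatever loss the training actually minimizes:

```python
def pixel_accuracy(pred_mask, gt_mask):
    """Fraction of pixels where the prediction matches the 0/1 shadow mask."""
    total = correct = 0
    for pred_row, gt_row in zip(pred_mask, gt_mask):
        for p, g in zip(pred_row, gt_row):
            total += 1
            correct += (p == g)
    return correct / total

# Ground-truth mask: shadow = 1, background = 0.
gt = [[0, 0, 1],
      [0, 1, 1]]
# A prediction with one wrong pixel (top middle).
pred = [[0, 1, 1],
        [0, 1, 1]]
```

Here `pixel_accuracy(pred, gt)` is 5/6: the difference between prediction and mask is exactly the signal used to update the network weights.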
And step A2, judging whether the preset to-be-trained prediction model after iterative training update meets a preset training completion condition, and if so, obtaining the preset semantic segmentation network model.
It is judged whether the preset prediction model to be trained, as updated by the iterative training, satisfies a preset training completion condition; if so, the preset semantic segmentation network model is obtained. The preset training completion condition may be that a preset number of iterations has been reached or that a preset loss function has converged. It should be noted that, in each training iteration, difference data is determined by comparing the iteration's result with the expected result given by the preset shadow tag in the mask image, and the preset prediction model to be trained, in particular its network weight variables, is adjusted in the direction indicated by the difference data, so that the preset semantic segmentation network model is finally obtained. It should further be noted that the preset prediction model to be trained comprises a feature extraction part and an up-sampling part: the feature extraction part contains convolution layers and pooling layers, and the up-sampling part contains deconvolution layers. The two parts may belong to the same network architecture or to different architectures; for example, both may follow the Unet architecture (similar to a plain network), or the feature extraction part may follow the Unet architecture while the up-sampling part follows the ResNet architecture (similar to a non-plain network).
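The preset training completion condition (iteration budget reached, or loss convergence) might be checked as follows; the concrete thresholds are illustrative assumptions:

```python
def training_complete(iteration, losses, max_iters=10000, eps=1e-4, window=5):
    """Preset training-completion condition.

    Stop when the iteration budget is exhausted, or when the loss has
    varied by less than `eps` over the last `window` iterations (a
    simple convergence proxy for "the loss function has converged").
    """
    if iteration >= max_iters:
        return True
    if len(losses) >= window:
        recent = losses[-window:]
        if max(recent) - min(recent) < eps:
            return True
    return False
```

The training loop would call this after each weight update and, once it returns True, freeze the model as the preset semantic segmentation network model.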
Referring to fig. 2, the step of inputting the extension image into a preset semantic segmentation network model to obtain a target initial shadow image of each object to be detected includes:
Step S211, preprocessing the exterior image to obtain a preprocessed image;
It should be noted that different extension ratios (preset multiples) yield extension images of different sizes: for instance, the current extension image may use a ratio of 1.8 while another uses 1.5. Because the preset semantic segmentation network model processes images of a determined size, brightness, and so on, in this embodiment the extension image is preprocessed after it is obtained and before it is input into the model, to improve the convenience of model processing. That is, the extension image is stretched to a preset size, or its brightness is adjusted, yielding a preprocessed image.
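A sketch of the stretching step of the preprocessing, assuming nearest-neighbor resampling on a plain 2-D list image (a real pipeline would typically use a library resize and also normalize brightness):

```python
def resize_nearest(image, out_w, out_h):
    """Stretch a 2-D image (list of rows) to (out_h, out_w) by
    nearest-neighbor sampling, so every extension image reaches the
    fixed input size the model expects."""
    in_h, in_w = len(image), len(image[0])
    return [
        [image[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]

small = [[1, 2],
         [3, 4]]
big = resize_nearest(small, 4, 4)  # each source pixel becomes a 2x2 block
```

After this step, extension images produced with different preset multiples all share one input size, so the same model weights apply to each.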
Step S212, inputting the preprocessed image into a preset semantic segmentation network model to perform preset semantic segmentation processing on the exterior image so as to obtain a target initial shadow image of the object to be detected.
After the preprocessed image is obtained, the preprocessed image is input into a preset semantic segmentation network model, so that the exterior image is subjected to preset semantic segmentation processing, and a target initial shadow image of each object to be detected is obtained.
Step S30, performing noise removal processing on the target initial shadow image to obtain the shadow area of each object to be detected.
Noise removal processing is performed on the target initial shadow image, that is, noise shadows are removed, to obtain the shadow area of each object to be detected.
The method comprises the steps of receiving a remote sensing image, acquiring an extension image of an object to be detected in the remote sensing image, performing preset semantic segmentation processing on the extension image to obtain a target initial shadow image of the object to be detected, and performing noise removal processing on the target initial shadow image to obtain the shadow area of each object to be detected. In the application, after the extension image of the object to be detected is obtained from the remote sensing image, the preset semantic segmentation processing yields the target initial shadow image; because this processing is not affected by natural factors, the extraction accuracy of the shadow area is improved, and after the noise removal processing the shadow area of each object to be detected is obtained accurately. In particular, the preset semantic segmentation network model is not affected by the illumination under which the remote sensing image was acquired, so an unstable shadow extraction effect for the object to be detected is avoided.
Further, referring to fig. 2, according to a first embodiment of the present application, in another embodiment of the present application, the step of obtaining preset object image data to be detected, iteratively training the preset basic model to be trained based on the preset object image data to be detected, to iteratively train and update the preset prediction model to be trained includes:
Step B1, acquiring preset object image data to be detected, and inputting the preset object image data to be detected into the preset basic model to be trained to obtain prediction probability image data, wherein the preset basic model to be trained comprises a layer jump connecting layer;
After the preset object image data to be detected is acquired and input into the preset basic model to be trained, prediction probability image data is obtained. In this embodiment, the preset basic model to be trained includes a layer-jump connection layer, in particular in its feature extraction part. Specifically, if the feature extraction part of the preset basic model uses the Unet network coding stage or Unet network architecture (without the layer-jump connection layer), it is changed to the ResNet network coding stage or ResNet network architecture (with the layer-jump connection layer). The function of the layer-jump connection layer is to add the information held before a preset number of convolutions to the data obtained after those convolutions, so that information discarded during the convolution of the image data is not lost. In this embodiment, the network decoding stage (the up-sampling part) still uses the Unet decoder, yielding a probability map of size (C, W, H), where C is the number of prediction categories (C = number of target categories n + background category; e.g. C = 2 when only shadow regions are predicted).
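The effect of the layer-jump connection, adding the features saved before a block of convolutions back onto the block's output, can be sketched on flat feature lists:

```python
def skip_connect(pre_features, post_features):
    """Element-wise addition of the features saved before a block of
    convolutions to the features produced after it: the layer-jump
    (residual) connection described above, reduced to flat lists."""
    if len(pre_features) != len(post_features):
        raise ValueError("skip connection requires matching shapes")
    return [a + b for a, b in zip(pre_features, post_features)]
```

Because the pre-convolution features re-enter the sum unchanged, any detail attenuated by the convolutions is still present in the combined output, which is the information-preservation argument made above.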
Step B2, obtaining preset shadow tag data of the image data of the object to be detected, and comparing the preset shadow tag data with the predictive probability image data to obtain difference data;
and step B3, iteratively training and updating the preset prediction model to be trained according to the difference data.
In this embodiment, preset shadow tag data of the object image data to be detected is acquired and compared with the prediction probability image data to obtain difference data, and the preset prediction model to be trained is iteratively trained and updated according to the difference data. Before the difference data is obtained, the two must be in the same form: if the prediction probability image data is in onehot (one-hot encoded) label-map form while the preset shadow tag data is in mask label-map form, the onehot form must be converted into the mask form, or the mask form into the onehot form. The conversion can be achieved by splitting or merging pixel probabilities; for example, if pixel A is a shadow, the probability form represents it as {0.7 (shadow probability), 0.3 (background probability)}, while the onehot form represents it directly as {1 (shadow)}. After the forms are unified, the data are compared to obtain the difference data, and the preset prediction model to be trained is iteratively trained and updated accordingly.
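Unifying the two label forms amounts to collapsing per-pixel class probabilities into a single 0/1 mask; a sketch, assuming each pixel carries a (shadow, background) probability pair:

```python
def probs_to_mask(prob_map):
    """Collapse per-pixel {shadow, background} probabilities into a
    0/1 mask by taking the more probable class (shadow = 1)."""
    return [
        [1 if shadow >= background else 0
         for shadow, background in row]
        for row in prob_map
    ]

probs = [[(0.7, 0.3), (0.2, 0.8)],
         [(0.9, 0.1), (0.4, 0.6)]]
```

`probs_to_mask(probs)` puts the prediction into the same mask form as the shadow tag, after which a per-pixel comparison yields the difference data.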
In this embodiment, the preset object image data to be detected is acquired and input into the preset basic model to be trained to obtain prediction probability image data, where the preset basic model to be trained includes a layer-jump connection layer; the preset shadow tag data of the object image data is acquired and compared with the prediction probability image data to obtain difference data; and the preset prediction model to be trained is iteratively trained and updated according to the difference data. Because the preset basic model to be trained includes the layer-jump connection layer, the preset semantic segmentation network model obtained by training avoids the inaccuracy that information loss during the training process would cause, and training accuracy is improved.
Further, according to the first embodiment and the second embodiment of the present application, the step of performing noise removal processing on the target initial shadow image to obtain a shadow area of each object to be detected includes:
Step D1, determining an external extension rectangular frame of each object to be detected according to the external extension image of each object to be detected;
that is, the extension rectangular frame of each object to be detected is determined according to its extension image, where the extension rectangular frame refers to the boundary image or frame image of the extension image.
Step D2, determining an intersecting image intersecting with the external-expansion rectangular frame in the initial shadow image, and removing the intersecting image to obtain a first processed image;
As shown in fig. 4, an intersecting image is determined in the initial shadow image: specifically, a shadow image that intersects the extension rectangular frame (the outermost rectangular frame in fig. 5) but does not intersect the external frame image. Removal processing is applied to the intersecting image to obtain the first processed image. It should be noted that the first processed image may still contain other noise shadow areas that do not intersect the extension rectangular frame.
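The removal of shadows that intersect the extension rectangular frame might look as follows, assuming shadows and the frame are reduced to (x, y, w, h) boxes (an assumed simplification of the contour-level test):

```python
def remove_frame_touching(shadows, frame):
    """Drop shadow components whose bounding box touches or crosses
    the extension rectangular frame; such shadows belong to other
    objects spilling in from outside the extension image."""
    fx, fy, fw, fh = frame
    kept = []
    for (x, y, w, h) in shadows:
        touches = (x <= fx or y <= fy or
                   x + w >= fx + fw or y + h >= fy + fh)
        if not touches:
            kept.append((x, y, w, h))
    return kept

frame = (0, 0, 100, 100)
shadows = [(10, 10, 20, 20),   # interior shadow, kept
           (90, 40, 15, 10)]   # crosses the right edge, removed
```

Only shadows wholly inside the frame survive; the survivors form the first processed image, which later steps denoise further.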
And D3, acquiring a first preset shadow area threshold value, and extracting an image with an area larger than the first preset shadow area threshold value from the first processed image to obtain the shadow area of each object to be detected.
A first preset shadow area threshold is acquired, and images whose area is larger than the first preset shadow area threshold are extracted from the first processed image to obtain the shadow area of each object to be detected. That is, in this embodiment, small noise shadows are removed from the image by a preset morphological open-close operation: the contours of all shadows in the image are determined, small speckle shadows or small contour shadows whose area is smaller than a preset specified threshold (t3) are removed, and the shadow area of each object to be detected is finally obtained.
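The area-threshold step then reduces to filtering the remaining shadow components by contour area; a sketch with (label, area) pairs standing in for contours:

```python
def filter_by_area(shadows, min_area):
    """Keep only shadow components whose contour area exceeds the
    preset shadow area threshold; the (label, area) pairs are an
    assumed simplification of real contour objects."""
    return [s for s in shadows if s[1] > min_area]

shadows = [("blob_a", 500), ("speck", 12), ("blob_b", 230)]
```

With a threshold t3 of, say, 50, `filter_by_area(shadows, 50)` drops the 12-pixel speckle and keeps the two genuine shadow blobs.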
The step of obtaining a first preset shadow area threshold value, extracting an image with an area larger than the first preset shadow area threshold value from the first processed image, and obtaining the shadow area of the object to be detected of each object to be detected comprises the following steps:
e1, acquiring a first preset shadow area threshold value, and extracting an image with an area larger than the first preset shadow area threshold value from the first processed image to obtain a second processed image;
And E2, acquiring a second preset shadow area threshold, extracting an image with an area larger than the second preset shadow area threshold from the second processed image to obtain a third processed image, and extracting an image intersected with an external frame of the external frame image from the third processed image to obtain the shadow area of each object to be detected.
It should be noted that the first processed image may still contain other noise shadow areas that do not intersect the extension image. Therefore, after the first processed image is obtained, a first preset shadow area threshold is acquired, and images whose area is larger than the first preset shadow area threshold are extracted from the first processed image to obtain a second processed image. Specifically, this extraction is performed by a preset morphological open-close operation (used only for small speckles), which removes small noise shadows or speckles (those smaller than the first preset shadow area threshold) from the image; that is, according to the areas of all shadow contours in the target shadow image, speckles (shadow contour areas) smaller than the preset specified threshold, namely the first preset shadow area threshold (t3), are removed, yielding the second processed image. The first denoising pass removes most of the noise, but some larger noise remains. Therefore, after the second processed image is obtained, the area of each individual shadow contour is counted, and the shadow contours whose area is larger than a second specified threshold (t4), namely the second preset shadow area threshold, are extracted. It should be noted that removing the noise shadows in several passes improves the efficiency of noise removal (it avoids the stalls, or repeated morphological open-close operations, that a single pass would cause), yielding the third processed image. An embedded coordinate frame corresponding to the external coordinate frame (the small rectangular frame in fig. 5) is then determined, the shadow images in the third processed image that intersect the embedded coordinate frame are identified, and those shadow images are set as the absolute shadow image of the object to be detected. That is, in the present embodiment, it is emphasized that the absolute shadow image intersects the embedded coordinate frame.
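The final intersection test against the embedded coordinate frame can be sketched with an axis-aligned overlap check; the (x, y, w, h) box representation is an illustrative assumption:

```python
def intersects(box_a, box_b):
    """Axis-aligned overlap test between two (x, y, w, h) boxes; used
    here to keep only shadows touching the embedded coordinate frame."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

embedded = (40, 40, 20, 20)
shadows = [(55, 30, 10, 40),  # overlaps the embedded frame, kept
           (0, 0, 10, 10)]    # far corner, dropped
absolute = [s for s in shadows if intersects(s, embedded)]
```

The surviving components are exactly those adjoining the object itself, which is the requirement that the absolute shadow image intersect the embedded coordinate frame.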
In this embodiment, the extension rectangular frame of each object to be detected is determined according to its extension image; the intersecting images that intersect the extension rectangular frame in the initial shadow image are determined and removed to obtain the first processed image; and a first preset shadow area threshold is acquired and images whose area is larger than the first preset shadow area threshold are extracted from the first processed image to obtain the shadow area of each object to be detected. In this way, the noise shadow is removed, and the shadow area of the object to be detected is accurately obtained.
Referring to fig. 3, fig. 3 is a schematic device structure diagram of a hardware running environment according to an embodiment of the present application.
As shown in fig. 3, the apparatus for extracting shadow area of an object to be detected may include a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. Wherein a communication bus 1002 is used to enable connected communication between the processor 1001 and a memory 1005. The memory 1005 may be a high-speed RAM memory or a stable memory (non-volatile memory), such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
Optionally, the apparatus for extracting shadow area of the object to be detected may further include a rectangular user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and so on. The rectangular user interface may include a Display screen (Display), an input sub-module such as a Keyboard (Keyboard), and the optional rectangular user interface may also include a standard wired interface, a wireless interface. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface).
It will be appreciated by those skilled in the art that the structure of the object shadow area extraction apparatus to be detected shown in fig. 3 does not constitute a limitation of the object shadow area extraction apparatus to be detected, and may include more or less components than those illustrated, or may combine some components, or may be a different arrangement of components.
As shown in fig. 3, a memory 1005, which is a computer storage medium, may include an operating system, a network communication module, and a shadow area extraction program for an object to be detected. The operating system is a program for managing and controlling hardware and software resources of the object shadow area extraction device to be detected, and supports the operation of the object shadow area extraction program to be detected and other software and/or programs. The network communication module is used for realizing communication among components in the memory 1005 and communication with other hardware and software in the shadow area extraction system of the object to be detected.
In the apparatus for extracting a shadow area of an object to be detected shown in fig. 3, the processor 1001 is configured to execute a program for extracting a shadow area of an object to be detected stored in the memory 1005, to implement the steps of the method for extracting a shadow area of an object to be detected described above.
The specific implementation of the device for extracting the shadow area of the object to be detected is basically the same as the above embodiments of the method for extracting the shadow area of the object to be detected, and will not be described herein again.
The application also provides an object shadow area extraction device to be detected, which is a virtual device and is applied to first equipment or object shadow area extraction equipment to be detected, and the object shadow area extraction device to be detected comprises:
the receiving module is used for receiving the remote sensing image and acquiring an exterior image of an object to be detected in the remote sensing image;
The first acquisition module is used for carrying out preset semantic segmentation processing on the extension image to obtain a target initial shadow image of the object to be detected;
And the noise removing module is used for carrying out noise removing processing on the target initial shadow image to obtain the shadow area of the object to be detected of each object to be detected.
Optionally, the first acquisition module includes:
The semantic segmentation unit is used for inputting the extension image into a preset semantic segmentation network model to perform preset semantic segmentation processing on the extension image, so as to obtain a target initial shadow image of the object to be detected;
The preset semantic segmentation network model is a model of a predicted shadow image obtained by training a preset basic model to be trained based on preset object image data to be detected with a preset shadow tag.
Optionally, the device for extracting the shadow area of the object to be detected includes:
the second acquisition module is used for acquiring preset object image data to be detected, and carrying out iterative training on the preset basic model to be trained based on the preset object image data to be detected so as to update the preset prediction model to be trained through iterative training;
The judging module is used for judging whether the preset to-be-trained prediction model after iterative training updating meets preset training completion conditions, and if so, obtaining the preset semantic segmentation network model.
Optionally, the second obtaining module includes:
The first acquisition unit is used for acquiring preset object image data to be detected, inputting the preset object image data to be detected into the preset basic model to be trained to obtain prediction probability image data, wherein the preset basic model to be trained comprises a layer jump connecting layer;
the second acquisition unit is used for acquiring preset shadow tag data of the image data of the object to be detected, and comparing the preset shadow tag data with the predictive probability image data to obtain difference data;
and the training unit is used for iteratively training and updating the preset prediction model to be trained according to the difference data.
Optionally, the receiving module includes:
the receiving unit is used for receiving the remote sensing image, and acquiring an external rectangular frame image of an object to be detected in the remote sensing image so as to acquire the position information of the external rectangular frame image;
And the extension unit is used for cutting, from the remote sensing image and according to the position information of the external rectangular frame image, an image which takes the external rectangular frame image as a center, comprises the external rectangular frame image and has a size which is a preset multiple of the external rectangular frame image, and setting that image as the extension image.
Optionally, the noise removal module includes:
The first determining unit is used for determining an extension rectangular frame of each object to be detected according to the extension image of each object to be detected;
The second determining unit is used for determining an intersecting image intersecting the extension rectangular frame in the initial shadow image, and removing the intersecting image to obtain a first processed image;
And the third acquisition unit is used for acquiring a first preset shadow area threshold value, and extracting an image with an area larger than the first preset shadow area threshold value from the first processed image to obtain the shadow area of each object to be detected.
Optionally, the third obtaining unit includes:
a first obtaining subunit, configured to obtain a first preset shadow area threshold, extract an image with an area greater than the first preset shadow area threshold from the first processed image, and obtain a second processed image;
the second acquisition subunit is used for acquiring a second preset shadow area threshold value, extracting an image with an area larger than the second preset shadow area threshold value from the second processed image to obtain a third processed image, and extracting an image intersected with an external frame of the external frame image from the third processed image to obtain the shadow area of each object to be detected.
Optionally, the first acquisition module includes:
the preprocessing unit is used for preprocessing the extension image to obtain a preprocessed image;
The input unit is used for inputting the preprocessed image into a preset semantic segmentation network model so as to perform preset semantic segmentation processing on the exterior image, so as to obtain a target initial shadow image of the object to be detected.
The specific implementation of the device for extracting the shadow area of the object to be detected is basically the same as the above embodiments of the method for extracting the shadow area of the object to be detected, and will not be repeated here.
The embodiment of the application provides a medium, and one or more programs are stored in the medium, and the one or more programs can be further executed by one or more processors to implement the steps of the method for extracting the shadow area of the object to be detected.
The specific embodiment of the medium of the present application is substantially the same as the above embodiments of the method for extracting the shadow area of the object to be detected, and will not be described herein.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the application, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein, or any application, directly or indirectly, within the scope of the application.

Claims (9)

1. The method for extracting the shadow area of the object to be detected is characterized by comprising the following steps of:
Receiving a remote sensing image, and acquiring an exterior image of an object to be detected in the remote sensing image;
Performing preset semantic segmentation processing on the exterior image to obtain a target initial shadow image of the object to be detected;
determining an exterior rectangular frame of each object to be detected according to the exterior image of each object to be detected;
determining an intersecting image intersecting with the extension rectangular frame in the initial shadow image, and removing the intersecting image to obtain a first processed image;
Acquiring a first preset shadow area threshold value, and extracting an image with an area larger than the first preset shadow area threshold value from the first processed image to obtain a second processed image;
Obtaining a second preset shadow area threshold value, extracting an image with an area larger than the second preset shadow area threshold value from the second processed image to obtain a third processed image, and extracting an image intersected with an external frame of an external frame image from the third processed image to obtain the shadow area of each object to be detected, wherein the external frame image is an image in an embedded coordinate frame corresponding to the external image.
2. The method for extracting a shadow area of an object to be detected according to claim 1, wherein the step of performing preset semantic segmentation processing on the expanded image to obtain a target initial shadow image of the object to be detected comprises:
inputting the expanded image into a preset semantic segmentation network model to perform preset semantic segmentation processing on the expanded image, so as to obtain the target initial shadow image of the object to be detected;
wherein the preset semantic segmentation network model is a model for predicting shadow images, obtained by training a preset basic model to be trained based on preset image data of objects to be detected carrying preset shadow labels.
3. The method for extracting a shadow area of an object to be detected according to claim 2, wherein before the step of inputting the expanded image into the preset semantic segmentation network model to obtain the target initial shadow image of each object to be detected, the method comprises:
acquiring the preset image data of objects to be detected, and iteratively training the preset basic model to be trained based on the preset image data of objects to be detected, so as to update the preset basic model to be trained through iterative training;
judging whether the iteratively trained and updated preset basic model to be trained meets a preset training completion condition, and if so, obtaining the preset semantic segmentation network model.
4. The method for extracting a shadow area of an object to be detected according to claim 3, wherein the step of acquiring the preset image data of objects to be detected, iteratively training the preset basic model to be trained based on the preset image data of objects to be detected, and updating the preset basic model to be trained through iterative training comprises:
acquiring the preset image data of objects to be detected, and inputting the preset image data of objects to be detected into the preset basic model to be trained to obtain predicted probability image data, wherein the preset basic model to be trained comprises a skip-connection layer;
acquiring preset shadow label data of the preset image data of objects to be detected, and comparing the preset shadow label data with the predicted probability image data to obtain difference data;
iteratively training and updating the preset basic model to be trained according to the difference data.
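Illustrative only and not part of the claims: the patent does not fix a particular loss, but the "difference data" of claim 4 — comparing the predicted probability image against the preset binary shadow label — could, for example, be a pixel-wise binary cross-entropy. A minimal sketch, with the function name and the epsilon clamp being assumptions:

```python
import math

def bce_difference(pred, label, eps=1e-7):
    """Mean pixel-wise binary cross-entropy between a predicted
    probability image (values in [0, 1]) and a binary shadow label,
    standing in for the 'difference data' of claim 4."""
    total = 0.0
    n = 0
    for p_row, l_row in zip(pred, label):
        for p, l in zip(p_row, l_row):
            p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
            total += -(l * math.log(p) + (1 - l) * math.log(1 - p))
            n += 1
    return total / n
```

The iterative update of the basic model would then be a gradient step on this quantity, repeated until the preset training completion condition of claim 3 is met.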
5. The method for extracting a shadow area of an object to be detected according to any one of claims 1 to 4, wherein the step of receiving the remote sensing image and acquiring the expanded image of the object to be detected in the remote sensing image comprises:
receiving the remote sensing image, and acquiring a circumscribed rectangular frame image of the object to be detected in the remote sensing image, so as to acquire position information of the circumscribed rectangular frame image;
cropping, from the remote sensing image according to the position information and centered on the circumscribed rectangular frame image, an image that contains the circumscribed rectangular frame image and is a preset multiple of its size, and setting the cropped image as the expanded image.
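Illustrative only and not part of the claims: the cropping step of claim 5 — a window that is a preset multiple of the circumscribed rectangle's size, centered on it and clipped to the image bounds — could be sketched as below. The `(x0, y0, x1, y1)` pixel-coordinate convention and the function name are assumptions.

```python
def expanded_crop_box(rect, scale, img_h, img_w):
    """rect = (x0, y0, x1, y1) of the circumscribed rectangular frame.
    Returns a crop window 'scale' times its size, centered on the frame
    and clipped to the remote sensing image bounds."""
    x0, y0, x1, y1 = rect
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2        # frame center
    w, h = (x1 - x0) * scale, (y1 - y0) * scale  # expanded size
    nx0 = max(0, int(round(cx - w / 2)))
    ny0 = max(0, int(round(cy - h / 2)))
    nx1 = min(img_w, int(round(cx + w / 2)))
    ny1 = min(img_h, int(round(cy + h / 2)))
    return nx0, ny0, nx1, ny1
```

For a frame (10, 10, 20, 20) with scale 2 in a 100×100 image, this yields the window (5, 5, 25, 25); the same call clips the window when the frame sits near an image border.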
6. The method for extracting a shadow area of an object to be detected according to claim 2, wherein the step of inputting the expanded image into the preset semantic segmentation network model to perform preset semantic segmentation processing on the expanded image to obtain the target initial shadow image of the object to be detected comprises:
preprocessing the expanded image to obtain a preprocessed image;
inputting the preprocessed image into the preset semantic segmentation network model to perform preset semantic segmentation processing on the expanded image, so as to obtain the target initial shadow image of the object to be detected.
7. A device for extracting a shadow area of an object to be detected, characterized by comprising:
a receiving module, configured to receive a remote sensing image and acquire an expanded image of an object to be detected in the remote sensing image;
a first acquisition module, configured to perform preset semantic segmentation processing on the expanded image to obtain a target initial shadow image of the object to be detected;
a noise removal module, the noise removal module comprising:
a first determining unit, configured to determine an expanded rectangular frame of each object to be detected according to the expanded image of each object to be detected;
a second determining unit, configured to determine an intersecting image that intersects the expanded rectangular frame in the initial shadow image, and remove the intersecting image to obtain a first processed image;
a third acquisition unit, the third acquisition unit comprising:
a first acquisition subunit, configured to acquire a first preset shadow area threshold, and extract an image whose area is larger than the first preset shadow area threshold from the first processed image to obtain a second processed image;
a second acquisition subunit, configured to acquire a second preset shadow area threshold, extract an image whose area is larger than the second preset shadow area threshold from the second processed image to obtain a third processed image, and extract an image that intersects the outer frame of the outer-frame image from the third processed image to obtain the shadow area of each object to be detected, wherein the outer-frame image is the image within the embedded coordinate frame corresponding to the expanded image.
8. A device for extracting a shadow area of an object to be detected, characterized by comprising a memory and a processor, wherein:
the memory is configured to store a program implementing the method for extracting a shadow area of an object to be detected; and
the processor is configured to execute the program to implement the steps of the method for extracting a shadow area of an object to be detected according to any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that a program implementing the method for extracting a shadow area of an object to be detected is stored thereon, the program being executed by a processor to implement the steps of the method for extracting a shadow area of an object to be detected according to any one of claims 1 to 6.
CN202010262682.9A 2020-04-03 2020-04-03 Method, device, equipment and medium for extracting shadow area of object to be detected Active CN111462220B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010262682.9A CN111462220B (en) 2020-04-03 2020-04-03 Method, device, equipment and medium for extracting shadow area of object to be detected

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010262682.9A CN111462220B (en) 2020-04-03 2020-04-03 Method, device, equipment and medium for extracting shadow area of object to be detected

Publications (2)

Publication Number Publication Date
CN111462220A CN111462220A (en) 2020-07-28
CN111462220B true CN111462220B (en) 2025-01-24

Family

ID=71681630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010262682.9A Active CN111462220B (en) 2020-04-03 2020-04-03 Method, device, equipment and medium for extracting shadow area of object to be detected

Country Status (1)

Country Link
CN (1) CN111462220B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112837310A (en) * 2021-03-02 2021-05-25 四川兆纪光电科技有限公司 Detection method and system for a backlight substrate
CN114972786A (en) * 2022-05-20 2022-08-30 深圳大学 Shadow positioning method, device, medium and terminal

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103577825A (en) * 2012-07-25 2014-02-12 中国科学院声学研究所 Automatic target identification method and system for synthetic aperture sonar image
CN106408529A (en) * 2016-08-31 2017-02-15 浙江宇视科技有限公司 Shadow removal method and apparatus

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
CN102855627B (en) * 2012-08-09 2015-05-13 武汉大学 City remote sensing image shadow detection method based on spectral characteristic and topological relation
JP6178280B2 (en) * 2014-04-24 2017-08-09 日立建機株式会社 Work machine ambient monitoring device
CN104637073B (en) * 2014-12-30 2017-09-15 华中科技大学 It is a kind of based on the banding underground structure detection method for shining upon shadow compensation
US9760801B2 (en) * 2015-05-12 2017-09-12 Lawrence Livermore National Security, Llc Identification of uncommon objects in containers
CN106447721B (en) * 2016-09-12 2021-08-10 北京旷视科技有限公司 Image shadow detection method and device
CN106886801B (en) * 2017-04-14 2021-12-17 北京图森智途科技有限公司 Image semantic segmentation method and device
KR101994112B1 (en) * 2017-12-05 2019-06-28 한국항공대학교산학협력단 Apparatus and method for compose panoramic image based on image segment
CN109993749B (en) * 2017-12-29 2024-08-20 北京京东尚科信息技术有限公司 Method and device for extracting target image
CN109064449B (en) * 2018-07-04 2021-01-05 中铁大桥科学研究院有限公司 Method for detecting bridge surface diseases
CN109446992B (en) * 2018-10-30 2022-06-17 苏州中科天启遥感科技有限公司 Remote sensing image building extraction method and system based on deep learning, storage medium and electronic equipment
CN110197505B (en) * 2019-05-30 2022-12-02 西安电子科技大学 Binocular Stereo Matching Method for Remote Sensing Images Based on Deep Network and Semantic Information

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN103577825A (en) * 2012-07-25 2014-02-12 中国科学院声学研究所 Automatic target identification method and system for synthetic aperture sonar image
CN106408529A (en) * 2016-08-31 2017-02-15 浙江宇视科技有限公司 Shadow removal method and apparatus

Also Published As

Publication number Publication date
CN111462220A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
CN108596184B (en) Training method of image semantic segmentation model, readable storage medium and electronic device
CN110163080B (en) Face key point detection method and device, storage medium and electronic equipment
CN108304775B (en) Remote sensing image recognition method and device, storage medium and electronic equipment
WO2018108129A1 (en) Method and apparatus for use in identifying object type, and electronic device
US20190171866A1 (en) Apparatus and method for data processing
CN111292337A (en) Image background replacing method, device, equipment and storage medium
CN111462222B (en) Method, device, equipment and medium for determining reserves of objects to be detected
CN111462098B (en) Method, device, equipment and medium for detecting overlapping of shadow areas of objects to be detected
CN115018805B (en) Segmentation model training method, image segmentation method, device, equipment and medium
CN111462220B (en) Method, device, equipment and medium for extracting shadow area of object to be detected
CN113034514A (en) Sky region segmentation method and device, computer equipment and storage medium
CN113537037A (en) Pavement disease identification method, system, electronic device and storage medium
CN112966687B (en) Image segmentation model training method and device and communication equipment
CN113012189A (en) Image recognition method and device, computer equipment and storage medium
CN117253110A (en) Diffusion model-based target detection model generalization capability improving method
CN114926849A (en) Text detection method, device, equipment and storage medium
CN111383191B (en) Image processing method and device for vascular fracture repair
CN111462221A (en) Method, device, device and storage medium for extracting shadow area of object to be detected
WO2018053710A1 (en) Morphological processing method of digital image and digital image processing device
CN112906819B (en) Image recognition method, device, equipment and storage medium
CN117557777A (en) Sample image determining method and device, electronic equipment and storage medium
CN114092709B (en) Method, device, equipment and storage medium for identifying target contour in image
CN117649358B (en) Image processing method, device, equipment and storage medium
CN114821071B (en) A method, device and equipment for extracting adhesion bubbles from dynamic ice images
CN119399238A (en) Image processing method and device, storage medium, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant