
CN113160386B - Image acquisition method, apparatus, device and computer readable storage medium - Google Patents


Info

Publication number
CN113160386B
Authority
CN
China
Prior art keywords: image, preset, obtaining, target, underwater
Legal status
Active
Application number
CN202110376075.XA
Other languages
Chinese (zh)
Other versions
CN113160386A (en)
Inventor
毕胜
丁亚慧
李胜全
付先平
Current Assignee
Dalian Maritime University
Peng Cheng Laboratory
Original Assignee
Dalian Maritime University
Peng Cheng Laboratory
Application filed by Dalian Maritime University, Peng Cheng Laboratory
Priority to CN202110376075.XA
Publication of CN113160386A
Application granted
Publication of CN113160386B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30 Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image acquisition method, which comprises the following steps: acquiring an original atmospheric image of a target area; processing the original atmospheric image by using a three-dimensional modeling technology to obtain a three-dimensional model of the target area; obtaining an initial image based on preset distance information, preset angle information and the three-dimensional model; obtaining a transmittance based on the preset distance information and a preset attenuation coefficient; and obtaining a target underwater image of the target area based on the transmittance and the initial image. The invention also discloses an image acquisition device, a terminal device and a computer readable storage medium. The image obtaining method achieves the technical effect of improving the effectiveness of the target underwater image.

Description

Image acquisition method, apparatus, device and computer readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image obtaining method, apparatus, device, and computer readable storage medium.
Background
An underwater image is an image of an underwater environment captured by an imaging device. It mainly comprises the light reflected by objects in the water into the imaging device, the background light scattered into the imaging device at small angles by the underwater environment, and the effects of impurities such as suspended matter in the water.
At present, deep learning technology is increasingly applied in the field of underwater image analysis. Underwater image analysis methods based on deep learning require a large number of underwater images as training data; at the same time, compared with acquiring land images, acquiring underwater images with imaging equipment is considerably harder, so obtaining a large number of underwater images is difficult.
In the related art, an image obtaining method is provided in which an imaging device collects actual underwater images and sample expansion is performed on them to obtain a large number of underwater images usable as training samples. Typically, an actually acquired underwater image is subjected to operations such as rotation, geometric transformation, scaling, blurring, noise addition or warping to obtain new underwater images, which are then used as training images.
However, the training images obtained with this existing image obtaining method have poor effectiveness as training samples.
Disclosure of Invention
The invention mainly aims to provide an image acquisition method, an image acquisition apparatus, an image acquisition device and a computer readable storage medium, so as to solve the technical problem in the prior art that the expanded training images obtained with the existing image obtaining method have poor effectiveness as training samples.
To achieve the above object, the present invention provides an image obtaining method comprising the steps of:
acquiring an original atmospheric image of a target area;
processing the original atmospheric image by using a three-dimensional modeling technology to obtain a three-dimensional model of the target area;
obtaining an initial image based on preset distance information, preset angle information and the three-dimensional model;
obtaining a transmittance based on the preset distance information and a preset attenuation coefficient;
and obtaining a target underwater image of the target area based on the transmittance and the initial image.
Optionally, before the step of obtaining the target underwater image of the target area based on the transmittance and the initial image, the method further includes:
Acquiring depth information of pixel points in the initial image;
The step of obtaining a target underwater image of the target area based on the transmittance and the initial image includes:
The target underwater image is obtained based on the transmittance, the initial image, and the depth information.
Optionally, before the step of obtaining the target underwater image based on the transmittance, the initial image and the depth information, the method further includes:
Acquiring a preset motion blur operator;
the step of obtaining the target underwater image based on the transmittance, the initial image, and the depth information includes:
And obtaining the target underwater image based on the transmissivity, the initial image, the depth information and the preset motion blur operator.
Optionally, before the step of obtaining the target underwater image based on the transmittance, the initial image, the depth information and the preset motion blur operator, the method further includes:
acquiring a preset real underwater image set;
Acquiring an average pixel value of a selected area in each real underwater image in the preset real underwater image set;
Obtaining a result pixel value corresponding to the preset real underwater image set based on the average pixel value of the selected area in each real underwater image;
Normalizing the result pixel value to obtain a background light coefficient;
The step of obtaining the target underwater image based on the transmissivity, the initial image, the depth information and the preset motion blur operator comprises the following steps:
and obtaining the target underwater image based on the transmissivity, the initial image, the depth information, the preset motion blur operator and the background light coefficient.
Optionally, the step of obtaining the average pixel value of the selected area in each real underwater image in the preset real underwater image set includes:
Dividing each real underwater image by using a hierarchical search technology of quadtree subdivision to obtain four rectangular areas corresponding to each real underwater image;
obtaining standard deviation and average pixel values of the four rectangular areas;
determining a selected rectangular area with the largest difference between the standard deviation of the pixel values and the average pixel value in the four rectangular areas;
dividing the selected rectangular area by using a hierarchical search technology of quadtree subdivision to update the four rectangular areas, and returning to the step of acquiring the standard deviation and the average pixel value of each pixel value of the four rectangular areas until the size of the selected rectangular area meets a preset condition, and determining the selected rectangular area meeting the preset condition as the selected area;
An average pixel value for the selected region is calculated.
Optionally, the step of obtaining the transmittance based on the preset distance information and the preset attenuation coefficient includes:
Based on the preset distance information and the preset attenuation coefficient, obtaining the transmissivity by using a formula I;
The first formula is:
t_c(x) = exp(-β_c · d_ph(x))
Wherein t_c(x) is the transmittance corresponding to the x-th pixel of the initial image, β_c is the preset attenuation coefficient, d_ph(x) is the preset distance information corresponding to the x-th pixel, c is a color channel, and c ∈ {r, g, b}.
Optionally, the step of obtaining the target underwater image based on the transmittance, the initial image, the depth information, the preset motion blur operator, and the background light coefficient includes:
Obtaining the target underwater image by using a formula II based on the transmissivity, the initial image, the depth information, the preset motion blur operator and the background light coefficient;
The formula II is as follows:
I_c(x) = [J_c(x) · exp(-β_c · D(x)) · t_c(x)] * F + a_c · (1 - t_c(x))
Wherein I_c(x) is the pixel value corresponding to the x-th pixel in the target underwater image, J_c(x) is the pixel value corresponding to the x-th pixel in the initial image, D(x) is the depth information corresponding to the x-th pixel, F is the motion blur operator, * is the convolution operation, and a_c is the background light coefficient.
In addition, in order to achieve the above object, the present invention also proposes an image obtaining apparatus, comprising:
The acquisition module is used for acquiring an original atmospheric image of the target area;
The modeling module is used for processing the original atmospheric image by utilizing a three-dimensional modeling technology so as to obtain a three-dimensional model of the target area;
the first obtaining module is used for obtaining an initial image based on preset distance information, preset angle information and the three-dimensional model;
the second obtaining module is used for obtaining the transmissivity based on the preset distance information and the preset attenuation coefficient;
and a third obtaining module, configured to obtain a target underwater image of the target area based on the transmittance and the initial image.
In addition, to achieve the above object, the present invention also proposes a terminal device, comprising: a memory, a processor, and an image acquisition program stored on the memory and runnable on the processor; when the image acquisition program is executed by the processor, the steps of the image acquisition method described above are implemented.
Furthermore, to achieve the above object, the present invention also proposes a computer-readable storage medium having stored thereon an image obtaining program which, when executed by a processor, implements the steps of the image obtaining method according to any one of the above.
The technical scheme of the invention provides an image acquisition method, which comprises: acquiring an original atmospheric image of a target area; processing the original atmospheric image by using a three-dimensional modeling technology to obtain a three-dimensional model of the target area; obtaining an initial image based on preset distance information, preset angle information and the three-dimensional model; obtaining a transmittance based on the preset distance information and a preset attenuation coefficient; and obtaining a target underwater image of the target area based on the transmittance and the initial image.
The existing image obtaining method performs rotation, geometric transformation, scaling, blurring, noise addition, warping and other operations on an actually collected underwater image to obtain training images usable as training samples. Those training images do not take into account the influence of the light transmittance of the underwater environment, so they cannot accurately reflect the real information of the corresponding underwater area, and their effectiveness is poor. By contrast, the method of the invention derives the transmittance from the preset distance information and the preset attenuation coefficient and uses it to generate the target underwater image, so the simulated image reflects the optical properties of the underwater environment. The image obtaining method therefore achieves the technical effect of improving the effectiveness of the target underwater image.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a terminal device structure of a hardware running environment according to an embodiment of the present invention;
FIG. 2 is a flowchart of a first embodiment of an image acquisition method according to the present invention;
FIG. 3 is a schematic view of an original target underwater image and a target underwater image with depth information added;
FIG. 4 is a schematic diagram of an original target underwater image and a target underwater image subjected to motion blur processing;
FIG. 5 is a schematic view of an original target underwater image and a target underwater image with a background light coefficient changed;
FIG. 6 is a schematic view of an actual underwater image and a target underwater image of the present invention;
fig. 7 is a block diagram showing the structure of a first embodiment of the image obtaining apparatus of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic diagram of a terminal device structure of a hardware running environment according to an embodiment of the present invention.
The terminal device may be a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer (PAD) or other user equipment (UE), a handheld device, a vehicle-mounted device, a wearable device, a computing device or another processing device connected to a wireless modem, a mobile station (MS), and the like. The terminal device may also be referred to as a user terminal, a portable terminal, a desktop terminal, etc.
In general, a terminal device includes: at least one processor 301, a memory 302 and an image acquisition program stored on said memory and executable on said processor, said image acquisition program being configured to implement the steps of the image acquisition method as described above.
Processor 301 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 301 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) or PLA (Programmable Logic Array). Processor 301 may also include a main processor and a coprocessor; the main processor, also referred to as a CPU (Central Processing Unit), is a processor for processing data in the awake state, while the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 301 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content to be displayed on the display screen. The processor 301 may also include an AI (Artificial Intelligence) processor for processing operations related to the image acquisition method, so that the image acquisition model may be trained and learn autonomously, improving efficiency and accuracy.
Memory 302 may include one or more computer-readable storage media, which may be non-transitory. Memory 302 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 302 is used to store at least one instruction, the instruction being executed by processor 301 to implement the image acquisition method provided by the method embodiments of the present application.
In some embodiments, the terminal may further optionally include: a communication interface 303, and at least one peripheral device. The processor 301, the memory 302 and the communication interface 303 may be connected by a bus or signal lines. The respective peripheral devices may be connected to the communication interface 303 through a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 304, a display screen 305, and a power supply 306.
The communication interface 303 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 301 and the memory 302. In some embodiments, the processor 301, the memory 302 and the communication interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302 and the communication interface 303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 304 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuitry 304 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 304 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 304 includes: antenna systems, RF transceivers, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and so forth. The radio frequency circuitry 304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 304 may further include NFC (Near Field Communication) related circuits, which is not limited by the present application.
The display screen 305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 305 is a touch screen, the display 305 also has the ability to collect touch signals at or above its surface. The touch signal may be input to the processor 301 as a control signal for processing. At this point, the display 305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 305, provided on the front panel of the electronic device; in other embodiments, there may be at least two display screens 305, respectively disposed on different surfaces of the electronic device or in a folded design; in still other embodiments, the display 305 may be a flexible display disposed on a curved surface or a folded surface of the electronic device. The display screen 305 may even be arranged in an irregular, non-rectangular pattern, i.e., a shaped screen. The display 305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode) or other materials.
The power supply 306 is used to power the various components in the electronic device. The power supply 306 may use alternating current, direct current, disposable batteries or rechargeable batteries. When the power supply 306 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 does not constitute a limitation of the terminal device, which may include more or fewer components than illustrated, combine certain components, or adopt a different arrangement of components.
Furthermore, an embodiment of the present application also proposes a computer-readable storage medium having stored thereon an image obtaining program which, when executed by a processor, implements the steps of the image obtaining method as described above, so a detailed description is not repeated here. The description of the beneficial effects, which are the same as those of the method, is likewise omitted. For technical details not disclosed in the embodiments of the computer-readable storage medium of the present application, please refer to the description of the method embodiments of the present application. As an example, the program instructions may be deployed to be executed on one terminal device, on multiple terminal devices located at one site, or on multiple terminal devices distributed across multiple sites and interconnected by a communication network.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a computer-readable storage medium; when executed, the program may carry out the steps of the method embodiments described above. The computer readable storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Based on the above hardware structure, an embodiment of the image obtaining method of the present invention is presented.
Referring to fig. 2, fig. 2 is a flowchart of a first embodiment of an image obtaining method according to the present invention, where the method is used for a terminal device, and the method includes the following steps:
step S11: an original atmospheric image of the target area is acquired.
The execution subject of the present invention is a terminal device on which an image acquisition program is installed; the terminal device implements the image acquisition method of the present invention when executing the image acquisition program. The original atmospheric image may be obtained by photographing the target area with the photographing module of the terminal device, or the target area may be photographed by a separate photographing device, from which the terminal device then obtains the original atmospheric image. The target area may be any type of land area, and the invention is not limited in this respect; preferably, however, the target area should include a target object (for use as a reference object). The original atmospheric image refers to an image obtained by directly photographing the target area on land; it does not include the influence of any underwater environmental factors.
In addition, the invention aims to process the original atmospheric image to obtain a simulated target underwater image. The target underwater image is obtained by performing underwater environment simulation on the target area; the resulting underwater simulation image is not a real underwater image.
The existing image obtaining method performs operations such as rotation, geometric transformation, scaling, blurring, noise addition or warping on a collected real underwater image to form a new sample image (an underwater image usable as a training sample). However, this merely converts one real two-dimensional underwater image into another two-dimensional image, and the conversion cannot imitate the changes of environment, light and objects in the water, so the resulting two-dimensional sample images have poor effectiveness.
The image obtaining method of the invention can obtain a corresponding simulated target underwater image from the original atmospheric image, without photographing an underwater area to obtain a real underwater image, which reduces the difficulty of obtaining underwater images; at the same time, compared with the sample images obtained with the existing image obtaining method, the images obtained with this method have better effectiveness.
Step S12: and processing the original atmosphere image by utilizing a three-dimensional modeling technology to obtain a three-dimensional model of the target area.
Step S13: and obtaining an initial image based on the preset distance information, the preset angle information and the three-dimensional model.
It should be noted that the original atmospheric image is processed with a three-dimensional modeling technique to obtain the three-dimensional model of the target area. Then, based on preset distance information (the distance between the camera and the target area, typically between the camera and a reference object of the target area) and preset angle information (the angle between the camera and the target area, typically between the camera and a reference object of the target area) set by the user, an initial image of the three-dimensional model is obtained; that is, the initial image is the image the camera would capture if it photographed the three-dimensional model at the preset distance information and the preset angle information.
It can be understood that, for the three-dimensional model of the same target area, different preset distance information and preset angle information yield different initial images. The user may therefore set several pieces of preset angle information and preset distance information for the same three-dimensional model to obtain a plurality of initial images, and obtain a plurality of final target underwater images based on them.
In addition, based on the preset distance information and preset angle information set by the user, the preset distance information and preset angle information corresponding to each pixel in the initial image can be obtained; these values may differ from pixel to pixel.
The three-dimensional modeling technology involved can be any existing three-dimensional modeling technology, and the invention is not limited.
It can be understood that every object in the initial image has a distance and an angle to the camera (the camera of the photographing device or the camera of the terminal device), so the initial image must be obtained based on the three-dimensional model, the external parameters of the camera, the preset distance information and the preset angle information. In the initial image, the preset distance information and the preset angle information comprise a value for each pixel: the preset distance information of a pixel is the distance between the camera and the object point in the target area corresponding to that pixel, and the preset angle information of a pixel is the angle between the camera and that object point.
The initial image is a 2.5-dimensional image carrying the preset distance information; that is, the preset distance information corresponding to each pixel of the initial image is stored with the initial image as additional information.
For example, suppose the initial image of the target area contains 512×512 pixels, and the pixel A_{124,35} in row 124, column 35 corresponds to point A1 of object A in the target area. Then the preset distance information of A_{124,35} is the distance between point A1 and the camera, and the preset angle information of A_{124,35} is the angle between point A1 and the camera.
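Purely as an illustration, such a 2.5-dimensional initial image can be held as an ordinary RGB array plus per-pixel side information. The container below is a minimal sketch; the names InitialImage, rgb, dph and angle are assumptions for this sketch, not terms from the patent.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class InitialImage:
    """Hypothetical container for the 2.5-dimensional initial image:
    ordinary RGB pixel values plus the preset distance and angle
    information stored per pixel as additional information."""
    rgb: np.ndarray    # (H, W, 3) pixel values rendered from the 3D model
    dph: np.ndarray    # (H, W) preset distance from each pixel's object point to the camera
    angle: np.ndarray  # (H, W) preset angle between each pixel's object point and the camera
```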
Step S14: and obtaining the transmissivity based on the preset distance information and the preset attenuation coefficient.
Specifically, step S14 includes: obtaining the transmittance by using a formula I based on the preset distance information and the preset attenuation coefficient;
The first formula is:
t_c(x) = exp(-β_c · d_ph(x))
Wherein t_c(x) is the transmittance corresponding to the x-th pixel of the initial image, β_c is the preset attenuation coefficient, d_ph(x) is the preset distance information corresponding to the x-th pixel, c is a color channel, and c ∈ {r, g, b}.
The transmittance comprises the transmittance of every pixel of the initial image. Each pixel in the initial image is a three-channel pixel; that is, it carries pixel values for the red, green and blue channels. In the present invention, preset attenuation coefficients are provided for ten water types, named I, IA, IB, II, III, 1C, 3C, 5C, 7C and 9C, where I, IA, IB, II and III represent preset attenuation coefficients of deep-sea waters ranging from clear to slightly turbid, and 1C to 9C represent preset attenuation coefficients of coastal water types of gradually increasing turbidity. Referring to table 1, table 1 lists the preset attenuation coefficients for these ten water types:
TABLE 1
Type   I       IA      IB      II      III     1C      3C      5C      7C      9C
R      0.341   0.342   0.349   0.375   0.426   0.439   0.498   0.564   0.635   0.755
G      0.049   0.0503  0.0572  0.078   0.121   0.121   0.198   0.314   0.494   0.777
B      0.021   0.0253  0.0325  0.110   0.139   0.240   0.400   0.650   0.693   1.24
As shown in table 1, the preset attenuation coefficients of the same water type include R, G and B preset attenuation coefficients, i.e., the preset attenuation coefficients of one water type each include preset attenuation coefficients corresponding to red light, green light, and blue light. It can be understood that the three colors of red, green and blue related to each pixel point need to be processed by using the preset attenuation coefficient of the corresponding color light, so as to obtain the corresponding transmittance of the pixel point. In table 1, the preset attenuation coefficient of red light represents the preset attenuation coefficient corresponding to 650nm, the preset attenuation coefficient of green light represents the preset attenuation coefficient corresponding to 525nm, and the preset attenuation coefficient of blue light represents the preset attenuation coefficient corresponding to 475 nm.
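A minimal sketch of formula I under these conventions follows: Table 1 is stored as a lookup keyed by water type, and the per-channel transmittance is evaluated over a whole preset-distance map. The names ATTENUATION and transmittance are illustrative assumptions, not terms from the patent.

```python
import numpy as np

# Preset attenuation coefficients from Table 1, as (R, G, B) triples per water type.
ATTENUATION = {
    "I":   (0.341, 0.049,  0.021),
    "IA":  (0.342, 0.0503, 0.0253),
    "IB":  (0.349, 0.0572, 0.0325),
    "II":  (0.375, 0.078,  0.110),
    "III": (0.426, 0.121,  0.139),
    "1C":  (0.439, 0.121,  0.240),
    "3C":  (0.498, 0.198,  0.400),
    "5C":  (0.564, 0.314,  0.650),
    "7C":  (0.635, 0.494,  0.693),
    "9C":  (0.755, 0.777,  1.24),
}


def transmittance(dph: np.ndarray, water_type: str) -> np.ndarray:
    """Formula I, t_c(x) = exp(-beta_c * d_ph(x)), evaluated per channel.

    dph is an (H, W) map of preset scene-camera distances; the result is an
    (H, W, 3) array of per-channel transmittances in [0, 1]."""
    beta = np.asarray(ATTENUATION[water_type])            # shape (3,)
    return np.exp(-beta[None, None, :] * dph[..., None])  # broadcast to (H, W, 3)
```

For example, transmittance(dph, "3C") applies the 3C coastal-water coefficients to every pixel of the preset-distance map at once.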
Step S15: and obtaining a target underwater image of the target area based on the transmittance and the initial image.
Further, before step S15, the method further includes: acquiring depth information of pixel points in the initial image; correspondingly, step S15 includes: the target underwater image is obtained based on the transmittance, the initial image, and the depth information.
It should be noted that each pixel in the initial image has depth information, which may be set by the user according to requirements. Generally, the user sets a preset depth, and the terminal device obtains the depth information of each pixel in the initial image based on it. The preset depth may be the depth information of a reference pixel in the initial image, from which the depth information of the other pixels is derived; alternatively, the preset depth may be the depth of a reference point of the target area, in which case the depth information of the selected pixel corresponding to the reference point is obtained from the depth of the reference point, and the depth information of the other pixels is obtained from the depth information of the selected pixel. The target underwater image obtained at this point takes the depth information into account, so its effectiveness is improved.
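The patent leaves open exactly how the per-pixel depth information D(x) is propagated from the single preset depth. The sketch below adopts one simple, purely assumed convention, offsetting the reference depth by each pixel's preset-distance difference; a constant map D(x) = preset depth would be the simplest alternative.

```python
import numpy as np


def depth_map(dph: np.ndarray, ref_row: int, ref_col: int, ref_depth: float) -> np.ndarray:
    """Hypothetical propagation of depth information D(x) from a reference pixel.

    Each pixel's depth is taken as the user-set reference depth plus the amount
    by which that pixel's preset scene-camera distance exceeds the reference
    pixel's; this propagation rule is an assumption, not fixed by the patent."""
    return ref_depth + (dph - dph[ref_row, ref_col])
```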
Further, before step S15, the method further includes: acquiring a preset motion blur operator; correspondingly, step S15 includes: and obtaining the target underwater image based on the transmissivity, the initial image, the depth information and the preset motion blur operator.
It should be noted that in a real underwater environment the target area is affected by water flow, so the camera and the target area move relative to each other, causing motion blur. The influence of motion blur therefore needs to be considered; that is, the target underwater image needs to be obtained based on the transmittance, the initial image, the depth information and the preset motion blur operator. Typically, the preset motion blur operator is applied by convolving the directly attenuated scene radiance with a motion blur kernel. The target underwater image obtained at this point additionally takes motion blur into account, so its effectiveness is further improved.
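As one possible preset motion blur operator, the sketch below builds a horizontal linear-motion point spread function and convolves a single channel with it, matching the * operation in formula II. The kernel shape and the helper names are assumptions; the patent does not fix a specific kernel.

```python
import numpy as np
from scipy.ndimage import convolve


def linear_motion_kernel(length: int = 9) -> np.ndarray:
    """A horizontal linear-motion PSF: a normalised row of ones, so the
    convolution averages each pixel with its neighbours along the motion path."""
    kernel = np.zeros((length, length))
    kernel[length // 2, :] = 1.0 / length
    return kernel


def blur(channel: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Convolve one (H, W) image channel with the motion blur operator F."""
    return convolve(channel, kernel, mode="nearest")
```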
Further, before step S15, the method further includes: acquiring a preset real underwater image set; acquiring an average pixel value of a selected area in each real underwater image in the preset real underwater image set; obtaining a result pixel value corresponding to the preset real underwater image set based on the average pixel value of the selected area in each real underwater image; normalizing the result pixel value to obtain a background light coefficient; correspondingly, step S15 includes: and obtaining the target underwater image based on the transmissivity, the initial image, the depth information, the preset motion blur operator and the background light coefficient.
Specifically, the step of obtaining the average pixel value of the selected area in each real underwater image in the preset real underwater image set includes: dividing each real underwater image by using a hierarchical search technique of quadtree subdivision to obtain four rectangular areas corresponding to each real underwater image; obtaining the standard deviation of the pixel values and the average pixel value of each of the four rectangular areas; determining the selected rectangular area with the largest difference between the standard deviation of the pixel values and the average pixel value among the four rectangular areas; dividing the selected rectangular area by using the hierarchical search technique of quadtree subdivision to update the four rectangular areas, and returning to the step of obtaining the standard deviation of the pixel values and the average pixel value of each of the four rectangular areas, until the size of the selected rectangular area meets a preset condition, and determining the selected rectangular area meeting the preset condition as the selected area; and calculating the average pixel value of the selected area.
It should be noted that, when calculating the average pixel value for each real underwater image, a new selected rectangular area is obtained each time the selected rectangular area is divided with the hierarchical quadtree search. Once the new selected rectangular area meets the preset condition, the division stops and that area is determined as the selected area. The preset condition may be that the size ratio of the new selected rectangular area to the original real underwater image is not more than 1/8 or 1/16; typically, once the selected rectangular area is no more than 1/8 or 1/16 of the image, further division changes the obtained background light coefficient very little, so the division need not continue.
In addition, normalizing the result pixel value maps it into the range 0 to 1, giving the background light coefficient. The background light coefficients include a red, a green and a blue background light coefficient; that is, the result pixel values include a red, a green and a blue result pixel value, and the other pixel values involved in computing them (average pixel values, standard deviations of pixel values, etc.) likewise have red, green and blue components.
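The quadtree search and the normalization can be sketched as follows. This is an illustrative reading that scores each quadrant by its mean pixel value minus its standard deviation (a common convention in background light estimation) and stops once the region's side length is no more than 1/8 of the original; both the scoring convention and the exact stopping rule as written here are assumptions.

```python
import numpy as np


def background_light(img: np.ndarray, min_frac: float = 1 / 8) -> np.ndarray:
    """Estimate the background light of one real underwater image by the
    quadtree hierarchical search described above.

    img is an (H, W, 3) array with values already normalised to [0, 1].
    Each level splits the current region into four rectangles and keeps the
    one scoring highest on mean - std of its pixel values, until the region
    is no more than min_frac of the original side length; the per-channel
    average pixel value of that selected area is returned."""
    region = img
    h0, w0 = img.shape[:2]
    while region.shape[0] > h0 * min_frac and region.shape[1] > w0 * min_frac:
        h, w = region.shape[0] // 2, region.shape[1] // 2
        quads = [region[:h, :w], region[:h, w:], region[h:, :w], region[h:, w:]]
        scores = [q.mean() - q.std() for q in quads]
        region = quads[int(np.argmax(scores))]
    return region.reshape(-1, 3).mean(axis=0)  # average pixel value of the selected area


# Averaging background_light over every image in the preset real underwater
# image set gives the result pixel value; normalising it into [0, 1] (already
# the case here if the inputs are normalised) gives the background light
# coefficient a_c for each colour channel.
```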
In a specific application, several preset real underwater image sets corresponding to several underwater environments are selected; each preset real underwater image set yields the background light coefficient of one underwater environment, so several underwater environments correspond to several background light coefficients. Typically, the underwater environments are chosen to be representative, and no fewer than four are used. In another embodiment, the user may set the background light coefficient directly as required; typically, the R component is then set smaller than the G and B components.
Specifically, step S15 includes: obtaining the target underwater image by using a formula II based on the transmittance, the initial image, the depth information, the preset motion blur operator and the background light coefficient;
The formula II is as follows:
I_c(x) = [J_c(x) · exp(-β_c · D(x)) · t_c(x)] * F + a_c · (1 - t_c(x))
Wherein I_c(x) is the pixel value corresponding to the x-th pixel in the target underwater image, J_c(x) is the pixel value corresponding to the x-th pixel in the initial image, D(x) is the depth information corresponding to the x-th pixel, F is the preset motion blur operator, * is the convolution operation, and a_c is the background light coefficient.
In formula II, the term [J_c(x) · exp(-β_c · D(x)) · t_c(x)] * F represents the light that remains after natural light is attenuated, by the absorption and scattering of water, from the water surface down to the water depth, is reflected by the scene, and is further attenuated over the scene-camera distance before reaching the camera, namely the directly attenuated scene radiance; a_c · (1 - t_c(x)) represents the part of the ambient light that enters the camera due to scattering.
In a specific application, formula II is evaluated on the pixel value of every pixel of the initial image to obtain a corresponding new pixel value; the image formed by the new pixel values of all the pixels of the initial image is the target underwater image.
In addition, each pixel in the initial image has red, green and blue channels, and the pixel value of each channel must be processed with the background light coefficient and the transmittance of the corresponding color in order to obtain the corresponding new pixel value.
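Putting the pieces together, the sketch below reuses the transmittance, ATTENUATION and blur helpers from the earlier sketches to evaluate formula II channel by channel. It is an illustration of the reconstruction given above, not the patent's reference implementation.

```python
import numpy as np


def synthesize_underwater(rgb, dph, depth, water_type, a, kernel):
    """Evaluate formula II for every pixel of the initial image:
    I_c = [J_c * exp(-beta_c * D) * t_c] conv F + a_c * (1 - t_c).

    rgb is the (H, W, 3) initial image J with values in [0, 1]; dph and depth
    are (H, W) maps of preset distance and depth information; a is the
    length-3 background light coefficient; kernel is the motion blur operator F."""
    t = transmittance(dph, water_type)                   # (H, W, 3), formula I
    beta = np.asarray(ATTENUATION[water_type])
    direct = rgb * np.exp(-beta * depth[..., None]) * t  # directly attenuated scene radiance
    out = np.empty_like(direct)
    for c in range(3):                                   # convolve each colour channel with F
        out[..., c] = blur(direct[..., c], kernel)
    return out + np.asarray(a) * (1.0 - t)               # add scattered ambient light
```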
Referring to fig. 3-5, fig. 3 is a schematic view of an original target underwater image and a target underwater image added with depth information; FIG. 4 is a schematic diagram of an original target underwater image and a target underwater image subjected to motion blur processing; fig. 5 is a schematic view of an original target underwater image and a target underwater image with a changed background light coefficient.
In fig. 3, the left side is the original target underwater image corresponding to an initial image, obtained from the transmittance and the initial image without considering depth information, motion blur, background light coefficient or other factors. The right side is the target underwater image for the same initial image with depth information added, the depth information being 1 m (the depth information of the reference pixel); this target underwater image considers depth information but not motion blur, background light coefficient or other factors, and its effectiveness is high.
In fig. 4, the left side is again the original target underwater image obtained from the transmittance and the initial image without considering depth information, motion blur, background light coefficient or other factors. The right side is the target underwater image subjected to motion blur processing, which considers motion blur but not depth information, background light coefficient or other factors. Its effectiveness is high.
In fig. 5, the left side is again the original target underwater image obtained from the transmittance and the initial image without considering depth information, motion blur, background light coefficient or other factors. The right side is the target underwater image with a changed background light coefficient, which considers the background light coefficient but not motion blur, depth information or other factors. Its effectiveness is high.
Referring to fig. 6, fig. 6 is a schematic view of real underwater images and a target underwater image of the present invention. The three images on the left are real underwater images corresponding to different underwater environments, and the image on the right is a target underwater image obtained by simulation from an initial image (processed with the image obtaining method). This target underwater image considers depth information, background light coefficient, motion blur and other factors, so its similarity to the real underwater images of the corresponding underwater environments is high, and its effectiveness is high.
The technical scheme of the invention provides an image acquisition method, which comprises: acquiring an original atmospheric image of a target area; processing the original atmospheric image by using a three-dimensional modeling technology to obtain a three-dimensional model of the target area; obtaining an initial image based on preset distance information, preset angle information and the three-dimensional model; obtaining a transmittance based on the preset distance information and a preset attenuation coefficient; and obtaining a target underwater image of the target area based on the transmittance and the initial image.
The existing image obtaining method performs rotation, geometric transformation, scaling, blurring, noise addition, warping and other operations on an actually collected underwater image to obtain training images usable as training samples. Those training images do not take into account the influence of the light transmittance of the underwater environment, so they cannot accurately reflect the real information of the corresponding underwater area, and their effectiveness is poor. By contrast, the method of the invention derives the transmittance from the preset distance information and the preset attenuation coefficient and uses it to generate the target underwater image, so the simulated image reflects the optical properties of the underwater environment. The image obtaining method therefore achieves the technical effect of improving the effectiveness of the target underwater image.
Referring to fig. 7, fig. 7 is a block diagram showing the construction of a first embodiment of an image obtaining apparatus according to the present invention, the apparatus being for a terminal device, the apparatus comprising:
An acquisition module 10 for acquiring an original atmospheric image of a target area;
a modeling module 20, configured to process the raw atmospheric image using a three-dimensional modeling technique to obtain a three-dimensional model of the target region;
A first obtaining module 30, configured to obtain an initial image based on preset distance information, preset angle information, and the three-dimensional model;
a second obtaining module 40, configured to obtain a transmittance based on the preset distance information and a preset attenuation coefficient;
a third obtaining module 50 is configured to obtain a target underwater image of the target area based on the transmittance and the initial image.
The foregoing description is only of the optional embodiments of the present invention, and is not intended to limit the scope of the invention, and all the equivalent structural changes made by the description of the present invention and the accompanying drawings or the direct/indirect application in other related technical fields are included in the scope of the invention.

Claims (7)

1. A method of obtaining an image, the method comprising the steps of:
acquiring an original atmospheric image of a target area;
processing the original atmospheric image by using a three-dimensional modeling technology to obtain a three-dimensional model of the target area;
obtaining an initial image based on preset distance information, preset angle information and the three-dimensional model;
obtaining a transmittance based on the preset distance information and a preset attenuation coefficient;
and obtaining a target underwater image of the target area based on the transmittance and the initial image;
wherein, prior to the step of obtaining a target underwater image of the target area based on the transmittance and the initial image, the method further comprises:
acquiring depth information of pixel points in the initial image, a preset motion blur operator and a background light coefficient;
The step of obtaining a target underwater image of the target area based on the transmittance and the initial image includes:
Obtaining the target underwater image by using a formula II based on the transmissivity, the initial image, the depth information, the preset motion blur operator and the background light coefficient;
The formula II is as follows:
I_c(x) = [J_c(x) · exp(-β_c · D(x)) · t_c(x)] * F + a_c · (1 - t_c(x))
Wherein I_c(x) is the pixel value corresponding to the x-th pixel in the target underwater image, J_c(x) is the pixel value corresponding to the x-th pixel in the initial image, D(x) is the depth information corresponding to the x-th pixel in the depth information, F is the preset motion blur operator, * is the convolution operation, a_c is the background light coefficient, and t_c(x) is the transmittance corresponding to the x-th pixel of the initial image.
2. The method of claim 1, wherein the step of obtaining the background light coefficients comprises:
acquiring a preset real underwater image set;
Acquiring an average pixel value of a selected area in each real underwater image in the preset real underwater image set;
Obtaining a result pixel value corresponding to the preset real underwater image set based on the average pixel value of the selected area in each real underwater image;
and carrying out normalization processing on the result pixel value to obtain the background light coefficient.
3. The method of claim 2, wherein the step of obtaining an average pixel value for a selected region in each of the set of preset real underwater images comprises:
Dividing each real underwater image by using a hierarchical search technology of quadtree subdivision to obtain four rectangular areas corresponding to each real underwater image;
obtaining the standard deviation of the pixel values and the average pixel value of each of the four rectangular areas;
determining the selected rectangular area with the largest difference between the standard deviation of the pixel values and the average pixel value among the four rectangular areas;
dividing the selected rectangular area by using the hierarchical search technique of quadtree subdivision to update the four rectangular areas, and returning to the step of obtaining the standard deviation of the pixel values and the average pixel value of each of the four rectangular areas, until the size of the selected rectangular area meets a preset condition, and determining the selected rectangular area meeting the preset condition as the selected area;
An average pixel value for the selected region is calculated.
4. The method of claim 3, wherein the step of obtaining the transmittance based on the preset distance information and a preset attenuation coefficient comprises:
Based on the preset distance information and the preset attenuation coefficient, obtaining the transmissivity by using a formula I;
The first formula is:
t_c(x) = exp(-β_c · d_ph(x))
Wherein t_c(x) is the transmittance corresponding to the x-th pixel of the initial image, β_c is the preset attenuation coefficient, d_ph(x) is the preset distance information corresponding to the x-th pixel, c is a color channel, and c ∈ {r, g, b}.
5. An image acquisition apparatus, characterized in that the apparatus comprises:
The acquisition module is used for acquiring an original atmospheric image of the target area;
The modeling module is used for processing the original atmospheric image by utilizing a three-dimensional modeling technology so as to obtain a three-dimensional model of the target area;
the first obtaining module is used for obtaining an initial image based on preset distance information, preset angle information and the three-dimensional model;
the second obtaining module is used for obtaining the transmissivity based on the preset distance information and the preset attenuation coefficient;
A third obtaining module for obtaining a target underwater image of the target area based on the transmittance and the initial image;
The third obtaining module is further configured to obtain depth information of a pixel point in the initial image, a preset motion blur operator, and a background light coefficient;
The third obtaining module is further configured to obtain the target underwater image by using a formula II based on the transmittance, the initial image, the depth information, the preset motion blur operator and the background light coefficient; the formula II is as follows: I_c(x) = [J_c(x) · exp(-β_c · D(x)) · t_c(x)] * F + a_c · (1 - t_c(x)), wherein I_c(x) is the pixel value corresponding to the x-th pixel in the target underwater image, J_c(x) is the pixel value corresponding to the x-th pixel in the initial image, D(x) is the depth information corresponding to the x-th pixel in the depth information, F is the preset motion blur operator, * is the convolution operation, a_c is the background light coefficient, and t_c(x) is the transmittance corresponding to the x-th pixel of the initial image.
6. A terminal device, characterized in that the terminal device comprises: a memory, a processor, and an image acquisition program stored on the memory and runnable on the processor; when the image acquisition program is executed by the processor, the steps of the image acquisition method according to any one of claims 1 to 4 are implemented.
7. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon an image obtaining program which, when executed by a processor, implements the steps of the image obtaining method according to any one of claims 1 to 4.
CN202110376075.XA 2021-04-07 2021-04-07 Image acquisition method, apparatus, device and computer readable storage medium Active CN113160386B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110376075.XA CN113160386B (en) 2021-04-07 2021-04-07 Image acquisition method, apparatus, device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110376075.XA CN113160386B (en) 2021-04-07 2021-04-07 Image acquisition method, apparatus, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113160386A (en) 2021-07-23
CN113160386B (en) 2024-08-27

Family

ID=76889016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110376075.XA Active CN113160386B (en) 2021-04-07 2021-04-07 Image acquisition method, apparatus, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113160386B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115456892B (en) * 2022-08-31 2023-07-25 北京四维远见信息技术有限公司 2.5-dimensional visual image automatic geometric correction method, device, equipment and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070480A (en) * 2019-02-26 2019-07-30 青岛大学 A kind of analogy method of underwater optics image
CN112488955A (en) * 2020-12-08 2021-03-12 大连海事大学 Underwater image restoration method based on wavelength compensation

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100268048B1 (en) * 1996-10-28 2000-11-01 고바야시 마사키 Underwater laser imaging apparatus
JP2010271845A (en) * 2009-05-20 2010-12-02 Mitsubishi Electric Corp Image reading support device
US11024047B2 (en) * 2015-09-18 2021-06-01 The Regents Of The University Of California Cameras and depth estimation of images acquired in a distorting medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070480A (en) * 2019-02-26 2019-07-30 青岛大学 A kind of analogy method of underwater optics image
CN112488955A (en) * 2020-12-08 2021-03-12 大连海事大学 Underwater image restoration method based on wavelength compensation

Also Published As

Publication number Publication date
CN113160386A (en) 2021-07-23

Similar Documents

Publication Publication Date Title
US12254544B2 (en) Image-text fusion method and apparatus, and electronic device
EP3061071B1 (en) Method, apparatus and computer program product for modifying illumination in an image
CN112767281B (en) Image ghost eliminating method and device, electronic equipment and storage medium
CN109117806B (en) Gesture recognition method and device
CN113706440B (en) Image processing method, device, computer equipment and storage medium
CN109712097B (en) Image processing method, image processing device, storage medium and electronic equipment
EP4164217A1 (en) White balance control method and apparatus, and terminal device and storage medium
CN113034523B (en) Image processing method, device, storage medium and computer equipment
DE102021004572A1 (en) Denoise images rendered using Monte Carlo renditions
CN109214996A (en) A kind of image processing method and device
CN113052923A (en) Tone mapping method, tone mapping apparatus, electronic device, and storage medium
CN112733688B (en) Attribute value prediction method, device, terminal device and computer-readable storage medium of house
CN114078083A (en) Hair transformation model generation method and device, and hair transformation method and device
CN114332553A (en) Image processing method, device, equipment and storage medium
CN113160386B (en) Image acquisition method, apparatus, device and computer readable storage medium
CN111415308A (en) Ultrasonic image processing method and communication terminal
CN112183217B (en) Gesture recognition method, interaction method based on gesture recognition and mixed reality glasses
CN112150396B (en) Hyperspectral image dimension reduction method and device, terminal equipment and storage medium
CN113553128B (en) Screen corner generation method, device, electronic device and storage medium
CN115035313B (en) Black-neck crane identification method, device, equipment and storage medium
CN114298895B (en) Image realism style migration method, device, equipment and storage medium
CN117218507A (en) Image processing model training method, image processing device and electronic equipment
CN112446846B (en) Fusion frame acquisition method, device, SLAM system and storage medium
US11315223B2 (en) Image processing apparatus, image processing method, and computer-readable recording medium
CN103593822A (en) Method and device carrying out frosted special efficacy treatment to data image

Legal Events

Code: Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant