
CN112585945A - Focusing method, device and equipment - Google Patents


Info

Publication number
CN112585945A
CN112585945A (application CN202080004236.6A)
Authority
CN
China
Prior art keywords
image, region, focused, imaging, acquisition device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080004236.6A
Other languages
Chinese (zh)
Inventor
任创杰
胡晓翔
封旭阳
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN112585945A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H04N 23/675 Focus control based on electronic image sensor signals comprising setting of focusing regions

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

A focusing method, apparatus, and device are provided. The method includes the following steps: acquiring an imaging image through an image acquisition device, and identifying foreground pixels in the imaging image using a preset algorithm; determining at least a partial area occupied by the foreground pixels in the imaging image as the area to be focused corresponding to the power equipment to be inspected; and adjusting lens parameters of the image acquisition device according to that area, so as to adjust the definition of the power equipment to be inspected in the imaging image of the image acquisition device. The image acquisition device can thus focus on the power equipment to be inspected, which improves focusing accuracy and helps improve the definition of the photographed power equipment.

Description

Focusing method, device and equipment
Technical Field
The present application relates to the field of shooting technologies, and in particular, to a focusing method, apparatus, and device.
Background
Focusing refers to the process of changing the image distance through the camera's focusing mechanism so that the photographed subject is imaged clearly; focusing can be automatic or manual.
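For background, the relationship being exploited here is the standard thin-lens equation (not stated in the patent itself); it links the focal length $f$, the object distance $u$, and the image distance $v$:

```latex
\frac{1}{f} = \frac{1}{u} + \frac{1}{v}
```

Since the focal length is fixed while focusing, a change in the object distance $u$ changes the image distance $v$ at which a sharp image forms, and the focusing mechanism compensates by moving the lens relative to the imaging plane.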
Currently, automatic focusing first selects a target area and then focuses according to that area. The target area selected for auto-focusing is generally rectangular; for example, a rectangular area at the center of the frame may be selected as the target area. However, since the photographed subject is generally not rectangular, the selected rectangular target area may include background content such as buildings or trees in addition to the subject, such as a person.
Therefore, the above method of focusing based on a rectangular target area suffers from low focusing accuracy.
Disclosure of Invention
The embodiments of the present application provide a focusing method, apparatus, and device, to solve the prior-art problem of low focusing accuracy when focusing according to a rectangular target area.
In a first aspect, an embodiment of the present application provides a focusing method applied to an unmanned aerial vehicle for power inspection. An image acquisition device is arranged on the unmanned aerial vehicle and is used to capture images including the power equipment to be inspected while the unmanned aerial vehicle performs power inspection. The method includes the following steps:
acquiring an imaging image through an image acquisition device, and identifying foreground pixels in the imaging image by adopting a preset algorithm;
determining at least a partial area occupied by the foreground pixels in the imaging image as the area to be focused corresponding to the power equipment to be inspected in the imaging image;
and adjusting lens parameters of the image acquisition device according to the area to be focused corresponding to the power equipment to be inspected, so as to adjust the definition of the power equipment to be inspected in the imaging image of the image acquisition device, thereby causing the image acquisition device to focus on the power equipment to be inspected.
In a second aspect, an embodiment of the present application provides a focusing method, including:
acquiring an imaging image through an image acquisition device, and identifying foreground pixels in the imaging image by adopting a preset algorithm;
determining at least a partial area occupied by the foreground pixels in the imaging image as the area to be focused in the imaging image;
and adjusting lens parameters of the image acquisition device according to the area to be focused, so as to adjust the definition of the object corresponding to the area to be focused in the imaging image of the image acquisition device, thereby causing the image acquisition device to focus on the object corresponding to the area to be focused.
In a third aspect, an embodiment of the present application provides an unmanned aerial vehicle, which includes a body of the unmanned aerial vehicle, and a power system, an image acquisition device and a focusing device that are disposed on the body;
the power system is used for providing power for the unmanned aerial vehicle;
the image acquisition device is used to capture images including the power equipment to be inspected while the unmanned aerial vehicle performs power inspection;
the focusing device comprises a memory and a processor;
the memory for storing program code;
the processor, invoking the program code, when executed, is configured to:
acquiring an imaging image through an image acquisition device, and identifying foreground pixels in the imaging image by adopting a preset algorithm;
determining at least a partial area occupied by the foreground pixels in the imaging image as the area to be focused corresponding to the power equipment to be inspected in the imaging image;
and adjusting lens parameters of the image acquisition device according to the area to be focused corresponding to the power equipment to be inspected, so as to adjust the definition of the power equipment to be inspected in the imaging image of the image acquisition device, thereby causing the image acquisition device to focus on the power equipment to be inspected.
In a fourth aspect, an embodiment of the present application provides a focusing apparatus, including: a memory and a processor;
the memory for storing program code;
the processor, invoking the program code, when executed, is configured to:
acquiring an imaging image through an image acquisition device, and identifying foreground pixels in the imaging image by adopting a preset algorithm;
determining at least a partial area occupied by the foreground pixels in the imaging image as the area to be focused in the imaging image;
and adjusting lens parameters of the image acquisition device according to the area to be focused, so as to adjust the definition of the object corresponding to the area to be focused in the imaging image of the image acquisition device, thereby causing the image acquisition device to focus on the object corresponding to the area to be focused.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, the computer program comprising at least one code segment executable by a computer to control the computer to perform the method of any one of the above first aspects.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, the computer program comprising at least one code segment executable by a computer to control the computer to perform the method of any one of the above second aspects.
In a seventh aspect, an embodiment of the present application provides a computer program, which is used to implement the method of any one of the above first aspects when the computer program is executed by a computer.
In an eighth aspect, the present application provides a computer program, which is used to implement the method of any one of the above second aspects when the computer program is executed by a computer.
The embodiments of the present application provide a focusing method, apparatus, and device. An imaging image is acquired through an image acquisition device, and foreground pixels in the imaging image are identified using a preset algorithm. At least a partial area occupied by the foreground pixels in the imaging image is determined as the area to be focused corresponding to the power equipment to be inspected, and lens parameters of the image acquisition device are adjusted according to that area, so as to adjust the definition of the power equipment to be inspected in the imaging image of the image acquisition device. In this way, during power inspection, focusing is adjusted according to the area the power equipment occupies in the imaging image, so the image acquisition device can focus on the power equipment to be inspected. This improves focusing accuracy and helps improve the definition of the photographed power equipment.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic view of an application scenario of a focusing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a focusing method according to an embodiment of the present application;
fig. 3A is a schematic diagram of a foreground pixel provided in the present application;
fig. 3B is a schematic diagram of an area to be focused according to an embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating a focusing method according to another embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a neural network model provided in an embodiment of the present application;
FIG. 6 is a flowchart illustrating a focusing method according to another embodiment of the present application;
FIG. 7 is a schematic view of a preset direction provided in an embodiment of the present application;
FIG. 8A is a schematic view of a region of interest provided by an embodiment of the present application;
FIGS. 8B and 8C are schematic views of an enlarged area provided by embodiments of the present application;
fig. 9 is a schematic diagram illustrating a user being prompted with an area to be focused according to an embodiment of the present application;
FIG. 10 is a flowchart illustrating a focusing method according to another embodiment of the present application;
FIG. 11 is a schematic structural diagram of a focusing device according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of an unmanned aerial vehicle provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The focusing method provided by the embodiment of the present application can be applied to the focusing system 10 shown in fig. 1, and the focusing system 10 can include an image capturing device 11 and a focusing device 12. The image acquisition device 11 is used for acquiring images; the focusing device 12 may obtain an imaging image from the image capturing device, and perform processing based on the imaging image by using the focusing method provided in the embodiment of the present application to adjust the definition of the subject to be captured in the imaging image of the image capturing device 11, so that the image capturing device 11 can focus on the subject to be captured. The image acquisition device 11 includes a visible light camera, an infrared camera, and the like.
It should be noted that the focusing system 10 can be applied to any scene requiring focusing control. For example, the focusing system 10 may be applied to a digital camera, a smart phone providing a photographing function, an unmanned aerial vehicle, and the like.
According to the focusing method provided by the embodiments of the present application, an imaging image is acquired through the image acquisition device, foreground pixels in the imaging image are identified using a preset algorithm, at least a partial area occupied by the foreground pixels is determined as the area to be focused, and lens parameters of the image acquisition device are adjusted according to that area, so as to adjust the definition of the object corresponding to the area to be focused. Focusing of the image acquisition device is thus adjusted according to the area occupied by the foreground pixels in the imaging image.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Fig. 2 is a schematic flow chart of a focusing method according to an embodiment of the present application, and an execution main body of the embodiment may be a focusing device. As shown in fig. 2, the method of this embodiment may include:
step 201, an imaging image is acquired through an image acquisition device, and a foreground pixel in the acquired imaging image is identified by adopting a preset algorithm.
In this step, the imaging image is used to perform focusing control on the image acquisition device, and the imaging image may include a foreground object and a background object, where the foreground object may be regarded as a subject to be photographed, and the subject to be photographed may be, for example, a person, an animal, an electric power device, and the like, according to different photographing requirements.
Images are composed of individual pixels, and imaged images are no exception. The pixels in the imaged image may be divided into foreground pixels and background pixels based on foreground objects and background objects in the imaged image. The foreground pixels in the imaged image may refer to pixels occupied by foreground objects in the imaged image, and the background pixels in the imaged image may refer to pixels occupied by background objects in the imaged image.
Specifically, a preset algorithm may be adopted to identify foreground pixels corresponding to the foreground object from the pixels of the imaging image. For example, foreground pixels corresponding to foreground objects may be identified from all pixels of the imaged image, or foreground pixels corresponding to foreground objects may be identified from pixels corresponding to partial regions of the imaged image. It should be noted that, for the specific algorithm used for identifying the foreground pixels in the imaging image, the design can be flexibly performed according to the requirements. Optionally, a neural network model can be adopted to identify the foreground pixels, so that the algorithm design difficulty is reduced, and the identification accuracy is improved.
Step 202, determining at least a partial area occupied by the foreground pixels in the imaged image as an area to be focused in the imaged image.
In this step, the area to be focused refers to the area on which the image capturing device is to focus. Since the foreground object is usually the subject to be photographed, the area to be focused in the imaged image corresponds to the foreground pixels in the imaged image.
Based on this, the region to be focused can be determined from the foreground pixels in the imaged image. Specifically, the area occupied by the foreground pixel in the imaged image may be determined as the area to be focused in the imaged image.
Taking the foreground and background pixels shown in fig. 3A as an example, the region to be focused may be the diagonally filled region shown in fig. 3B. Each small box in figs. 3A and 3B may represent one pixel: a value of 0 in a box represents a background pixel, and a value of 1 represents a foreground pixel.
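The mapping from a binary foreground map like fig. 3A to a region to be focused can be sketched as follows. This is an illustrative assumption rather than the patent's exact procedure: the helper name `focus_region` and the bounding-box output are invented here, since the text only requires at least a partial area occupied by the foreground pixels.

```python
# Illustrative sketch (not the patent's exact algorithm): collect the
# foreground pixels of a binary mask (1 = foreground, 0 = background)
# and derive a tight bounding box as one simple "region to be focused".

def focus_region(mask):
    """Return foreground pixel coordinates and their bounding box
    (row_min, col_min, row_max, col_max), or None if no foreground."""
    coords = [(r, c) for r, row in enumerate(mask)
                     for c, v in enumerate(row) if v == 1]
    if not coords:
        return [], None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return coords, (min(rows), min(cols), max(rows), max(cols))

# A toy 4x4 mask in the spirit of fig. 3A.
mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
coords, box = focus_region(mask)
```

Here `box` is `(1, 1, 2, 2)`: it covers exactly the 2x2 block of foreground pixels and includes no background pixels.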
Step 203, adjusting lens parameters of the image acquisition device according to the region to be focused, so as to adjust the definition of the object corresponding to the region to be focused in the imaging image of the image acquisition device, so that the image acquisition device focuses on the object corresponding to the region to be focused.
In this step, the lens parameters of the image acquisition device are adjusted based on the requirement that the device focus on the region to be focused. Focusing changes the image distance rather than the focal length of the lens: the distance between the imaging plane and the lens is adjusted so that the distance from the imaging plane to the optical center equals the image distance, allowing the object to be imaged clearly on the photosensitive element. The process of adjusting the image acquisition device so that the subject to be photographed images clearly is the focusing process. Specifically, focusing can be realized based on the region to be focused: the lens parameters are adjusted so that the object corresponding to the region to be focused (that is, the object corresponding to the foreground pixels) images clearly, which can be understood as the image acquisition device focusing on that object. Since the object corresponding to the foreground pixels is usually the subject to be photographed, the subject is imaged clearly.
The lens parameter adjusted according to the area to be focused may specifically be a parameter that affects the imaging plane and the lens distance of the image capturing device. In one embodiment, the focus ring may be used to change the distance from the clearest plane to the lens, and the lens parameters adjusted according to the region to be focused may specifically be the rotation direction and the rotation number of the focus ring.
In this embodiment, an imaging image is acquired through the image acquisition device, foreground pixels in the imaging image are identified using a preset algorithm, at least a partial area occupied by the foreground pixels is determined as the area to be focused, and lens parameters of the image acquisition device are adjusted according to that area, so as to adjust the definition of the object corresponding to the area to be focused. Focusing of the image acquisition device is thus adjusted according to the area occupied by the foreground pixels. Because the subject to be photographed usually forms the foreground, and the area occupied by the foreground pixels contains no background information, the image acquisition device can focus on the object corresponding to the foreground pixels. Compared with related technologies in which the image area used for focusing control includes background information, this avoids the low focusing accuracy caused by background interference, improves focusing accuracy, and therefore improves the definition of the photographed subject.
Fig. 4 is a schematic flowchart of a focusing method according to another embodiment of the present application, and this embodiment mainly describes an alternative implementation manner based on the embodiment shown in fig. 2. As shown in fig. 4, the method of this embodiment may include:
step 401, acquiring an imaging image through an image acquisition device, inputting the imaging image into a first neural network model trained in advance, and obtaining a first output result, where the first output result includes confidence that each pixel in the imaging image is a foreground pixel.
In this step, the first Neural network model may be a Convolutional Neural Network (CNN) model. The structure of the first neural network model may be as shown in fig. 5, for example. As shown in fig. 5, the first neural network model may include a plurality of computing nodes, each computing node may include a convolution (Conv) layer, a Batch Normalization (BN) layer, and an activation function ReLU, the computing nodes may be connected by using a Skip Connection (Skip Connection), input data of K × H × W may be input into the first neural network model, and output data of C × H × W may be obtained after the processing by the first neural network model. Wherein, K may represent the number of input channels, and K may be equal to 3, and respectively correspond to three channels of red (R, red), green (G, green), and blue (B, blue); h may represent the height of the imaged image, W may represent the width of the imaged image, and C equal to 2 may represent the number of output channels as 2.
The first output result of the first neural network model may include confidence feature maps output by 2 output channels, respectively, where the 2 output channels may correspond to 2 object categories one to one, the 2 object categories may be a foreground category and a background category, respectively, and a pixel value of the confidence feature map of a single object category is used to characterize a probability that a pixel is the object category.
Assume the first output result includes confidence feature map 1, corresponding to the foreground class, and confidence feature map 2, corresponding to the background class. If the pixel value at position (100, 100) in confidence feature map 1 is 90, the probability that the pixel at (100, 100) is a foreground pixel is 90%; if the pixel value at position (100, 80) in confidence feature map 2 is 20, the probability that the pixel at (100, 80) is a background pixel is 20%.
Step 402, determining foreground pixels in the imaged image according to the first output result.
In this step, for example, a pixel in the imaged image whose confidence of being a foreground pixel is greater than a preset threshold may be determined to be a foreground pixel, based on the first output result. For example, based on confidence feature map 1 described above, pixels whose probability of being a foreground pixel exceeds 80% may be determined as foreground pixels.
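The thresholding in step 402 can be sketched as below. The 0-100 confidence scale and the threshold of 80 mirror the example values in the text, while the helper name is an assumption of this sketch.

```python
# Sketch of step 402 (assumed helper name): keep pixel positions whose
# foreground-class confidence exceeds a preset threshold.

def foreground_pixels(conf_map, threshold=80):
    """Positions (row, col) whose foreground confidence > threshold."""
    return {(r, c) for r, row in enumerate(conf_map)
                   for c, p in enumerate(row) if p > threshold}

# Toy 2x3 foreground confidence map on the 0-100 scale used in the text.
conf_map_1 = [
    [10, 90, 70],
    [85, 20, 81],
]
fg = foreground_pixels(conf_map_1)
```

With the 80% threshold from the example, `fg` contains exactly the positions whose confidence is 90, 85, and 81.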
As an alternative to steps 401 and 402, foreground pixels in the imaged image may be identified based on a pre-trained second neural network model. Illustratively, steps 401 and 402 may be replaced by steps A and B as follows.
Step A, acquiring an imaging image through an image acquisition device, inputting the imaging image into a pre-trained second neural network model, and obtaining a second output result, wherein the second output result comprises confidence coefficients that each pixel in the imaging image is each foreground type pixel in at least one foreground type;
and B, determining pixels of each foreground category in the imaged image according to the second output result so as to obtain foreground pixels in the imaged image.
The second neural network model may be specifically a CNN model, and the structure of the second neural network model is similar to that of the first neural network model shown in fig. 5, and the difference is mainly that the number C of output channels is greater than 2.
The second output result of the second neural network model may include confidence feature maps output by C output channels, C is greater than 2, the C output channels may correspond to C object classes one to one, the C object classes may specifically include a plurality of specific foreground classes and background classes, and the pixel values of the confidence feature maps of a single object class are used to characterize the probability that a pixel is the object class.
Assume the second output result includes confidence feature map 3, corresponding to specific foreground category 1; confidence feature map 4, corresponding to specific foreground category 2; and confidence feature map 5, corresponding to the background category. If the pixel value at position (100, 100) in confidence feature map 3 is 90, the probability that the pixel at (100, 100) is a foreground pixel of specific foreground category 1 is 90%. If the pixel value at position (100, 60) in confidence feature map 4 is 85, the probability that the pixel at (100, 60) is a foreground pixel of specific foreground category 2 is 85%. If the pixel value at position (100, 80) in confidence feature map 5 is 20, the probability that the pixel at (100, 80) is a background pixel is 20%.
For example, pixels in the imaged image, which are foreground class pixels and have a confidence level greater than a preset threshold, may be determined as foreground pixels based on the second output result. For example, based on the confidence feature map 3 and the confidence feature map 4, the pixels with the probability of being foreground class pixels greater than 80% in the confidence feature map 3 and the confidence feature map 4 may be determined as foreground pixels.
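For the multi-class case, the per-category thresholding described above can be sketched as a union over the foreground-category confidence maps. The helper name and the toy map values are assumptions of this sketch; the background-category map is simply not passed in.

```python
# Sketch (assumed helper name): a pixel is a foreground pixel if its
# confidence exceeds the threshold in any specific foreground-category
# confidence map.

def multiclass_foreground(foreground_maps, threshold=80):
    """Union of positions exceeding the threshold in any category map."""
    fg = set()
    for conf_map in foreground_maps:
        for r, row in enumerate(conf_map):
            for c, p in enumerate(row):
                if p > threshold:
                    fg.add((r, c))
    return fg

map_3 = [[90, 10], [5, 5]]   # specific foreground category 1
map_4 = [[5, 85], [5, 5]]    # specific foreground category 2
fg_all = multiclass_foreground([map_3, map_4])
```

Here `fg_all` combines the category-1 hit at (0, 0) with the category-2 hit at (0, 1).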
By adopting the second neural network model to identify the foreground pixels, the foreground pixels of specific categories in the imaging image can be identified, so that focusing based on the area occupied by the foreground pixels of specific categories is realized, the pertinence of identifying the foreground pixels is improved, and the focusing accuracy is improved.
Step 403, determining at least a partial region occupied by the foreground pixels in the imaged image as a region to be focused in the imaged image.
It should be noted that step 403 is similar to step 202, and is not described herein again.
Step 404, adjusting lens parameters of the image acquisition device according to the region to be focused, so as to adjust the definition of the object corresponding to the region to be focused in the imaging image of the image acquisition device, so that the image acquisition device focuses on the object corresponding to the region to be focused.
In this step, for example, the lens parameters of the image acquisition device may be adjusted according to the region to be focused until the quality of the image in that region meets a certain condition. Because definition is an important index of image quality, the image quality of the region to be focused meeting a certain condition can indicate that its definition meets a certain condition. Once the definition of the region to be focused meets the condition, the object corresponding to the region is imaged clearly; that is, the image acquisition device has focused on the object corresponding to the region to be focused.
It should be noted that the certain condition can be flexibly implemented according to the specific requirement for the image quality, which is not limited in the present application.
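The patent deliberately leaves the "certain condition" open. One common contrast-based scheme, shown here purely as an illustrative assumption (the names `sharpness`, `hill_climb_focus`, and the synthetic capture function are inventions of this sketch), steps the lens parameter while a sharpness score of the region to be focused keeps improving, and stops once it drops:

```python
# Illustrative contrast-based focusing sketch; not the patent's method.

def sharpness(region):
    """Simple contrast metric: variance of the region's pixel intensities."""
    pixels = [p for row in region for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def hill_climb_focus(capture, positions):
    """Step through lens positions; stop when sharpness starts to fall."""
    best_pos, best_score = None, float("-inf")
    for pos in positions:
        score = sharpness(capture(pos))
        if score <= best_score:  # past the peak of the focus curve
            break
        best_pos, best_score = pos, score
    return best_pos

def synthetic_capture(pos):
    """Stand-in for grabbing the region to be focused; sharpest at pos 2."""
    v = 10 - 3 * abs(pos - 2)
    return [[0, v], [v, 0]]
```

With this synthetic focus curve, `hill_climb_focus(synthetic_capture, range(5))` returns 2, the position where the contrast peaks.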
In this embodiment, the imaging image is input into a pre-trained first neural network model to obtain a first output result, foreground pixels in the imaging image are determined from that result, at least a partial region occupied by the foreground pixels is determined as the region to be focused, and lens parameters of the image acquisition device are adjusted according to that region, so as to adjust the definition of the object corresponding to the region to be focused. Foreground pixels in the imaging image are thus determined by a neural network model, and focusing of the image acquisition device is adjusted based on the region they occupy.
Fig. 6 is a schematic flowchart of a focusing method according to another embodiment of the present application, and this embodiment mainly describes another alternative implementation manner based on the embodiment shown in fig. 2. As shown in fig. 6, the method of this embodiment may include:
step 601, acquiring an imaging image through an image acquisition device, and determining a region of interest in the imaging image.
In this step, the region of interest (ROI) refers, in machine vision and image processing, to a region to be processed that is delineated in the image being processed by a rectangle, circle, ellipse, irregular polygon, or the like. Illustratively, the region of interest in the imaging image may be found automatically based on various operators and functions. The region of interest may be, for example, the region corresponding to a tracking frame in a target tracking algorithm; of course, in other embodiments it may also be another type of region, which is not limited in this application. The specific manner of determining the region of interest can be implemented flexibly according to the requirement, which is likewise not limited in this application.
Step 602, processing an image corresponding to the region of interest in the imaging image by using a preset algorithm to identify foreground pixels corresponding to the region of interest in the imaging image.
In this step, optionally, the image corresponding to the region of interest may be the image of the region of interest in the imaging image; that is, the corresponding foreground pixels may be identified based on that image alone.
Alternatively, after the region of interest is determined, it may be expanded by at least one pixel along a preset direction in the imaging image to obtain an expanded region, and the image corresponding to the region of interest is then the image of the expanded region. In this case, the image corresponding to the region of interest includes not only the content inside the region of interest but also the content adjacent to it. If, for some reason, the region of interest fails to cover the complete subject to be photographed, this avoids the problem that the subject's foreground pixels cannot be fully recognized when only the image inside the region of interest is processed.
The preset direction may include one or more of the 8 directions shown in fig. 7. For example, assuming the region of interest is as shown in fig. 8A and the preset direction is direction 1 of the 8 directions, the expanded region obtained by expanding the region of interest by one pixel along that direction may be as shown in fig. 8B. For another example, assuming the region of interest is as shown in fig. 8A and the preset direction includes all 8 directions of fig. 7, the expanded region may be as shown in fig. 8C. It should be noted that each small box in figs. 8A, 8B, and 8C may represent one pixel; fig. 8C takes the same number of expanded pixels in every direction as an example, but it should be understood that different directions may be expanded by different numbers of pixels.
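The expansion of the region of interest along preset directions can be sketched as follows; representing the region as inclusive pixel coordinates `(x0, y0, x1, y1)` and reducing the 8 directions of fig. 7 to combinations of four cardinal offsets are assumptions made for illustration.

```python
def expand_roi(roi, pixels, directions, width, height):
    """Expand a region of interest (x0, y0, x1, y1) by `pixels` along the
    chosen directions, clipped to the bounds of the imaging image.
    `directions` is a subset of {"left", "right", "up", "down"}; the 8
    directions in the text reduce to combinations of these four."""
    x0, y0, x1, y1 = roi
    if "left" in directions:
        x0 -= pixels
    if "right" in directions:
        x1 += pixels
    if "up" in directions:
        y0 -= pixels
    if "down" in directions:
        y1 += pixels
    # Never expand past the borders of the imaging image.
    return (max(0, x0), max(0, y0), min(width - 1, x1), min(height - 1, y1))
```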
Similarly, in step 602, the image corresponding to the region of interest may be processed based on a neural network model to identify the corresponding foreground pixels. Optionally, the image corresponding to the region of interest may be input into the pre-trained first neural network model to obtain an output result containing, for each pixel of that image, the confidence that the pixel is a foreground pixel; the foreground pixels of the image corresponding to the region of interest are then determined according to that output, yielding the foreground pixels corresponding to the region of interest in the imaging image. Alternatively, the image corresponding to the region of interest may be input into a pre-trained second neural network model to obtain an output result containing, for each pixel, the confidence that it belongs to each of at least one foreground category; the pixels of each foreground category are then determined according to that output, yielding the foreground pixels corresponding to the region of interest in the imaging image.
It should be noted that the manner of processing the image corresponding to the region of interest based on the first or second neural network model is similar to the manner of processing the whole imaging image based on those models in the embodiment shown in fig. 4, and the details are not repeated here.
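Turning either model's output into foreground pixels can be sketched with simple thresholding, as a rough illustration; the array layouts (`confidence[h, w]` for the first model, `class_conf[c, h, w]` for the second) and the 0.5 threshold are assumptions.

```python
import numpy as np

def foreground_mask(confidence: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """First model: `confidence[h, w]` is the confidence that each pixel
    is a foreground pixel; keep pixels above the preset threshold."""
    return confidence > threshold

def multiclass_foreground_mask(class_conf: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Second model: `class_conf[c, h, w]` holds, for each foreground
    category c, the confidence that a pixel belongs to that category.
    A pixel counts as foreground if any category clears the threshold."""
    return (class_conf > threshold).any(axis=0)
```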
Step 603, determining at least a partial region occupied by the foreground pixels in the imaged image as a region to be focused in the imaged image.
It should be noted that step 603 is similar to step 202, and is not described here again.
Step 604, adjusting the lens parameters of the image acquisition device according to the region to be focused, so as to adjust the definition of the object corresponding to the region to be focused in the imaging image, such that the image acquisition device focuses on that object.
It should be noted that step 604 is similar to step 203 and step 404, and is not described herein again.
In this embodiment, an imaging image is acquired by the image acquisition device; a region of interest in the imaging image is determined; a preset algorithm is used to process the image corresponding to the region of interest so as to identify the corresponding foreground pixels; at least a partial region occupied by those foreground pixels is determined as the region to be focused; and the lens parameters of the image acquisition device are adjusted according to the region to be focused, so as to adjust the definition of the corresponding object in the imaging image. The focusing of the image acquisition device is thus adjusted according to the region occupied by the foreground pixels corresponding to the region of interest. Since the region of interest usually contains the object of interest, those foreground pixels are usually the pixels of the object of interest; focusing based on the region they occupy therefore ensures that the subject brought into focus is the object of interest, thereby ensuring the accuracy of focusing on the subject to be photographed.
On the basis of the above embodiment, optionally, the method may further include: prompting the user with the region to be focused in the shooting interface, so that the user knows the current focusing region. For example, the imaging image may be displayed to the user with the region to be focused marked in it, e.g. in the manner of fig. 9, where the region framed in black represents the region to be focused. Of course, in other embodiments the region to be focused may be presented to the user in other manners, which is not limited in this application.
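One possible way to mark the region to be focused in the displayed imaging image (as with the black frame of fig. 9) is to draw a one-pixel frame around it; the grayscale preview and the inclusive-coordinate region representation are assumptions for illustration.

```python
import numpy as np

def mark_focus_region(preview: np.ndarray, region, value: int = 0) -> np.ndarray:
    """Draw a one-pixel frame around the region to be focused
    (x0, y0, x1, y1, inclusive) so the user sees the current focus area."""
    marked = preview.copy()
    x0, y0, x1, y1 = region
    marked[y0, x0:x1 + 1] = value       # top edge
    marked[y1, x0:x1 + 1] = value       # bottom edge
    marked[y0:y1 + 1, x0] = value       # left edge
    marked[y0:y1 + 1, x1] = value       # right edge
    return marked
```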
On the basis of prompting the user with the region to be focused in the shooting interface, optionally, the method may further include: obtaining an adjusted region to be focused based on the user's adjustment operation on the region to be focused; and adjusting the lens parameters of the image acquisition device according to the adjusted region until the quality of its image satisfies a certain condition. The adjustment operation can be implemented flexibly according to the requirement, and illustratively includes one or more of the following: a position adjustment operation, a shape adjustment operation, or a size adjustment operation.
By obtaining the adjustment operation and adjusting the lens parameters according to the adjusted region to be focused, the user can change the focusing region of the image acquisition device as needed, so that the device focuses on the region adjusted by the user, which helps improve the flexibility of focusing.
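A minimal sketch of applying a user's adjustment operation to the region to be focused before refocusing might look as follows; the `(x, y, w, h)` region representation and the operation names are hypothetical, and a shape adjustment would need a richer region model than shown here.

```python
def adjust_region(region, op, dx=0, dy=0, dw=0, dh=0):
    """Apply a user adjustment to the region to be focused (x, y, w, h):
    move it (position adjustment) or grow/shrink it (size adjustment)."""
    x, y, w, h = region
    if op == "move":
        return (x + dx, y + dy, w, h)
    if op == "resize":
        # Keep at least one pixel in each dimension after shrinking.
        return (x, y, max(1, w + dw), max(1, h + dh))
    raise ValueError(f"unknown adjustment operation: {op}")
```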
Fig. 10 is a schematic flowchart of a focusing method according to still another embodiment of the present application; on the basis of the foregoing embodiments, this embodiment mainly describes a specific implementation in which the focusing method is applied to power inspection by an unmanned aerial vehicle. An image acquisition device is arranged on the unmanned aerial vehicle and is used, during power inspection, to capture images including the power equipment to be inspected. As shown in fig. 10, the method of this embodiment may include:
step 101, acquiring an imaging image through an image acquisition device, and identifying foreground pixels in the imaging image by adopting a preset algorithm.
In this step, the power equipment to be inspected can be regarded as the subject to be photographed. The power equipment to be inspected may include, for example, electric wires, utility poles, solar panels of a photovoltaic power plant, and the like. Of course, in other embodiments the power equipment to be inspected may also be other equipment, which is not limited in this application.
It should be noted that, for a specific way of identifying foreground pixels in an imaged image, reference may be made to the related description of the foregoing embodiments, and details are not repeated here.
Step 102, determining at least a partial area occupied by the foreground pixels in the imaging image as the area to be focused corresponding to the power equipment to be inspected.
For another example, whether the current foreground pixels are pixels of the power equipment to be inspected may be determined directly from the shape they form. For instance, when the foreground pixels form a linear shape, they can be determined to be pixels corresponding to an electric wire to be inspected. There are also cases in which some foreground pixels do not belong to the power equipment to be inspected; such interfering foreground pixels can be removed by similar means, e.g. foreground pixels that form the preset shape are retained while foreground pixels that do not form the preset shape are deleted.
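The linear-shape test mentioned above (foreground pixels forming a line suggest a wire) can be approximated by measuring the elongation of the foreground pixel cloud; using the eigenvalue ratio of the coordinate covariance, and the threshold of 20, are illustrative assumptions rather than the application's prescribed criterion.

```python
import numpy as np

def looks_linear(mask: np.ndarray, ratio: float = 20.0) -> bool:
    """Heuristic: the foreground pixels form a roughly linear shape
    (e.g. an electric wire) when the pixel cloud is much longer along
    its principal axis than across it, i.e. the eigenvalue ratio of
    the coordinate covariance exceeds `ratio`."""
    ys, xs = np.nonzero(mask)
    if len(xs) < 2:
        return False
    cov = np.cov(np.stack([xs, ys]).astype(float))
    evals = np.sort(np.linalg.eigvalsh(cov))
    return evals[0] == 0 or evals[1] / max(evals[0], 1e-9) > ratio
```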
If the foreground pixels in the imaging image are pixels corresponding to the power equipment to be inspected, i.e. the equipment appears as the foreground of the imaging image, the region to be focused corresponding to that equipment can be determined based on the foreground pixels.
If the foreground pixels in the imaging image are not pixels corresponding to the power equipment to be inspected, i.e. the equipment does not appear as the foreground of the imaging image, the region occupied by the foreground pixels cannot be used for focusing on that equipment, so the region to be focused cannot be determined from these foreground pixels. In that case, focusing control may be performed, by the method of steps 101 to 103, on a new imaging image captured by the image acquisition device for focusing. Specifically, the preset algorithm may be used to identify the foreground pixels in the new imaging image and to judge whether they are pixels of the power equipment to be inspected; if they are, the region they occupy in the new imaging image is determined as the region to be focused on the equipment, and the lens parameters of the image acquisition device are adjusted according to that region so as to adjust the definition of the equipment in the imaging image, such that the image acquisition device focuses on the power equipment to be inspected.
Optionally, in order for the image acquisition device to obtain a new imaging image in which the power equipment to be inspected is the foreground, the unmanned aerial vehicle and/or the gimbal carrying the image acquisition device can be controlled to change the device's field of view, and imaging images can be acquired continuously until a new imaging image is obtained in which the pixels of the equipment are foreground pixels, thereby facilitating focusing on the power equipment to be inspected.
In this step, since the power equipment to be inspected appears as the foreground of the imaging image, the region occupied by the foreground pixels is the region corresponding to that equipment in the imaging image. Moreover, since images of the equipment need to be captured during power inspection, the region to be focused can be used for focusing on the power equipment to be inspected, so that the equipment is imaged clearly and the image acquisition device can capture clear images of the power equipment to be inspected.
Step 103, adjusting lens parameters of the image acquisition device according to a to-be-focused area corresponding to the to-be-inspected electric power equipment in the imaging image so as to adjust the definition of the to-be-inspected electric power equipment in the imaging image of the image acquisition device, so that the image acquisition device focuses on the to-be-inspected electric power equipment.
For example, step 103 may specifically include: adjusting the lens parameters of the image acquisition device according to the region to be focused until the quality of the image of the region to be focused satisfies a certain condition. For a detailed description of this adjustment, reference may be made to the foregoing embodiments, which is not repeated here.
In this embodiment, an imaging image is acquired by the image acquisition device; a preset algorithm is used to identify the foreground pixels in the imaging image; at least a partial region occupied by the foreground pixels is determined as the region to be focused corresponding to the power equipment to be inspected; and the lens parameters of the image acquisition device are adjusted according to that region so as to adjust the definition of the equipment in the imaging image. In this way, during power inspection the focusing of the image acquisition device can be adjusted according to the region the equipment occupies in the imaging image, so that the device focuses on the power equipment to be inspected, which improves the accuracy of focusing and helps improve the definition of the captured images of the power equipment to be inspected.
On the basis of the embodiment shown in fig. 10, the following steps may further be included before step 101: controlling the unmanned aerial vehicle to fly to a target waypoint at which an image needs to be captured; and adjusting the attitude of the unmanned aerial vehicle and/or the attitude of the gimbal carrying the image acquisition device according to the inspection parameters corresponding to the target waypoint, so that the power equipment to be inspected appears as the foreground in the current imaging of the image acquisition device. On this basis, an imaging image in which the power equipment to be inspected is the foreground can be obtained for focusing by the image acquisition device.
Optionally, the unmanned aerial vehicle may fly automatically, along a cruise route, to the target waypoint at which the image needs to be captured, or it may be flown there under a user's manual control via a control device.
Since the image acquisition device is arranged on the unmanned aerial vehicle, the attitude of the vehicle affects the field of view of the image acquisition device; adjusting the attitude of the unmanned aerial vehicle can therefore make the power equipment to be inspected appear as the foreground in the current imaging of the image acquisition device.
Optionally, the image acquisition device may be arranged on the unmanned aerial vehicle through a gimbal, which can change the orientation, and hence the field of view, of the image acquisition device. The attitude of the gimbal therefore also affects the field of view, and adjusting the attitude of the gimbal can likewise make the power equipment to be inspected appear as the foreground in the current imaging.
Optionally, after step 103, the method may further include: capturing an image including the power equipment to be inspected while the image acquisition device is focused on it. On this basis, a clear image of the power equipment to be inspected can be saved, so that fault detection can subsequently be performed on the equipment based on that clear image.
Fig. 11 is a schematic structural diagram of a focusing device according to an embodiment of the present application, and as shown in fig. 11, the focusing device 110 may include: a processor 111 and a memory 112.
The memory 112 for storing program codes;
the processor 111, which invokes the program code, when the program code is executed, is configured to:
acquiring an imaging image through an image acquisition device, and identifying foreground pixels in the imaging image by adopting a preset algorithm;
determining at least partial area occupied by the foreground pixels in the imaging image as an area to be focused in the imaging image;
and adjusting the lens parameters of the image acquisition device according to the area to be focused so as to adjust the definition of the object corresponding to the area to be focused in the imaging image of the image acquisition device, so that the image acquisition device focuses the object corresponding to the area to be focused.
The focusing apparatus provided in this embodiment may be used to implement the technical solutions of the method embodiments shown in fig. 2, fig. 4, and fig. 6, and the implementation principle and technical effects are similar to those of the method embodiments, and are not described herein again.
Fig. 12 is a schematic structural diagram of the unmanned aerial vehicle provided in an embodiment of the present application, and as shown in fig. 12, the unmanned aerial vehicle 120 may include: a body 121, a power system 122 arranged on the body 121, an image acquisition device 123 and a focusing device 124;
the power system 122 is used for providing power for the unmanned aerial vehicle;
the image acquisition device 123 is configured to capture an image including power equipment to be inspected in a process of performing power inspection by the unmanned aerial vehicle;
the focusing device 124 includes a memory and a processor;
the memory for storing program code;
the processor, invoking the program code, when executed, is configured to:
acquiring an imaging image through an image acquisition device, and identifying foreground pixels in the imaging image by adopting a preset algorithm;
determining at least partial area occupied by the foreground pixels in the imaging image as an area to be focused corresponding to the power equipment to be patrolled;
and adjusting the lens parameters of the image acquisition device according to the to-be-focused area corresponding to the to-be-inspected electric power equipment in the imaging image so as to adjust the definition of the to-be-inspected electric power equipment in the imaging image of the image acquisition device, so that the image acquisition device focuses on the to-be-inspected electric power equipment.
Optionally, the drone 120 may further include a gimbal 125, and the image acquisition device 123 may be arranged on the fuselage 121 through the gimbal 125. Of course, the drone may include other elements or devices besides those listed above, which are not enumerated here.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (46)

1. A focusing method is applied to an unmanned aerial vehicle for power inspection and is characterized in that an image acquisition device is arranged on the unmanned aerial vehicle and used for shooting an image including power equipment to be inspected in the process of power inspection of the unmanned aerial vehicle; the method comprises the following steps:
acquiring an imaging image through an image acquisition device, and identifying foreground pixels in the imaging image by adopting a preset algorithm;
determining at least partial area occupied by the foreground pixels in the imaging image as an area to be focused corresponding to the power equipment to be patrolled;
and adjusting the lens parameters of the image acquisition device according to the to-be-focused area corresponding to the to-be-inspected electric power equipment in the imaging image so as to adjust the definition of the to-be-inspected electric power equipment in the imaging image of the image acquisition device, so that the image acquisition device focuses on the to-be-inspected electric power equipment.
2. The method of claim 1, further comprising:
determining a region of interest in the imaged image;
the identifying foreground pixels in the imaged image by using a preset algorithm includes:
and processing an image corresponding to the region of interest in the imaging image by adopting a preset algorithm so as to identify foreground pixels corresponding to the region of interest in the imaging image.
3. The method of claim 2, wherein the image corresponding to the region of interest is an image of the region of interest in the imaging image.
4. The method of claim 2, further comprising:
in the imaging image, expanding at least one pixel of the region of interest along a preset direction to obtain an expanded region;
the image corresponding to the region of interest is the image of the enlarged region in the imaging image.
5. The method according to any one of claims 1-4, wherein said identifying foreground pixels in said imaged image using a predetermined algorithm comprises:
inputting the imaging image into a first neural network model trained in advance to obtain a first output result, wherein the first output result comprises confidence coefficients that each pixel in the imaging image is a foreground pixel;
determining foreground pixels in the imaged image according to the first output result.
6. The method of claim 5, wherein determining foreground pixels in the imaged image based on the first output comprises:
and determining, according to the first output result, pixels in the imaged image whose confidence of being a foreground pixel is greater than a preset threshold as foreground pixels.
7. The method according to any one of claims 1-4, wherein said identifying foreground pixels in said imaged image using a predetermined algorithm comprises:
inputting the imaging image into a pre-trained second neural network model to obtain a second output result, wherein the second output result comprises confidence coefficients that each pixel in the imaging image is each foreground type pixel in at least one foreground type;
and determining pixels of each foreground category in the imaged image according to the second output result so as to obtain foreground pixels in the imaged image.
And determining a region to be focused corresponding to the power equipment to be patrolled according to the foreground pixel in the imaging image.
8. The method according to any one of claims 1 to 4, wherein the adjusting of the lens parameters of the image acquisition device according to the to-be-focused area corresponding to the to-be-inspected electric power equipment in the imaging image to adjust the definition of the to-be-inspected electric power equipment in the imaging image of the image acquisition device so as to focus the image acquisition device on the to-be-inspected electric power equipment comprises:
and adjusting the lens parameters of the image acquisition device according to the area to be focused until the quality of the image of the area to be focused meets a certain condition.
9. The method according to any one of claims 1-4, further comprising:
controlling the unmanned aerial vehicle to fly to a target waypoint needing to shoot an image;
and adjusting the posture of the unmanned aerial vehicle and/or the posture of a holder used for carrying the image acquisition device according to the inspection parameters corresponding to the target waypoints, so that the power equipment to be inspected can be used as a foreground in the current imaging of the image acquisition device.
10. The method according to any one of claims 1-4, further comprising: and under the condition that the image acquisition device focuses on the electric power equipment to be patrolled and examined, shooting an image including the electric power equipment to be patrolled and examined.
11. A focusing method, comprising:
acquiring an imaging image through an image acquisition device, and identifying foreground pixels in the imaging image by adopting a preset algorithm;
determining at least partial area occupied by the foreground pixels in the imaging image as an area to be focused in the imaging image;
and adjusting the lens parameters of the image acquisition device according to the area to be focused so as to adjust the definition of the object corresponding to the area to be focused in the imaging image of the image acquisition device, so that the image acquisition device focuses the object corresponding to the area to be focused.
12. The method of claim 11, further comprising:
determining a region of interest in the imaged image;
the identifying foreground pixels in the imaged image by using a preset algorithm includes:
and processing an image corresponding to the region of interest in the imaging image by adopting a preset algorithm so as to identify foreground pixels corresponding to the region of interest in the imaging image.
13. The method of claim 12, wherein the image corresponding to the region of interest is an image of the region of interest in the imaging image.
14. The method of claim 12, further comprising:
in the imaging image, expanding at least one pixel of the region of interest along a preset direction to obtain an expanded region;
the image corresponding to the region of interest is the image of the enlarged region in the imaging image.
15. The method according to any one of claims 11-14, wherein said identifying foreground pixels in said imaged image using a predetermined algorithm comprises:
inputting the imaging image into a first neural network model trained in advance to obtain a first output result, wherein the first output result comprises confidence coefficients that each pixel in the imaging image is a foreground pixel;
determining foreground pixels in the imaged image according to the first output result.
16. The method of claim 15, wherein determining foreground pixels in the imaged image based on the first output comprises:
and determining, according to the first output result, pixels in the imaged image whose confidence of being a foreground pixel is greater than a preset threshold as foreground pixels.
17. The method according to any one of claims 11-14, wherein said identifying foreground pixels in said imaged image using a predetermined algorithm comprises:
inputting the imaging image into a pre-trained second neural network model to obtain a second output result, wherein the second output result comprises confidence coefficients that each pixel in the imaging image is each foreground type pixel in at least one foreground type;
and determining pixels of each foreground category in the imaged image according to the second output result so as to obtain foreground pixels in the imaged image.
18. The method according to any one of claims 11 to 14, wherein the adjusting a lens parameter of the image capturing device according to the region to be focused to focus the image capturing device on the object corresponding to the region to be focused comprises:
and adjusting the lens parameters of the image acquisition device according to the area to be focused until the quality of the image of the area to be focused meets a certain condition.
19. The method according to any one of claims 11-14, further comprising:
presenting the region to be focused to a user in a shooting interface.
20. The method of claim 19, further comprising:
obtaining an adjusted region to be focused based on an adjustment operation performed by the user on the region to be focused;
adjusting the lens parameter of the image acquisition device according to the adjusted region to be focused until the quality of the image of the adjusted region to be focused meets a preset condition.
21. The method of claim 20, wherein the adjustment operation comprises one or more of:
a position adjustment operation, a shape adjustment operation, or a size adjustment operation.
22. An unmanned aerial vehicle, comprising a body of the unmanned aerial vehicle, and a power system, an image acquisition device and a focusing device which are mounted on the body;
the power system is configured to provide power for the unmanned aerial vehicle;
the image acquisition device is configured to capture an image including power equipment to be inspected while the unmanned aerial vehicle performs a power-line inspection;
the focusing device comprises a memory and a processor;
the memory is configured to store program code;
the processor is configured to invoke the program code, and when the program code is executed, to:
acquire an imaging image through the image acquisition device, and identify foreground pixels in the imaging image by using a preset algorithm;
determine at least a partial region occupied by the foreground pixels in the imaging image as a region to be focused corresponding to the power equipment to be inspected;
adjust a lens parameter of the image acquisition device according to the region to be focused corresponding to the power equipment to be inspected in the imaging image, so as to adjust the definition of the power equipment to be inspected in the imaging image of the image acquisition device, so that the image acquisition device focuses on the power equipment to be inspected.
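The claim-22 step of taking "at least a partial region occupied by the foreground pixels" as the region to be focused can be sketched as the tight bounding box of a binary foreground mask. The bounding-box choice is an assumption for illustration; the claim covers any partial region:

```python
import numpy as np

def focus_region_from_mask(mask):
    """Take the tight bounding box of all foreground pixels as the
    region to be focused; returns None if no foreground was found."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))

mask = np.zeros((5, 5), dtype=bool)
mask[1:3, 2:5] = True                # a 2x3 foreground blob
print(focus_region_from_mask(mask))  # (2, 1, 3, 2) as (x, y, w, h)
```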
23. The drone of claim 22, wherein the processor is further to:
determining a region of interest in the imaged image;
wherein the identifying, by the processor, foreground pixels in the imaging image by using a preset algorithm specifically comprises:
processing an image corresponding to the region of interest in the imaging image by using the preset algorithm, so as to identify foreground pixels corresponding to the region of interest in the imaging image.
24. The drone of claim 23, wherein the image corresponding to the region of interest is an image of the region of interest in the imaged image.
25. The drone of claim 23, wherein the processor is further to:
expanding, in the imaging image, the region of interest by at least one pixel along a preset direction to obtain an expanded region;
wherein the image corresponding to the region of interest is the image of the expanded region in the imaging image.
26. The unmanned aerial vehicle according to any one of claims 22 to 25, wherein the identifying, by the processor, foreground pixels in the imaging image by using a preset algorithm specifically comprises:
inputting the imaging image into a pre-trained first neural network model to obtain a first output result, wherein the first output result comprises a confidence that each pixel in the imaging image is a foreground pixel;
determining the foreground pixels in the imaging image according to the first output result.
27. The unmanned aerial vehicle according to claim 26, wherein the determining, by the processor, the foreground pixels in the imaging image according to the first output result specifically comprises:
determining, according to the first output result, pixels in the imaging image whose confidence of being a foreground pixel is greater than a preset threshold as the foreground pixels.
28. The unmanned aerial vehicle according to any one of claims 22 to 25, wherein the identifying, by the processor, foreground pixels in the imaging image by using a preset algorithm specifically comprises:
inputting the imaging image into a pre-trained second neural network model to obtain a second output result, wherein the second output result comprises a confidence that each pixel in the imaging image belongs to each of at least one foreground category;
determining the pixels of each foreground category in the imaging image according to the second output result, so as to obtain the foreground pixels in the imaging image;
determining, according to the foreground pixels in the imaging image, the region to be focused corresponding to the power equipment to be inspected.
29. The unmanned aerial vehicle according to any one of claims 22 to 25, wherein the adjusting, by the processor, a lens parameter of the image acquisition device according to the region to be focused corresponding to the power equipment to be inspected in the imaging image, so as to adjust the definition of the power equipment to be inspected in the imaging image and thereby focus the image acquisition device on the power equipment to be inspected, specifically comprises:
adjusting the lens parameter of the image acquisition device according to the region to be focused until the quality of the image of the region to be focused meets a preset condition.
30. A drone as claimed in any one of claims 22-25, wherein the processor is further configured to:
controlling the unmanned aerial vehicle to fly to a target waypoint at which an image needs to be captured;
adjusting the attitude of the unmanned aerial vehicle and/or the attitude of a gimbal carrying the image acquisition device according to inspection parameters corresponding to the target waypoint, so that the power equipment to be inspected serves as the foreground in the current imaging of the image acquisition device.
31. The unmanned aerial vehicle according to any one of claims 22-25, wherein the processor is further configured to: capture, through the image acquisition device, an image including the power equipment to be inspected when the image acquisition device is focused on the power equipment to be inspected.
32. A focusing device, comprising: a memory and a processor;
the memory for storing program code;
the processor is configured to invoke the program code, and when the program code is executed, to:
acquire an imaging image through an image acquisition device, and identify foreground pixels in the imaging image by using a preset algorithm;
determine at least a partial region occupied by the foreground pixels in the imaging image as a region to be focused in the imaging image;
adjust a lens parameter of the image acquisition device according to the region to be focused, so as to adjust the definition of an object corresponding to the region to be focused in the imaging image of the image acquisition device, so that the image acquisition device focuses on the object corresponding to the region to be focused.
33. The apparatus of claim 32, wherein the processor is further configured to:
determining a region of interest in the imaged image;
wherein the identifying, by the processor, foreground pixels in the imaging image by using a preset algorithm specifically comprises:
processing an image corresponding to the region of interest in the imaging image by using the preset algorithm, so as to identify foreground pixels corresponding to the region of interest in the imaging image.
34. The apparatus of claim 33, wherein the image corresponding to the region of interest is an image of the region of interest in the imaging image.
35. The apparatus of claim 33, wherein the processor is further configured to:
expanding, in the imaging image, the region of interest by at least one pixel along a preset direction to obtain an expanded region;
wherein the image corresponding to the region of interest is the image of the expanded region in the imaging image.
36. The apparatus according to any one of claims 32-35, wherein the identifying, by the processor, foreground pixels in the imaging image by using a preset algorithm specifically comprises:
inputting the imaging image into a pre-trained first neural network model to obtain a first output result, wherein the first output result comprises a confidence that each pixel in the imaging image is a foreground pixel;
determining the foreground pixels in the imaging image according to the first output result.
37. The apparatus according to claim 36, wherein the determining, by the processor, the foreground pixels in the imaging image according to the first output result specifically comprises:
determining, according to the first output result, pixels in the imaging image whose confidence of being a foreground pixel is greater than a preset threshold as the foreground pixels.
38. The apparatus according to any one of claims 32-35, wherein the identifying, by the processor, foreground pixels in the imaging image by using a preset algorithm specifically comprises:
inputting the imaging image into a pre-trained second neural network model to obtain a second output result, wherein the second output result comprises a confidence that each pixel in the imaging image belongs to each of at least one foreground category;
determining the pixels of each foreground category in the imaging image according to the second output result, so as to obtain the foreground pixels in the imaging image.
39. The apparatus according to any one of claims 32 to 35, wherein the adjusting, by the processor, a lens parameter of the image acquisition device according to the region to be focused, so that the image acquisition device focuses on the object corresponding to the region to be focused, specifically comprises:
adjusting the lens parameter of the image acquisition device according to the region to be focused until the quality of the image of the region to be focused meets a preset condition.
40. The apparatus according to any of claims 32-35, wherein the processor is further configured to:
presenting the region to be focused to a user in a shooting interface.
41. The apparatus of claim 40, wherein the processor is further configured to:
obtaining an adjusted region to be focused based on an adjustment operation performed by the user on the region to be focused;
adjusting the lens parameter of the image acquisition device according to the adjusted region to be focused until the quality of the image of the adjusted region to be focused meets a preset condition.
42. The apparatus of claim 41, wherein the adjustment operation comprises one or more of:
a position adjustment operation, a shape adjustment operation, or a size adjustment operation.
43. A computer-readable storage medium, having stored thereon a computer program comprising at least one code section executable by a computer for controlling the computer to perform the method according to any one of claims 1-10.
44. A computer-readable storage medium, having stored thereon a computer program comprising at least one code section executable by a computer for controlling the computer to perform the method according to any one of claims 11-21.
45. A computer program for implementing the method according to any one of claims 1-10 when the computer program is executed by a computer.
46. A computer program for implementing the method according to any of claims 11-21 when the computer program is executed by a computer.
CN202080004236.6A 2020-02-26 2020-02-26 Focusing method, device and equipment Pending CN112585945A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/076839 WO2021168707A1 (en) 2020-02-26 2020-02-26 Focusing method, apparatus and device

Publications (1)

Publication Number Publication Date
CN112585945A (en) 2021-03-30

Family

ID=75145418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080004236.6A Pending CN112585945A (en) 2020-02-26 2020-02-26 Focusing method, device and equipment

Country Status (2)

Country Link
CN (1) CN112585945A (en)
WO (1) WO2021168707A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113992823B (en) * 2021-09-27 2023-12-08 国网浙江省电力有限公司金华供电公司 Secondary equipment fault intelligent diagnosis method based on multiple information sources
CN114845041B (en) * 2021-12-30 2024-03-15 齐之明光电智能科技(苏州)有限公司 Focusing method and device for nanoparticle imaging and storage medium

Citations (15)

Publication number Priority date Publication date Assignee Title
CN104658011A (en) * 2015-01-31 2015-05-27 北京理工大学 Intelligent transportation moving object detection tracking method
CN104766052A (en) * 2015-03-24 2015-07-08 广州视源电子科技股份有限公司 Face recognition method, face recognition system, user terminal and server
CN105096655A (en) * 2014-05-19 2015-11-25 本田技研工业株式会社 Object detection device, driving assistance device, object detection method, and object detection program
CN105391939A (en) * 2015-11-04 2016-03-09 腾讯科技(深圳)有限公司 Unmanned aerial vehicle shooting control method, device, unmanned aerial vehicle shooting method and unmanned aerial vehicle
KR20170022872A (en) * 2016-07-13 2017-03-02 아이디어주식회사 Unmanned aerial vehicle having Automatic Tracking
CN107465855A (en) * 2017-08-22 2017-12-12 上海歌尔泰克机器人有限公司 Image pickup method and device, the unmanned plane of image
CN107729808A (en) * 2017-09-08 2018-02-23 国网山东省电力公司电力科学研究院 A kind of image intelligent acquisition system and method for power transmission line unmanned machine inspection
US20180259960A1 (en) * 2015-08-20 2018-09-13 Motionloft, Inc. Object detection and analysis via unmanned aerial vehicle
CN108810418A (en) * 2018-07-16 2018-11-13 Oppo广东移动通信有限公司 Image processing method, device, mobile terminal and computer readable storage medium
CN108984657A (en) * 2018-06-28 2018-12-11 Oppo广东移动通信有限公司 Image recommendation method and device, terminal, and readable storage medium
CN109343573A (en) * 2018-10-31 2019-02-15 云南兆讯科技有限责任公司 Power equipment inspection figure image collection processing system based on light field technique for taking
CN109743499A (en) * 2018-12-29 2019-05-10 武汉云衡智能科技有限公司 A kind of zoom unmanned plane and zoom unmanned aerial vehicle (UAV) control method applied to image recognition
CN109886209A (en) * 2019-02-25 2019-06-14 成都旷视金智科技有限公司 Anomaly detection method and device, mobile unit
CN110133440A (en) * 2019-05-27 2019-08-16 国电南瑞科技股份有限公司 Electric power unmanned plane and method for inspecting based on Tower Model matching and vision guided navigation
CN110149482A (en) * 2019-06-28 2019-08-20 Oppo广东移动通信有限公司 Focusing method, focusing device, electronic equipment and computer readable storage medium

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US8903119B2 (en) * 2010-10-11 2014-12-02 Texas Instruments Incorporated Use of three-dimensional top-down views for business analytics
CN102780847B (en) * 2012-08-14 2015-10-21 北京汉邦高科数字技术股份有限公司 A kind of video camera auto-focusing control method for moving target
CN103235602B (en) * 2013-03-25 2015-10-28 山东电力集团公司电力科学研究院 A kind of power-line patrolling unmanned plane automatic camera opertaing device and control method
US9584716B2 (en) * 2015-07-01 2017-02-28 Sony Corporation Method and apparatus for autofocus area selection by detection of moving objects
CN105629631B (en) * 2016-02-29 2020-01-10 Oppo广东移动通信有限公司 Control method, control device and electronic device
CN108924419A (en) * 2018-07-09 2018-11-30 国网福建省电力有限公司漳州供电公司 A kind of unmanned plane camera shooting Zoom control system of transmission line-oriented inspection


Also Published As

Publication number Publication date
WO2021168707A1 (en) 2021-09-02

Similar Documents

Publication Publication Date Title
CN110493527B (en) Body focusing method and device, electronic equipment and storage medium
JP2022509034A (en) Bright spot removal using a neural network
US11908160B2 (en) Method and apparatus for context-embedding and region-based object detection
CN112585554A (en) Unmanned aerial vehicle inspection method and device and unmanned aerial vehicle
JP7123884B2 (en) Imaging device, method and program
US8340512B2 (en) Auto focus technique in an image capture device
CN109358648B (en) Unmanned aerial vehicle autonomous flight method and device and unmanned aerial vehicle
CN112484691B (en) Image processing apparatus, distance measuring method, and recording medium
CN105282443A (en) Method for imaging full-field-depth panoramic image
WO2021134179A1 (en) Focusing method and apparatus, photographing device, movable platform and storage medium
US10602064B2 (en) Photographing method and photographing device of unmanned aerial vehicle, unmanned aerial vehicle, and ground control device
CN111765974B (en) A wildlife observation system and method based on a micro-cooled infrared thermal imager
CN109587392B (en) Method and device for adjusting monitoring equipment, storage medium and electronic device
CN112884811A (en) Photoelectric detection tracking method and system for unmanned aerial vehicle cluster
CN110731076A (en) A shooting processing method, device and storage medium
CN104184935A (en) Image shooting device and method
CN114020039A (en) Automatic focusing system and method for UAV inspection tower
CN115578662A (en) Unmanned aerial vehicle front-end image processing method, system, storage medium and equipment
CN113273173A (en) Inspection method and device for movable platform, movable platform and storage medium
CN116185065A (en) UAV inspection method, device and non-volatile storage medium
CN112585945A (en) Focusing method, device and equipment
CN106791353B (en) The methods, devices and systems of auto-focusing
CN114037895A (en) A method of image recognition for UAV tower inspection
CN112631333A (en) Target tracking method and device of unmanned aerial vehicle and image processing chip
US20210337098A1 (en) Neural Network Supported Camera Image Or Video Processing Pipelines

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20210330)