CN116416701A - Inspection method, inspection device, electronic equipment and storage medium - Google Patents
- Publication number
- Publication number: CN116416701A (application CN202310344462.4A)
- Authority
- CN
- China
- Prior art keywords
- inspection
- image
- preset
- target
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C1/00—Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
- G07C1/20—Checking timed patrols, e.g. of watchman
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
Abstract
The application discloses an inspection method, an inspection device, electronic equipment and a computer readable storage medium. The method comprises the following steps: controlling a robot to go to an inspection position corresponding to a current inspection target; after the robot reaches the inspection position, acquiring a first inspection image of the current inspection target captured by a camera of the robot; adjusting the pose of the camera based on the first inspection image so that the current inspection target is located at the center of the shooting picture of the camera; acquiring a plurality of second inspection images of the current inspection target captured by the camera, wherein the plurality of second inspection images have different shooting focal lengths but the same shooting magnification, and the shooting magnification of the plurality of second inspection images is larger than that of the first inspection image; and determining the second inspection image with the best image quality among the plurality of second inspection images as a target inspection image. Through the scheme of the application, the accuracy of the inspection result can be ensured while the success rate of inspection is improved.
Description
Technical Field
The application belongs to the technical field of robots, and particularly relates to an inspection method, an inspection device, electronic equipment and a computer readable storage medium.
Background
With the development of artificial intelligence and robotics, robots carrying artificial intelligence are increasingly used in fields such as security, inspection, medical treatment and health care. Inspection robots in particular, because they can adapt to complex environments and relieve people of repetitive labor, have been widely used in power plants, substations, coal mines, the chemical industry and other industries.
By using an inspection robot, inspection targets at different positions can be automatically checked at regular intervals. To ensure the accuracy of the inspection result, the images shot by the inspection robot during inspection generally use a larger shooting magnification. However, at a larger shooting magnification, if the navigation error of the inspection robot is too large, the inspection target may not appear in the captured image at all, resulting in inspection failure.
Disclosure of Invention
The application provides an inspection method, an inspection device, electronic equipment and a computer readable storage medium, which can ensure the accuracy of the inspection result while improving the success rate of inspection.
In a first aspect, the present application provides a method for inspection, including:
controlling the robot to go to a patrol position corresponding to the current patrol target;
After the robot reaches the inspection position, acquiring a first inspection image acquired by a camera of the robot for a current inspection target;
adjusting the pose of the camera based on the first inspection image so that the current inspection target is positioned at the center of a shooting picture of the camera;
acquiring a plurality of second inspection images of the current inspection target captured by the camera, wherein the plurality of second inspection images have different shooting focal lengths but the same shooting magnification, and the shooting magnification of the plurality of second inspection images is larger than that of the first inspection image;
and determining the second inspection image with the best image quality from the plurality of second inspection images as a target inspection image.
In a second aspect, the present application provides an inspection device, comprising:
the control module is used for controlling the robot to go to the inspection position corresponding to the current inspection target;
the first acquisition module is used for acquiring, after the robot reaches the inspection position, a first inspection image of the current inspection target captured by the camera of the robot;
the adjusting module is used for adjusting the pose of the camera based on the first inspection image so that the current inspection target is positioned at the center of a shooting picture of the camera;
the second acquisition module is used for acquiring a plurality of second inspection images of the current inspection target captured by the camera, wherein the plurality of second inspection images have different shooting focal lengths but the same shooting magnification, and the shooting magnification of the plurality of second inspection images is larger than that of the first inspection image;
And the determining module is used for determining the second inspection image with the best image quality from the plurality of second inspection images as a target inspection image.
In a third aspect, the present application provides an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method of the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, performs the steps of the method of the first aspect described above.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by one or more processors, implements the steps of the method of the first aspect described above.
Compared with the prior art, the beneficial effects of the application are as follows: when performing inspection, the robot first shoots at a smaller shooting magnification, so that the captured first inspection image can contain the current inspection target; the pose of the camera is then adjusted according to the first inspection image so that the current inspection target is located at the center of the shooting picture of the camera; shooting is then performed at a larger shooting magnification, so that each of the captured second inspection images can also contain the current inspection target. Because the shooting focal lengths of the plurality of second inspection images differ, their image quality also differs to some extent, so the target inspection image can be screened out according to image quality, and the image quality of the obtained target inspection image is optimal. In this process, shooting first at a smaller magnification and then at a larger magnification reduces the cases in which the current inspection target cannot be captured during inspection, thereby improving the success rate of inspection; and by screening the second inspection images taken at different shooting focal lengths, the image quality of the finally obtained target inspection image can meet the inspection requirements, thereby ensuring the accuracy of the inspection result.
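The coarse-to-fine capture flow described above can be sketched as follows. The `FakeCamera` stub, its toy sharpness model and all numeric values are hypothetical illustrations, not part of the application:

```python
from dataclasses import dataclass

@dataclass
class FakeCamera:
    # Hypothetical stand-in for the robot's zoom camera; a real implementation
    # would drive hardware instead of simulating sharpness.
    magnification: float = 1.0
    focal_length: float = 50.0

    def capture_sharpness(self) -> float:
        # Toy model: sharpness peaks when the focal length is near 52.0.
        return max(0.0, 100.0 - abs(self.focal_length - 52.0) * 10.0)

def inspect_target(camera, low_mag, high_mag, preset_focal, step, n_steps):
    """Coarse-to-fine inspection: a wide first image, then a focal-length
    sweep at high magnification; the sharpest shot wins."""
    camera.magnification = low_mag   # small magnification: first inspection image
    # (pose adjustment based on the first inspection image would happen here)
    camera.magnification = high_mag  # larger magnification: zoom in on the target
    candidates = [preset_focal + k * step for k in range(-n_steps, n_steps + 1)]
    scored = []
    for f in candidates:             # one second inspection image per focal length
        camera.focal_length = f
        scored.append((camera.capture_sharpness(), f))
    best_score, best_focal = max(scored)  # best image quality = target image
    return best_focal, best_score
```

With the toy sharpness model above, sweeping focal lengths 47–53 around the preset value 50 selects 52.0 as the best-focused shot.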
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and that a person skilled in the art could obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of an implementation flow of an inspection method provided in an embodiment of the present application;
fig. 2 is a block diagram of an inspection device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The following describes the inspection method provided in the embodiments of the present application. The inspection method can be applied to a robot; of course, it may also be applied to other devices that have a communication connection with the robot and can control it, such as a user terminal or a server, which is not limited in this embodiment. For convenience of description, the following explanation takes the case in which the inspection method is applied to a robot as an example. Referring to fig. 1, the inspection method in the embodiment of the present application includes:
Step 101, controlling the robot to go to the inspection position corresponding to the current inspection target.
The robot may initiate an inspection task in response to an inspection instruction issued by another device (e.g., a user terminal or a server); alternatively, the robot may automatically start the inspection task upon reaching a preset inspection time, and the embodiment of the application does not limit the starting time of the inspection task. After the inspection task is started, the robot can traverse the plurality of inspection targets to be inspected to determine the current inspection target. It can be understood that after finishing the inspection of the current inspection target, the robot can determine a new current inspection target and continue the inspection; after all the inspection targets have been inspected, the robot can end the inspection task.
Each inspection target has a corresponding inspection position. The corresponding relation between the inspection target and the inspection position can be stored in the storage space accessible by the robot in advance. The storage space may be a local storage space or a cloud storage space, which is not limited herein. The robot can determine the inspection position corresponding to the current inspection target according to the corresponding relation and move to the inspection position.
Step 102, after the robot reaches the inspection position, acquiring a first inspection image of the current inspection target captured by the camera of the robot.
Even when the robot reports that it has reached the inspection position, it may in fact still be some distance away from that position because of navigation errors. In order to avoid missing the current inspection target, the robot does not immediately shoot at a larger shooting magnification at this point; instead, it shoots at a smaller shooting magnification, expanding the shooting range as much as possible so that the captured image contains the current inspection target. For convenience of description, the image acquired at this point is referred to as the first inspection image.
Step 103, adjusting the pose of the camera based on the first inspection image so that the current inspection target is located at the center of the shooting picture of the camera.
Because of navigation errors, the current inspection target is not necessarily located at the center of the first inspection image; if it is not, then when the shooting magnification is changed, the position of the current inspection target in the shooting picture also changes, which affects detection of the current inspection target. The robot therefore adjusts the pose of the camera based on the first inspection image, so that after the adjustment the current inspection target is located at the center of the shooting picture of the camera.
Step 104, acquiring a plurality of second inspection images of the current inspection target captured by the camera.
After the pose of the camera has been adjusted, that is, once the current inspection target is at the center of the shooting picture, the robot may increase the shooting magnification of the camera to zoom in, so that the current inspection target occupies a larger area in the shooting picture. It will be appreciated that since the current inspection target was already at the center of the shooting picture before zooming in, it will generally remain at the center afterwards. This prevents the current inspection target from leaving the frame after zooming in, and reduces inspection failures caused by failing to capture the current inspection target.
After zooming in, the robot keeps the current shooting magnification unchanged and adjusts the shooting focal length multiple times so as to focus on the current inspection target, finally obtaining a plurality of second inspection images through the camera. Obviously, the shooting focal lengths of these second inspection images differ while their shooting magnification is the same; and because the second inspection images are shot after the camera has zoomed in, their shooting magnification is necessarily larger than that of the first inspection image.
Step 105, determining the second inspection image with the best image quality among the plurality of second inspection images as a target inspection image.
Because the shooting focal lengths of the second inspection images differ, the image quality of the different second inspection images may also differ. In order to ensure inspection accuracy, the robot may determine the second inspection image with the optimal image quality as the target inspection image, where the indexes describing image quality may include, but are not limited to, sharpness, color, noise, etc., which are not limited herein. The robot can then carry out the corresponding inspection processing on the current inspection target based on the target inspection image; for example, when the inspection task is meter reading and the current inspection target is a meter, the robot can perform the reading operation on the determined target inspection image to obtain the current reading of the meter.
In some embodiments, the robot may specifically acquire the first inspection image by:
a1, determining a pre-configuration parameter corresponding to the current inspection target.
For each inspection target, the robot determines configuration parameters (i.e., preconfigured parameters) for the inspection target in advance, on the premise that the inspection target can be shot clearly and centered, and stores the preconfigured parameters in the storage space set forth above. The preconfigured parameters include: a preset navigation position point for the inspection target, a preset robot orientation for the inspection target, a preset camera orientation for the inspection target, a preset focal length for the inspection target, and a first preset magnification for the inspection target.
Because the preconfiguration parameters are in one-to-one correspondence with the current inspection target, the robot can determine the preconfiguration parameters corresponding to the current inspection target from all the stored preconfiguration parameters after determining the current inspection target.
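As a minimal sketch of this one-to-one lookup — every field name and value below is an illustrative assumption, not taken from the application:

```python
# Preconfigured parameters keyed by inspection-target id; all ids, field
# names and values here are hypothetical illustrations.
PRECONFIG = {
    "meter_01": {
        "nav_point": (3.2, 7.5),           # preset navigation position point
        "robot_yaw_deg": 90.0,             # preset robot orientation
        "camera_pan_tilt": (10.0, -5.0),   # preset camera orientation
        "focal_length": 50.0,              # preset focal length
        "first_magnification": 1.0,        # smaller magnification (first image)
        "second_magnification": 8.0,       # larger magnification (second images)
    },
}

def params_for(target_id: str) -> dict:
    """Return the preconfigured parameters for the current inspection target."""
    try:
        return PRECONFIG[target_id]
    except KeyError:
        raise KeyError(f"no preconfigured parameters stored for {target_id!r}")
```

In practice this table would live in the local or cloud storage space mentioned above rather than in code.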
A2, adjusting the pose of the robot, the pose of the camera and the shooting focal length of the camera according to the preconfigured parameters.
According to the foregoing description, the determined preconfigured parameters include a preset navigation position point for the current inspection target and a preset robot orientation for the current inspection target. The robot can adjust the pose of the robot according to the preset navigation position point and the preset robot direction, so that the pose of the robot can be matched with the preset parameters.
As can be seen from the foregoing description, the determined preconfigured parameters further include a preset camera orientation for the current inspection target. The robot can thus adjust the pose of the camera on which it is mounted according to the preset camera orientation so that the pose of the camera can be matched with the preset parameters.
As can be seen from the foregoing description, the determined preconfigured parameters further include a preset focal length for the current inspection target. The robot can thereby adjust the shooting focal length of the camera on which it is mounted so that the shooting focal length of the camera can be matched to the preconfigured parameters.
And A3, after the adjustment is completed, controlling the camera to acquire the first inspection image with the first preset magnification as the shooting magnification.
According to the foregoing description, the determined preconfigured parameters further include a first preset magnification for the current inspection target. After the pose of the robot, the pose of the camera and the shooting focal length of the camera referred to in step A2 have all been adjusted, the robot can control the camera to shoot with the first preset magnification as the shooting magnification, so that the first inspection image can be acquired. It will be appreciated that the first inspection image so captured can generally contain the current inspection target.
Through the process, after the robot reaches the inspection position, the pose of the robot, the pose of the camera carried by the robot and the shooting parameters of the camera can be further adjusted, so that the current inspection target is ensured to be in the shooting range of the camera, and the current inspection target can be clearly shot by the camera.
In some embodiments, the robot may specifically adjust the pose of the camera by:
b1, determining a first preset image corresponding to the current inspection target.
The first preset image is a reference image which is shot by the robot in advance for the current inspection target based on preset parameters. Because the preset parameters corresponding to the current inspection target are determined on the premise that the robot can clearly shoot the current inspection target in the middle, the current inspection target is necessarily located at the center position in the first preset image.
And B2, adjusting the pose of the camera based on the first preset image and the first inspection image.
In the first preset image, the current inspection target is necessarily located at the center position; in the first inspection image, due to navigation errors and the like during inspection, the current inspection target is not necessarily located at the center position. The robot can calculate the image difference between the first preset image and the first inspection image by applying a vision algorithm, and adjust the pose of the camera according to the image difference, so that the composition of the shooting picture of the camera approaches the first preset image, finally achieving the effect that the current inspection target is located at the center of the shooting picture of the camera.
In some embodiments, to improve the efficiency of the camera pose adjustment, the step B2 may be specifically implemented by the following steps:
and C1, calculating a first transformation matrix from the first preset image to the first inspection image.
Because the first preset image and the first inspection image are both images obtained by shooting the current inspection target under the preconfigured parameters, even if navigation errors exist, the two images still contain a large amount of identical image information. The robot can thus calculate the transformation matrix from the first preset image to the first inspection image according to the image information shared by the two. For ease of distinction, this embodiment denotes this transformation matrix as the first transformation matrix.
Specifically, the robot may calculate the first transformation matrix based on image key points, as follows: extract the key points of the first preset image and the key points of the first inspection image respectively, and then match the two sets of key points to obtain a first matching result. By way of example only, the key point extraction and matching may employ the Scale-Invariant Feature Transform (SIFT) algorithm or the SuperPoint algorithm, etc., which is not limited in this embodiment. Because the first matching result describes the correspondence between the key points of the first preset image and the key points of the first inspection image, the robot can calculate the transformation matrix from the first preset image to the first inspection image according to the first matching result.
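In practice this matrix would typically be estimated with a library routine such as OpenCV's `cv2.findHomography` on SIFT matches; the NumPy sketch below shows the underlying direct linear transform (DLT) on already-matched point pairs, as an illustration rather than the application's actual implementation:

```python
import numpy as np

def homography_from_matches(src_pts, dst_pts):
    """Estimate the 3x3 transformation matrix mapping src -> dst from matched
    key-point pairs via the direct linear transform (needs >= 4 matches)."""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    # The flattened homography is the right singular vector of A with the
    # smallest singular value (the null-space direction for exact matches).
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so that H[2, 2] == 1
```

For exact matches related by a pure translation, the recovered matrix is the translation homography itself.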
And C2, calculating pixel coordinates of a pixel point corresponding to the central point of the first preset image in the first inspection image according to the first transformation matrix.
As noted above, the current inspection target is located at the center position of the first preset image, so the center point of the first preset image can be approximately regarded as the center point of the current inspection target; that is, the center point of the first preset image represents the current inspection target. And because the first preset image and the first inspection image are both images obtained by shooting the current inspection target under the preconfigured parameters, the pixel point corresponding to that center point in the first inspection image also represents the current inspection target. With the first transformation matrix obtained, the robot can calculate the pixel coordinates, in the first inspection image, of the pixel point corresponding to the center point of the first preset image. The robot may take these pixel coordinates as an approximation of the coordinates of the center point of the current inspection target in the first inspection image.
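Mapping the center point through the first transformation matrix is a single homogeneous-coordinate multiplication; a minimal sketch (plain Python, with a hypothetical 3x3 matrix given as nested lists):

```python
def transform_point(H, pt):
    """Map pixel (x, y) through a 3x3 transformation matrix H given as nested
    lists: homogeneous multiplication followed by division by w."""
    x, y = pt
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)
```

For example, a pure-translation matrix shifts the center point of a 640x480 preset image by exactly that translation.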
And C3, adjusting the pose of the camera according to the pixel coordinates.
The purpose of adjusting the pose of the camera is to make the current inspection target located at the center of the shooting picture of the camera, that is, to make the center point of the current inspection target coincide with the center point of the shooting picture. Through step C2, the robot knows the pixel coordinates of the center point of the current inspection target, and the coordinates of the center point of the shooting picture are also known to the robot; the robot can therefore quickly determine the offset between the center point of the current inspection target and the center point of the shooting picture of the camera, and adjust the pose of the camera based on this offset, so that the shot object corresponding to the pixel coordinates (i.e., the current inspection target) is finally located at the center of the shooting picture.
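One way to turn the pixel offset into a pose correction is a per-pixel angular calibration; the `deg_per_px` values and the sign convention below are assumptions for illustration, since the actual mapping depends on the camera mount:

```python
def pan_tilt_correction(target_px, frame_size, deg_per_px=(0.05, 0.05)):
    """Convert the offset between the target's pixel coordinates and the frame
    center into pan/tilt corrections. deg_per_px is a hypothetical calibration
    constant, and the signs depend on the actual camera mount."""
    (tx, ty), (w, h) = target_px, frame_size
    dx = tx - w / 2.0  # horizontal pixel error from the frame center
    dy = ty - h / 2.0  # vertical pixel error from the frame center
    return (-dx * deg_per_px[0], -dy * deg_per_px[1])  # (pan, tilt) in degrees
```

A target 10 pixels right of center in a 640x480 frame would, under this assumed calibration, produce a 0.5-degree pan correction and no tilt.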
In some embodiments, the robot may specifically acquire a plurality of second inspection images by:
and D1, fine tuning is carried out by taking the preset focal length as a center, so that a plurality of candidate focal lengths including the preset focal length are obtained.
According to the foregoing description, the preset parameters corresponding to the current inspection target include a preset focal length for the current inspection target. The robot may perform fine tuning within a certain focal length range (a certain focal length span) with the preset focal length as a center, for example, a slight decrease, a slight increase, etc., so as to obtain a plurality of candidate focal lengths including the preset focal length.
It should be noted that the above focal length range may be set according to the depth of field of the camera. If the depth of field is large, the focal length range may be large; conversely, if the depth of field is small, the focal length range may be small. The embodiment of the application does not limit this focal length range.
It should be noted that, theoretically, the smaller the step size adopted for fine adjustment is, the better the subsequent inspection effect is; but correspondingly the longer the processing time. Thus, the robot may set the step size according to the specific requirements of the inspection, which is not limited in the embodiments of the present application.
By way of example only, assume the preset focal length is f0 and the step size adopted for fine tuning is f1; the resulting candidate focal lengths may then include: f0, f0+f1, f0-f1, f0+2f1, f0-2f1, f0+3f1, f0-3f1, etc., which are not described in detail herein.
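The fine-tuning pattern above can be generated as follows (f0, f1 and n are the preset focal length, step size and number of steps per side):

```python
def candidate_focal_lengths(f0, f1, n=3):
    """Candidate focal lengths centered on the preset focal length f0,
    fine-tuned by steps of f1 on both sides: f0, f0+f1, f0-f1, ..."""
    out = [f0]
    for k in range(1, n + 1):
        out += [f0 + k * f1, f0 - k * f1]
    return out
```

With n steps per side this yields 2n+1 candidates, matching the list in the text.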
And D2, controlling the camera to acquire a plurality of second inspection images with the second preset magnification as the shooting magnification and each candidate focal length in turn as the shooting focal length.
In addition to the preset navigation position point, the preset robot orientation, the preset camera orientation, the preset focal length and the first preset magnification set forth above, the preconfigured parameters corresponding to each inspection target further include: a second preset magnification for the inspection target. It should be noted that, for the same inspection target, the first preset magnification in the corresponding preconfigured parameters is necessarily smaller than the second preset magnification.
Accordingly, the determined preconfigured parameters include a second preset magnification for the current inspection target. The robot can control the camera to shoot with the second preset magnification as the shooting magnification and with each candidate focal length in turn as the shooting focal length, thereby acquiring a plurality of second inspection images, the number of which is the same as the number of candidate focal lengths. That is, if N candidate focal lengths are determined in step D1, the robot finally obtains N second inspection images of the current inspection target in step D2.
In some embodiments, among the plurality of indices describing image quality, the index that is greatly affected by the photographing focal length is sharpness; based on this, the robot may determine the target inspection image mainly with sharpness as a consideration by:
and E1, calculating the image definition of each second inspection image.
The robot can calculate the image definition of each second inspection image using sharpness algorithms such as the Tenengrad gradient method, the Laplacian gradient method or the variance method, which is not limited in the embodiment of the present application.
And E2, determining the second inspection image with the highest image definition as a target inspection image.
The robot traverses the image sharpness values of the second inspection images to find the second inspection image with the highest sharpness, which is then determined as the target inspection image. By way of example only, this search may be performed by sorting all of the obtained second inspection images in descending order of image sharpness; the first-ranked image is the second inspection image with the highest sharpness.
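The selection in step E2 reduces to a single maximum search; the function and parameter names below are illustrative, not from the application.

```python
# Sketch of step E2: pick the second inspection image whose sharpness score
# is highest. `score` is any sharpness metric (e.g., variance of Laplacian).
def select_target_inspection_image(images, score):
    """Return the image with the highest sharpness score (ties: first wins)."""
    return max(images, key=score)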
In some embodiments, considering that the current inspection target may not occupy the whole image in each second inspection image, the robot may specifically calculate the image sharpness of each second inspection image by:
Step F1: determine the second preset image corresponding to the current inspection target and the position parameter of the current inspection target in the second preset image.
The robot stores in advance, in its storage space, the second preset image corresponding to each inspection target. It can be understood that, for the same inspection target, the first preset image and the second preset image differ only in shooting magnification. That is, where the preset parameters corresponding to the current inspection target comprise a preset navigation position point, a preset robot orientation, a preset camera orientation, a preset focal length, a first preset magnification and a second preset magnification, the first preset image is a reference image pre-photographed by the robot for the current inspection target based on the preset navigation position point, preset robot orientation, preset camera orientation, preset focal length and first preset magnification, while the second preset image is a reference image pre-photographed based on the same parameters except that the second preset magnification is used instead of the first.
In addition, the robot stores in advance, in its storage space, the position parameter of each inspection target in its corresponding second preset image. The position parameter includes: the coordinates of a designated vertex (for example, the top-left vertex) of the rectangular frame corresponding to the inspection target in the second preset image, and the size of that rectangular frame.
The robot can therefore retrieve, from its storage space, the second preset image corresponding to the determined current inspection target and the position parameter of the current inspection target in that image.
Step F2: for each second inspection image, calculate a second transformation matrix from the second inspection image to the second preset image.
Similar to the first inspection image and the first preset image described above, for any second inspection image, the second inspection image and the second preset image still share a large amount of identical image information. The robot can therefore calculate a transformation matrix from the second inspection image to the second preset image based on this shared image information. For ease of distinction, this embodiment denotes this transformation matrix as the second transformation matrix.
Specifically, the robot may again compute the second transformation matrix from image key points. The process may be as follows: extract the key points of the second preset image and of the second inspection image respectively, then match the two sets of key points to obtain a second matching result. By way of example only, key-point extraction and matching may use the SIFT algorithm or the SuperPoint algorithm; the embodiment of the present application is not limited in this respect. Because the second matching result describes the correspondence between the key points of the second preset image and those of the second inspection image, the robot can calculate the transformation matrix from the second inspection image to the second preset image from the second matching result.
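Once matched key-point pairs are available, the transformation matrix can be estimated from them. The sketch below uses the direct linear transform (DLT) in plain NumPy as one possible estimator; it assumes a projective (homography) model and clean correspondences. A real pipeline would typically use a robust estimator (e.g., RANSAC-based homography fitting) to tolerate mismatched key points, which is not shown here.

```python
# DLT sketch: estimate a 3x3 matrix H such that H @ [x, y, 1] ~ [u, v, 1]
# from >= 4 matched point pairs (src -> dst), as produced by key-point matching.
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the homography mapping src_pts onto dst_pts (>= 4 pairs)."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each correspondence contributes two linear equations in the 9
        # entries of H (written so that A @ h = 0).
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise so the bottom-right entry is 1
```

With exact correspondences the recovered matrix equals the true transformation up to numerical precision.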
Step F3: determine a target area in the second inspection image according to the second transformation matrix and the position parameter, the target area containing the current inspection target.
With the second transformation matrix obtained, the robot can calculate the area in the second inspection image that corresponds to the rectangular frame of the current inspection target in the second preset image; that area is the target area. The robot may treat this area as the area that the rectangular frame of the current inspection target would select in the second inspection image.
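Mapping the stored rectangle into the second inspection image amounts to projecting its four corners and taking their bounding box. In the sketch below, `H` denotes the preset-to-inspection mapping; since the second transformation matrix described above runs from the inspection image to the preset image, its inverse would be passed here. Function and parameter names are illustrative.

```python
# Project a rectangle (top-left corner + (width, height)) through a 3x3
# homography H and return the axis-aligned bounding box of the result.
import numpy as np

def project_rect(H, top_left, size):
    x, y = top_left
    w, h = size
    # Homogeneous coordinates of the four rectangle corners, one per column.
    corners = np.array([[x,     y,     1.0],
                        [x + w, y,     1.0],
                        [x,     y + h, 1.0],
                        [x + w, y + h, 1.0]], dtype=np.float64).T
    p = H @ corners
    p = p[:2] / p[2]  # perspective divide
    xs, ys = p
    return (float(xs.min()), float(ys.min()), float(xs.max()), float(ys.max()))
```

The returned bounding box is the target area over which the sharpness of step F4 is computed.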
Step F4: calculate the sharpness of the target area and take it as the image sharpness of the second inspection image.
It can be understood that no information about the current inspection target lies outside the target area, so even if blurring appears outside the target area, it does not affect the robot's detection and recognition of the current inspection target within it. Based on this, the robot need not consider the overall sharpness of each second inspection image, but only the sharpness of the area containing the current inspection target (i.e., the target area) in each second inspection image. The sharpness metric used by the robot is as described above and is not repeated here.
In some embodiments, the robot extracts the key points of the first preset image and the second preset image corresponding to each inspection target immediately after start-up, saving processing time during subsequent inspection.
In some embodiments, for any inspection target, the configuration information corresponding to the inspection target (including the preset parameters corresponding to the inspection target, the first preset image, the second preset image, and the position parameters of the inspection target in the second preset image) may be specifically determined by:
Step G1: move the robot to a position point suitable for observing the inspection target and orient the robot toward the inspection target. This position point may be determined as the preset navigation position point corresponding to the inspection target, and this orientation as the preset robot orientation corresponding to the inspection target.
Step G2: manually adjust the orientation, shooting focal length and shooting magnification of the camera until the image captured by the camera contains the inspection target, with the inspection target located at the center of the picture and sufficiently sharp. This orientation may be determined as the preset camera orientation corresponding to the inspection target, this shooting focal length as the preset focal length, this shooting magnification as the second preset magnification, and this image as the second preset image corresponding to the inspection target.
In addition, a member of staff may manually calibrate, in the image captured by the camera at this time (i.e., the second preset image), the rectangular frame corresponding to the inspection target (which may be the minimum bounding rectangle); the position parameter of this rectangular frame (such as its size and the coordinates of its top-left vertex) may be determined as the position parameter of the inspection target in the second preset image.
It should be noted that the image captured by the camera at this point (i.e., the second preset image) essentially contains only the inspection target; it contains neither background information nor any other inspection target.
Step G3: keep the pose of the robot and the camera unchanged, keep the shooting focal length of the camera unchanged, and reduce the shooting magnification of the camera (for example, to 1) to obtain a new image. This image may be determined as the first preset image corresponding to the inspection target, and this shooting magnification as the first preset magnification corresponding to the inspection target.
It should be noted that the image captured by the camera at this point (i.e., the first preset image) contains not only the inspection target but also rich background information, with the inspection target still located at the center of the image.
At this point, the configuration information corresponding to the inspection target has been determined.
As can be seen from the above, in the embodiment of the present application, when performing inspection the robot first shoots at a smaller shooting magnification, so that the first inspection image obtained contains the current inspection target; it then adjusts the pose of the camera according to the first inspection image so that the current inspection target is located at the center of the camera's shooting picture; it then shoots at a larger shooting magnification, so that each of the plurality of second inspection images obtained contains the current inspection target. Because the plurality of second inspection images are shot at different focal lengths, their image quality differs to a certain extent, and the target inspection image can therefore be screened out according to image quality, ensuring that the target inspection image obtained has the best image quality. In this process, shooting first at a smaller magnification and then at a larger magnification reduces the chance that the current inspection target is missed during inspection, improving the success rate of inspection; and screening the second inspection images shot at different focal lengths ensures that the image quality of the final target inspection image meets the inspection requirement, guaranteeing the accuracy of the inspection result.
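The overall flow summarized above can be sketched end-to-end. Every class, method and configuration field below is an illustrative stand-in wired together only to show the control flow; none of these names come from the application itself.

```python
# End-to-end sketch of the inspection flow: navigate, shoot at low
# magnification, centre the target, shoot at high magnification over several
# focal lengths, keep the sharpest image. All names are assumptions.
from types import SimpleNamespace

class RobotStub:
    def navigate_to(self, point, orientation):
        pass  # a real robot would drive to the preset navigation position point

class CameraStub:
    def configure(self, orientation, focal_length):
        pass  # apply preset camera orientation and focal length
    def center_on(self, image, preset_image):
        pass  # pose fine-adjustment based on the first inspection image
    def capture(self, focal_length, magnification):
        # return a fake image record; a real camera returns pixel data
        return SimpleNamespace(focal_length=focal_length,
                               magnification=magnification)

def inspect_target(robot, camera, cfg):
    robot.navigate_to(cfg.nav_point, cfg.robot_orientation)
    camera.configure(cfg.camera_orientation, cfg.focal_length)
    first = camera.capture(cfg.focal_length, cfg.mag_low)    # small magnification
    camera.center_on(first, cfg.first_preset_image)          # centre the target
    seconds = [camera.capture(f, cfg.mag_high)               # large magnification
               for f in cfg.candidate_focal_lengths]
    return max(seconds, key=cfg.sharpness)                   # best-quality image
```

The two-stage magnification and the focal-length sweep correspond directly to the success-rate and image-quality arguments made in the paragraph above.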
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the inspection method provided above, the embodiment of the application also provides an inspection device, which can be integrated in a robot; alternatively, the inspection device may be integrated into other devices that are in communication with the robot and capable of controlling the robot, which is not limited in this embodiment of the present application. Referring to fig. 2, the inspection apparatus 2 in the embodiment of the present application includes:
the control module 201 is used for controlling the robot to go to the inspection position corresponding to the current inspection target;
a first obtaining module 202, configured to obtain a first inspection image collected by a camera of the robot for a current inspection target after the robot reaches an inspection position;
an adjusting module 203, configured to adjust a pose of the camera based on the first inspection image, so that a current inspection target is at a center of a shot image of the camera;
the second obtaining module 204 is configured to obtain a plurality of second inspection images collected by the camera for the current inspection target, where the photographing focal lengths of the plurality of second inspection images are different but the photographing magnification is the same, and the photographing magnification of the plurality of second inspection images is greater than the photographing magnification of the first inspection image;
The determining module 205 is configured to determine, as the target inspection image, a second inspection image with the best image quality from the plurality of second inspection images.
In some embodiments, the first acquisition module 202 includes:
the first determining unit is configured to determine a preconfiguration parameter corresponding to a current inspection target, where the preconfiguration parameter includes: presetting a navigation position point, a preset robot orientation, a preset camera orientation, a preset focal length and a first preset multiplying power;
the first adjusting unit is used for adjusting the pose of the robot, the pose of the camera and the shooting focal length of the camera according to the pre-configuration parameters;
the first acquisition unit is used for controlling the camera to acquire a first inspection image by taking a first preset multiplying power as a shooting multiplying power after adjustment is completed.
In some embodiments, the adjustment module 203 includes:
the second determining unit is used for determining a first preset image corresponding to the current inspection target, wherein the first preset image is a reference image which is shot by the robot in advance on the basis of preset parameters, and the current inspection target is positioned at the center of the first preset image;
the second adjusting unit is used for adjusting the pose of the camera based on the first preset image and the first inspection image.
In some embodiments, the second adjustment unit comprises:
the first calculating subunit is used for calculating a first transformation matrix from a first preset image to a first inspection image;
the second calculating subunit is used for calculating pixel coordinates of a pixel point corresponding to the center point of the first preset image in the first inspection image according to the first transformation matrix;
and the pose adjusting subunit is used for adjusting the pose of the camera according to the pixel coordinates.
In some embodiments, the preconfigured parameters further comprise a second preset magnification, the second preset magnification being larger than the first preset magnification; the second acquisition module 204 includes:
the fine tuning unit is used for fine tuning by taking the preset focal length as a center to obtain a plurality of candidate focal lengths including the preset focal length;
the second acquisition unit is used for controlling the camera to acquire a plurality of second inspection images by taking a second preset multiplying power as a shooting multiplying power and taking each candidate focal length as a shooting focal length.
In some embodiments, the determining module 205 includes:
the computing unit is used for computing the image definition of each second inspection image;
and the third determining unit is used for determining the second inspection image with the highest image definition as the target inspection image.
In some embodiments, the computing unit comprises:
the first determination subunit is used for determining a second preset image corresponding to the current inspection target and a position parameter of the current inspection target in the second preset image, wherein the second preset image is a reference image pre-photographed by the robot for the current inspection target based on the preset parameters other than the first preset magnification;
the third calculation subunit is used for calculating a second transformation matrix from the second inspection image to a second preset image for each second inspection image;
the second determining subunit is used for determining a target area in the second inspection image according to the second transformation matrix and the position parameter, wherein the target area comprises a current inspection target;
and the fourth calculating subunit is used for calculating the definition of the target area and determining the definition as the image definition of the second inspection image.
As can be seen from the above, in the embodiment of the present application, when performing inspection the robot first shoots at a smaller shooting magnification, so that the first inspection image obtained contains the current inspection target; it then adjusts the pose of the camera according to the first inspection image so that the current inspection target is located at the center of the camera's shooting picture; it then shoots at a larger shooting magnification, so that each of the plurality of second inspection images obtained contains the current inspection target. Because the plurality of second inspection images are shot at different focal lengths, their image quality differs to a certain extent, and the target inspection image can therefore be screened out according to image quality, ensuring that the target inspection image obtained has the best image quality. In this process, shooting first at a smaller magnification and then at a larger magnification reduces the chance that the current inspection target is missed during inspection, improving the success rate of inspection; and screening the second inspection images shot at different focal lengths ensures that the image quality of the final target inspection image meets the inspection requirement, guaranteeing the accuracy of the inspection result.
Corresponding to the inspection method provided above, the embodiment of the application also provides electronic equipment. Referring to fig. 3, an electronic device 3 in an embodiment of the present application includes: a memory 301, one or more processors 302 (only one shown in fig. 3) and computer programs stored on the memory 301 and executable on the processors. Wherein: the memory 301 is used for storing software programs and modules, and the processor 302 executes various functional applications and data processing by running the software programs and units stored in the memory 301 to obtain resources corresponding to the preset events. Specifically, the processor 302 implements the following steps by running the above-described computer program stored in the memory 301:
controlling the robot to go to an inspection position corresponding to a current inspection target;
after the robot reaches the inspection position, acquiring a first inspection image acquired by a camera of the robot for a current inspection target;
adjusting the pose of the camera based on the first inspection image so that the current inspection target is positioned at the center of a shooting picture of the camera;
acquiring a plurality of second inspection images acquired by a camera on a current inspection target, wherein the plurality of second inspection images are different in shooting focal length and identical in shooting multiplying power, and the shooting multiplying power of the plurality of second inspection images is larger than that of the first inspection image;
And determining the second inspection image with the best image quality from the plurality of second inspection images as a target inspection image.
Assuming that the foregoing is a first possible embodiment, in a second possible embodiment provided by way of example of the first possible embodiment, acquiring a first inspection image acquired by a camera of the robot for a current inspection target includes:
determining a pre-configuration parameter corresponding to a current inspection target, wherein the pre-configuration parameter comprises: presetting a navigation position point, a preset robot orientation, a preset camera orientation, a preset focal length and a first preset multiplying power;
according to the preset parameters, adjusting the pose of the robot, the pose of the camera and the shooting focal length of the camera;
after the adjustment is finished, the camera is controlled to take the first preset multiplying power as the shooting multiplying power, and a first inspection image is acquired.
In a third possible implementation manner provided by the second possible implementation manner, adjusting the pose of the camera based on the first inspection image includes:
determining a first preset image corresponding to a current inspection target, wherein the first preset image is a reference image of the robot, which is shot in advance for the current inspection target based on preset parameters, and the current inspection target is positioned in the center of the first preset image;
And adjusting the pose of the camera based on the first preset image and the first inspection image.
In a fourth possible implementation manner provided by the third possible implementation manner, adjusting the pose of the camera based on the first preset image and the first inspection image includes:
calculating a first transformation matrix from a first preset image to a first inspection image;
according to the first transformation matrix, calculating pixel coordinates of a pixel point corresponding to a central point of a first preset image in the first inspection image;
and adjusting the pose of the camera according to the pixel coordinates.
In a fifth possible implementation provided by the second possible implementation as a basis, the pre-configuration parameters further include a second preset magnification, the second preset magnification being larger than the first preset magnification; acquiring a plurality of second inspection images acquired by the camera for the current inspection target includes:
fine tuning is carried out by taking the preset focal length as the center to obtain a plurality of candidate focal lengths including the preset focal length;
and controlling the camera to acquire a plurality of second inspection images by taking the second preset multiplying power as the shooting multiplying power and taking each candidate focal length as the shooting focal length.
In a sixth possible implementation manner provided by the fifth possible implementation manner as a basis, determining, as the target inspection image, the second inspection image with the best image quality from the plurality of second inspection images, includes:
Calculating the image definition of each second inspection image;
and determining the second inspection image with the highest image definition as a target inspection image.
In a seventh possible embodiment provided by the sixth possible embodiment as a basis, calculating the image sharpness of each second inspection image includes:
determining a second preset image corresponding to the current inspection target and a position parameter of the current inspection target in the second preset image, wherein the second preset image is a reference image pre-photographed by the robot for the current inspection target based on the preset parameters other than the first preset magnification;
calculating a second transformation matrix from the second inspection image to a second preset image;
determining a target area in the second inspection image according to the second transformation matrix and the position parameters, wherein the target area comprises a current inspection target;
and calculating the definition of the target area, and determining the definition as the image definition of the second inspection image.
It should be appreciated that in embodiments of the present application, the processor 302 may be a central processing unit (Central Processing Unit, CPU); it may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
As can be seen from the above, in the embodiment of the present application, when performing inspection the robot first shoots at a smaller shooting magnification, so that the first inspection image obtained contains the current inspection target; it then adjusts the pose of the camera according to the first inspection image so that the current inspection target is located at the center of the camera's shooting picture; it then shoots at a larger shooting magnification, so that each of the plurality of second inspection images obtained contains the current inspection target. Because the plurality of second inspection images are shot at different focal lengths, their image quality differs to a certain extent, and the target inspection image can therefore be screened out according to image quality, ensuring that the target inspection image obtained has the best image quality. In this process, shooting first at a smaller magnification and then at a larger magnification reduces the chance that the current inspection target is missed during inspection, improving the success rate of inspection; and screening the second inspection images shot at different focal lengths ensures that the image quality of the final target inspection image meets the inspection requirement, guaranteeing the accuracy of the inspection result.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not described or detailed in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as exceeding the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the system embodiments described above are merely illustrative, e.g., the division of modules or units described above is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the present application implements all or part of the flow of the methods of the above embodiments by means of a computer program instructing the associated hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, it may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer-readable memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer-readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions the computer-readable storage medium does not include electrical carrier signals and telecommunications signals.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting thereof; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.
Claims (10)
1. A method of inspection comprising:
controlling the robot to go to an inspection position corresponding to a current inspection target;
after the robot reaches the inspection position, acquiring a first inspection image acquired by a camera of the robot on the current inspection target;
adjusting the pose of the camera based on the first inspection image so that the current inspection target is positioned at the center of a shooting picture of the camera;
acquiring a plurality of second inspection images acquired by the camera on the current inspection target, wherein the plurality of second inspection images are different in shooting focal length and identical in shooting multiplying power, and the shooting multiplying power of the plurality of second inspection images is larger than that of the first inspection image;
And determining a second inspection image with the best image quality from the plurality of second inspection images as a target inspection image.
2. The inspection method of claim 1, wherein acquiring the first inspection image of the current inspection target captured by the camera of the robot comprises:
determining preconfigured parameters corresponding to the current inspection target, the preconfigured parameters comprising: a preset navigation position point, a preset robot orientation, a preset camera orientation, a preset focal length, and a first preset magnification;
adjusting the pose of the robot, the pose of the camera, and the shooting focal length of the camera according to the preconfigured parameters; and
after the adjustment is finished, controlling the camera to acquire the first inspection image with the first preset magnification as the shooting magnification.
3. The inspection method of claim 2, wherein adjusting the pose of the camera based on the first inspection image comprises:
determining a first preset image corresponding to the current inspection target, wherein the first preset image is a reference image of the current inspection target shot in advance by the robot based on the preconfigured parameters, and the current inspection target is located at the center of the first preset image; and
adjusting the pose of the camera based on the first preset image and the first inspection image.
4. The inspection method of claim 3, wherein adjusting the pose of the camera based on the first preset image and the first inspection image comprises:
calculating a first transformation matrix from the first preset image to the first inspection image;
calculating, according to the first transformation matrix, the pixel coordinates in the first inspection image of the pixel point corresponding to the center point of the first preset image; and
adjusting the pose of the camera according to the pixel coordinates.
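The transformation-matrix steps of claim 4 can be illustrated with a short sketch (not part of the claims; in practice the 3×3 homography would be estimated from matched features between the two images, a step omitted here, and `pan_tilt_offset` is a hypothetical name for the control step):

```python
import numpy as np

def map_preset_center(h: np.ndarray, center: tuple[float, float]) -> tuple[float, float]:
    """Project the preset image's center point into the inspection image
    using a 3x3 homography (the 'first transformation matrix')."""
    x, y = center
    p = h @ np.array([x, y, 1.0])
    return (p[0] / p[2], p[1] / p[2])

def pan_tilt_offset(mapped: tuple[float, float], frame_size: tuple[int, int]) -> tuple[float, float]:
    """Pixel offset of the mapped point from the frame center; the camera
    pose would be adjusted to drive this offset toward zero."""
    w, h = frame_size
    return (mapped[0] - w / 2, mapped[1] - h / 2)

# A pure translation of (+15, -10) pixels between the two images.
h_mat = np.array([[1.0, 0.0, 15.0],
                  [0.0, 1.0, -10.0],
                  [0.0, 0.0, 1.0]])
center = map_preset_center(h_mat, (320.0, 240.0))  # center of a 640x480 preset image
print(center)                                # (335.0, 230.0)
print(pan_tilt_offset(center, (640, 480)))   # (15.0, -10.0)
```

Driving the resulting pixel offset to zero recenters the inspection target in the camera's shooting picture, which is the stated goal of the pose adjustment.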
5. The inspection method of claim 2, wherein the preconfigured parameters further comprise a second preset magnification, the second preset magnification being greater than the first preset magnification; and acquiring the plurality of second inspection images of the current inspection target captured by the camera comprises:
fine-tuning around the preset focal length to obtain a plurality of candidate focal lengths including the preset focal length; and
controlling the camera to acquire the plurality of second inspection images with the second preset magnification as the shooting magnification and each candidate focal length in turn as the shooting focal length.
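The fine-tuning step of claim 5 amounts to sampling a small bracket of focal lengths around the preset value. A minimal sketch (the step size and the number of candidates are hypothetical; the claims do not specify them):

```python
def candidate_focal_lengths(preset: float, step: float, per_side: int) -> list[float]:
    """Bracket around the preset focal length: the preset value plus
    `per_side` equally spaced steps on each side of it."""
    return [preset + i * step for i in range(-per_side, per_side + 1)]

# e.g. five candidate focal lengths centered on a preset value of 50.0
print(candidate_focal_lengths(50.0, 2.0, 2))  # [46.0, 48.0, 50.0, 52.0, 54.0]
```

One second inspection image would then be captured at each candidate focal length, all at the second preset magnification.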
6. The inspection method of claim 5, wherein determining, from the plurality of second inspection images, the second inspection image with the best image quality as the target inspection image comprises:
calculating the image sharpness of each second inspection image; and
determining the second inspection image with the highest image sharpness as the target inspection image.
7. The inspection method of claim 6, wherein calculating the image sharpness of each second inspection image comprises:
determining a second preset image corresponding to the current inspection target and a position parameter of the current inspection target in the second preset image, wherein the second preset image is a reference image of the current inspection target shot in advance by the robot based on the preconfigured parameters other than the first preset magnification;
for each second inspection image, calculating a second transformation matrix from the second inspection image to the second preset image;
determining a target area in the second inspection image according to the second transformation matrix and the position parameter, the target area containing the current inspection target; and
calculating the sharpness of the target area and determining it as the image sharpness of the second inspection image.
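Claims 6 and 7 leave the sharpness metric itself unspecified; a common choice for this kind of focus scoring is the variance of the Laplacian over the target area. A NumPy-only sketch under that assumption (the crop rectangle stands in for the "target area" of claim 7, which in the claimed method would come from the second transformation matrix and position parameter):

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of a 3x3 Laplacian response: a standard focus/sharpness score.
    (The claims do not name a specific metric; this one is an assumption.)"""
    g = gray.astype(np.float64)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def sharpest_image(images: list[np.ndarray], region: tuple[int, int, int, int]) -> int:
    """Return the index of the image whose target area scores highest.
    `region` is (top, bottom, left, right) in pixel coordinates."""
    t, b, l, r = region
    scores = [laplacian_variance(img[t:b, l:r]) for img in images]
    return int(np.argmax(scores))

# A high-contrast checkerboard (in focus) scores above a flat (defocused) patch.
checker = np.indices((32, 32)).sum(axis=0) % 2 * 255.0
flat = np.full((32, 32), 128.0)
print(sharpest_image([flat, checker], (0, 32, 0, 32)))  # 1
```

Scoring only the target area, rather than the whole frame, keeps the selection from being dominated by background regions that happen to be in focus.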
8. An inspection device, comprising:
a control module, configured to control a robot to travel to an inspection position corresponding to a current inspection target;
a first acquisition module, configured to acquire, after the robot reaches the inspection position, a first inspection image of the current inspection target captured by a camera of the robot;
an adjusting module, configured to adjust the pose of the camera based on the first inspection image so that the current inspection target is located at the center of the camera's shooting picture;
a second acquisition module, configured to acquire a plurality of second inspection images of the current inspection target captured by the camera, wherein the plurality of second inspection images differ in shooting focal length but share the same shooting magnification, and the shooting magnification of the plurality of second inspection images is greater than that of the first inspection image; and
a determining module, configured to determine, from the plurality of second inspection images, the second inspection image with the best image quality as a target inspection image.
9. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310344462.4A CN116416701A (en) | 2023-03-28 | 2023-03-28 | Inspection method, inspection device, electronic equipment and storage medium |
PCT/CN2023/140777 WO2024198558A1 (en) | 2023-03-28 | 2023-12-21 | Inspection method, inspection apparatus, electronic device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310344462.4A CN116416701A (en) | 2023-03-28 | 2023-03-28 | Inspection method, inspection device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116416701A true CN116416701A (en) | 2023-07-11 |
Family
ID=87049158
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310344462.4A Pending CN116416701A (en) | 2023-03-28 | 2023-03-28 | Inspection method, inspection device, electronic equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN116416701A (en) |
WO (1) | WO2024198558A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117934722A (en) * | 2024-01-26 | 2024-04-26 | 武汉海德斯路科技有限公司 | Inspection point image acquisition method and device based on three-dimensional model |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119360360A (en) * | 2024-12-24 | 2025-01-24 | 国网瑞嘉(天津)智能机器人有限公司 | Electric meter box data processing method, device and electronic device based on wearable device |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9558555B2 (en) * | 2013-02-22 | 2017-01-31 | Leap Motion, Inc. | Adjusting motion capture based on the distance between tracked objects |
CN111316185A (en) * | 2019-02-26 | 2020-06-19 | 深圳市大疆创新科技有限公司 | Inspection control method of movable platform and movable platform |
CN110850872A (en) * | 2019-10-31 | 2020-02-28 | 深圳市优必选科技股份有限公司 | Robot inspection method and device, computer readable storage medium and robot |
WO2021253247A1 (en) * | 2020-06-16 | 2021-12-23 | 深圳市大疆创新科技有限公司 | Inspection method and apparatus for movable platform, and movable platform and storage medium |
CN113727022B (en) * | 2021-08-30 | 2023-06-20 | 杭州申昊科技股份有限公司 | Inspection image acquisition method and device, electronic equipment, storage medium |
CN114594770B (en) * | 2022-03-04 | 2024-04-26 | 深圳市千乘机器人有限公司 | Inspection method for inspection robot without stopping |
CN115097836A (en) * | 2022-06-30 | 2022-09-23 | 国电南瑞科技股份有限公司 | Transmission line inspection method, system and storage medium based on image registration |
Also Published As
Publication number | Publication date |
---|---|
WO2024198558A1 (en) | 2024-10-03 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |