CN113569843A - Corner point detection method and device, computer equipment and storage medium - Google Patents
Corner point detection method and device, computer equipment and storage medium
- Publication number
- CN113569843A (application CN202110684113.8A)
- Authority
- CN
- China
- Prior art keywords
- corner
- image frame
- target image
- calibration
- detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Studio Devices (AREA)
Abstract
The application relates to a corner point detection method, a corner point detection device, computer equipment and a storage medium. The method comprises the following steps: determining a calibration board area in a target image frame to be detected based on a corner point set and taking the calibration board area as a detection area, wherein the corner point set is determined based on corner points in the calibration board, the target image frame is an image frame in a video stream, and the video stream is obtained by shooting the calibration board with a shooting device in a motion state; and detecting corner points in the target image frame based on the detection area. When corner detection is performed, the whole area of the image frame is not detected, that is, global corner detection is not performed; only the calibration board area within the whole area is detected, so the detection range is reduced and the computing resources consumed can be reduced. Meanwhile, since the detection range is reduced, the overall detection workload is correspondingly reduced, so the detection efficiency can be improved, which facilitates batch calibration operations and helps improve the productivity of a calibration production line.
Description
Technical Field
The present disclosure relates to the field of calibration technologies for photographing devices, and in particular, to a method and an apparatus for detecting a corner point, a computer device, and a storage medium.
Background
In the camera production process, the imaging sensor of the camera is inevitably imperfect, so that the picture seen by the camera differs from what the human eye sees, i.e. the image is distorted. In addition, the camera may be mounted such that the plane of the lens is not parallel to the plane being imaged but at an angle to it, so that the position of an object as seen by the camera does not match its actual position. It is therefore necessary to calibrate the camera before use, especially in machine vision applications and image measurement scenarios.
In the camera calibration process, a calibration board with a specific corner pattern is usually shot by the camera, the image coordinates of each corner point are extracted from each captured image frame, and the image coordinates of the corner points are matched with the three-dimensional space coordinates of the corresponding corner points on the calibration board, providing a data basis for the subsequent calibration process. Ordinary camera calibration only requires taking dozens of photos, but for camera-gyroscope calibration a calibration video is generally recorded, and in this case the calibration corner points need to be detected in every image frame of the video stream. In the related art, the corner coordinates are mainly obtained by global detection, that is, by performing global detection on each image frame of the video stream. Since the calibration board may occupy only part of the image, detecting the entire image also detects the corner-free region outside the calibration board, which consumes excessive computing resources.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a corner detection method, an apparatus, a computer device and a storage medium capable of saving computing resources.
A method of corner detection, the method comprising:
determining a calibration board area in a target image frame to be detected based on a corner point set and taking the calibration board area as a detection area, wherein the corner point set is determined based on corner points in the calibration board, the target image frame is an image frame in a video stream, and the video stream is obtained by shooting the calibration board with a shooting device in a motion state;
and detecting and obtaining corner points in the target image frame based on the detection area.
In one embodiment, determining a calibration plate region in a target image frame to be detected based on a set of corner points includes:
if the corner set meets a first preset condition, acquiring position information of each corner in the corner set in a target image frame;
and determining a calibration board area based on the position information of each corner point in the set of corner points in the target image frame.
In one embodiment, the set of corner points is determined based on the corner points that can still be tracked when the target image frame is reached after tracking the corner points detected in the preceding image frame; the preceding image frame is the previous image frame of the target image frame in the video stream.
In one embodiment, obtaining the position information of each corner in the set of corners in the target image frame includes:
acquiring attitude information, wherein the attitude information is used for representing the position relation of the shooting equipment relative to the calibration board when shooting a target image frame;
and acquiring the position information of each corner point in the corner point set in the target image frame according to the attitude information and the image coordinate system established based on the target image frame.
In one embodiment, the set of corner points is determined by corner points corresponding to vertices enabling determination of the contour of the calibration plate, and/or the pose information is determined based on a coordinate system transformation between a coordinate system in which the calibration plate is located and a coordinate system in which the photographing apparatus is located when the image frame of the object is photographed.
In one embodiment, the first preset condition includes that the total number of corner points in the set of corner points is greater than a preset threshold.
In one embodiment, the calibration board region satisfies a second preset condition, where the second preset condition includes that each corner in the set of corners is located in the calibration board region or that a corner feature of each corner in the set of corners can be retained in the calibration board region.
An apparatus for corner detection, the apparatus comprising:
a determining module, configured to determine a calibration board area in a target image frame to be detected based on a corner point set and take the calibration board area as a detection area, wherein the corner point set is determined based on corner points in the calibration board, the target image frame is an image frame in a video stream, and the video stream is obtained by shooting the calibration board with a shooting device in a motion state;
and the detection module is used for detecting and obtaining the corner points in the target image frame based on the detection area.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program: determining a calibration board area in a target image frame to be detected based on a corner point set and taking the calibration board area as a detection area, wherein the corner point set is determined based on corner points in the calibration board, the target image frame is an image frame in a video stream, and the video stream is obtained by shooting the calibration board with a shooting device in a motion state; and detecting and obtaining corner points in the target image frame based on the detection area.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the following steps: determining a calibration board area in a target image frame to be detected based on a corner point set and taking the calibration board area as a detection area, wherein the corner point set is determined based on corner points in the calibration board, the target image frame is an image frame in a video stream, and the video stream is obtained by shooting the calibration board with a shooting device in a motion state; and detecting and obtaining corner points in the target image frame based on the detection area.
The corner point detection method, the corner point detection device, the computer equipment and the storage medium determine a calibration board area in a target image frame to be detected based on the corner point set and use the calibration board area as a detection area; corner points in the target image frame are then detected based on the detection area. When corner detection is performed, the whole area of the image frame is not detected, that is, global corner detection is not performed; only the calibration board area within the whole area is detected, so the detection range is reduced and the computing resources consumed can be reduced. Meanwhile, since the detection range is reduced, the overall detection workload is correspondingly reduced, so the detection efficiency can be improved, which facilitates batch calibration operations and helps improve the productivity of a calibration production line.
Drawings
FIG. 1 is a diagram illustrating a camera calibration scenario in one embodiment;
FIG. 2 is a schematic flow chart of a corner detection method in one embodiment;
FIG. 3 is a diagram illustrating image frames taken with the camera in different poses in one embodiment;
FIG. 4 is a schematic flow chart of a corner detection method in another embodiment;
FIG. 5 is a schematic diagram of a calibration system coordinate system in one embodiment;
FIG. 6 is a schematic diagram of a coordinate system of a calibration system in another embodiment;
FIG. 7 is a schematic flow chart of a corner detection method in one embodiment;
FIG. 8 is a schematic flow chart of a corner detection method in another embodiment;
FIG. 9 is a block diagram of an embodiment of a corner detection apparatus;
FIG. 10 is a diagram showing an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms unless otherwise specified. These terms are only used to distinguish one element from another. For example, the third preset threshold and the fourth preset threshold may be the same or different without departing from the scope of the present application.
In the camera production process, the imaging sensor of the camera is inevitably imperfect, so that the picture seen by the camera differs from what the human eye sees, i.e. the image is distorted. In addition, the camera may be mounted such that the plane of the lens is not parallel to the plane being imaged but at an angle to it, so that the position of an object as seen by the camera does not match its actual position. It is therefore necessary to calibrate the camera before use, especially in machine vision applications and image measurement scenarios.
In the camera calibration process, a calibration board with a specific corner pattern is usually shot by the camera, the image coordinates of each corner point are extracted from each captured image frame, and the image coordinates of the corner points are matched with the three-dimensional space coordinates of the corresponding corner points on the calibration board, providing a data basis for the subsequent calibration process. It follows that the coordinates of the calibration-board corner points must be obtained in every image frame. In the related art, the corner coordinates are mainly obtained by global detection, that is, by detecting over the whole image. Since the calibration board may occupy only part of the image, detecting the entire image also detects the corner-free region outside the calibration board, which consumes excessive computing resources.
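For concreteness, a minimal Python sketch of how such a data basis is typically assembled, assuming an OpenCV-style chessboard calibration board; the board dimensions, square size and the list of grayscale frames are illustrative assumptions, not taken from this disclosure:

```python
import cv2
import numpy as np

# Illustrative board layout (not specified in this disclosure): 9x6 inner
# corners, 25 mm squares.
BOARD_COLS, BOARD_ROWS, SQUARE_MM = 9, 6, 25.0

# 3D coordinates of the board corners in the calibration-board coordinate
# system (Z = 0 because the board is planar).
board_points = np.zeros((BOARD_ROWS * BOARD_COLS, 3), np.float32)
board_points[:, :2] = np.mgrid[0:BOARD_COLS, 0:BOARD_ROWS].T.reshape(-1, 2) * SQUARE_MM

object_points, image_points = [], []
for gray in frames:  # `frames` is assumed to be a list of grayscale image frames
    found, corners = cv2.findChessboardCorners(gray, (BOARD_COLS, BOARD_ROWS))
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    object_points.append(board_points)  # 3D corners on the board
    image_points.append(corners)        # matched 2D corners in the image

# The matched 2D/3D pairs are the data basis for the calibration itself.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, frames[0].shape[::-1], None, None)
```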
In view of the above problems in the related art, embodiments of the present invention provide a corner point detection method, which may be applied to a device with a shooting function, such as a camera, a smart phone, a personal computer with a camera, a notebook computer, a tablet computer or a portable wearable device. It should be noted that terms such as "a plurality" in the embodiments of the present application refer to "at least two".
Before describing the embodiments of the present application, the main application scenario of the present application will be described. The corner detection method is mainly used for calibrating shooting equipment. Specifically, as shown in fig. 1, the calibration board is placed in front of the camera, i.e. the shooting device, and the camera is mounted on a mechanical arm. The camera captures images and transmits them to the server through a data transmission channel between the camera and the server; the server performs the calibration processing and returns the calibration result to the camera, so that the camera can adjust itself according to the calibration result.
In connection with the above, in one embodiment, referring to fig. 2, a corner point detection method is provided. The method is described with a server as the execution subject by way of example. It is understood that the method may also be applied to a shooting device, with the shooting device as the execution subject; or, according to actual needs and feasibility, the method may be applied to both a shooting device and a server, that is, some steps of the method may be executed by the shooting device and the other steps by the server, which is not specifically limited in this embodiment of the present invention. For example, step 201 in the method flow corresponding to fig. 2 may be executed by the shooting device, which then sends the data corresponding to the calibration board area to the server, and the server executes step 202. In combination with the above embodiments, in one embodiment, the method includes the following steps:
201. determining a calibration board area in a target image frame to be detected based on a corner point set and taking the calibration board area as a detection area, wherein the corner point set is determined based on corner points in the calibration board, the target image frame is an image frame in a video stream, and the video stream is obtained by shooting the calibration board with a shooting device in a motion state;
202. and detecting and obtaining corner points in the target image frame based on the detection area.
In step 201, after the shooting device shoots the calibration board, the video stream is obtained, and the calibration board appears in each image frame of the video stream. The corner points on the calibration board are captured in each image frame, so the corner points captured in an image frame can form a corner point set; that is, the corner point set is formed by corner points in the image frame, and these include the corner points of the calibration board. When the calibration board region in the target image frame to be detected is determined based on the corner point set, since the corner point set is formed by corner points in the target image frame, the corner points in the set can be connected, and the calibration board region can be framed based on the connected outline.
As shown in fig. 3, the left half of fig. 3 is an image frame captured with the camera facing the calibration board, the right half of fig. 3 is an image frame captured after the posture of the camera has changed with respect to the calibration board, and the dotted box in the right half of fig. 3 shows the calibration board region delimited by a rectangular box. In both halves of fig. 3 it is obvious that the image frame captures not only the calibration board but also the environment around it, and in the related art these surrounding parts are also subjected to corner detection.
As can be seen from the above steps 201 and 202, this implementation can be applied to corner detection based on a video stream. Specifically, a video is composed of successive image frames, and corner detection is required for each image frame. For a given image frame, if a detection area smaller than the image frame is determined before corner detection is performed, corner points are detected only within that detection area, so the consumption of computing resources can be reduced.
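A minimal sketch of restricting detection to such an area, assuming a chessboard-style pattern and OpenCV; the function name and the (x, y, w, h) representation of the detection area are illustrative assumptions:

```python
import cv2
import numpy as np

def detect_corners_in_area(gray, detection_area, pattern=(9, 6)):
    """Detect calibration-board corners only inside `detection_area`.

    `gray` is a grayscale image frame, `detection_area` is (x, y, w, h) in
    full-image pixel coordinates, `pattern` is the assumed inner-corner grid.
    """
    x, y, w, h = detection_area
    roi = gray[y:y + h, x:x + w]                     # crop: only this region is searched
    found, corners = cv2.findChessboardCorners(roi, pattern)
    if not found:
        return None
    corners = cv2.cornerSubPix(
        roi, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    corners += np.array([x, y], np.float32)          # shift back to full-image coordinates
    return corners.reshape(-1, 2)
```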
In an actual implementation, for video-based corner detection, with reference to fig. 1, the calibration process may be as follows: a camera is installed on the mechanical arm to record a video of the calibration board, the mechanical arm assumes different postures during recording, and the recorded video is transmitted to the server, which detects the corner points of the calibration board using the following algorithm, so that batch calibration production can be performed. If the calibration workload is large, for example the video is long, the video stream in which corner points need to be detected can be divided into n segments, the same operation is performed on each of the n segments, and the corner points detected in each segment are then organized in time order for the subsequent calibration work. The corner detection can be performed by multiple threads in the server, for example in parallel by n threads.
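A sketch of this segment-level parallelism; the per-segment detection helper is assumed to exist (for example one of the flows of FIG. 7 or FIG. 8), and a thread pool is only one possible arrangement (it helps here mainly when the underlying detection releases the GIL, as OpenCV typically does; a process pool would be an alternative):

```python
from concurrent.futures import ThreadPoolExecutor

def detect_video(frames, detect_segment, n_segments=4):
    """Split `frames` into n contiguous segments, detect corners per segment in
    parallel threads, then merge results back in time order.

    `detect_segment` is any callable mapping a list of frames to a list of
    per-frame corner arrays (e.g. the flow of FIG. 7 or FIG. 8).
    """
    size = (len(frames) + n_segments - 1) // n_segments
    segments = [frames[i:i + size] for i in range(0, len(frames), size)]
    with ThreadPoolExecutor(max_workers=n_segments) as pool:
        per_segment = list(pool.map(detect_segment, segments))   # keeps segment order
    return [c for seg in per_segment for c in seg]               # time-ordered corner lists
```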
The method provided by the embodiment of the invention determines the calibration board area in the target image frame to be detected based on the corner point set and uses the calibration board area as the detection area; corner points in the target image frame are then detected based on the detection area. When corner detection is performed, the whole area of the image frame is not detected, that is, global corner detection is not performed; only the calibration board area within the whole area is detected, so the detection range is reduced and the computing resources consumed can be reduced. Meanwhile, since the detection range is reduced, the overall detection workload is correspondingly reduced, so the detection efficiency can be improved, which facilitates batch calibration operations and helps improve the productivity of a calibration production line.
In combination with the content of the foregoing embodiment, in an embodiment, referring to fig. 4, an embodiment of the present invention further provides a corner point detecting method, including the following steps:
401. if the corner set meets a first preset condition, acquiring position information of each corner in the corner set in a target image frame;
402. determining a calibration plate area based on the position information of each corner point in the set of corner points in the target image frame, and taking the calibration plate area as a detection area;
403. and detecting and obtaining corner points in the target image frame based on the detection area.
In step 401, the reason for setting a first preset condition on the corner point set is that, in the embodiment of the present invention, the calibration board area needs to be determined based on the corner point set, and when the corner point set does not satisfy certain conditions, the calibration board area may not be determinable or may be determined inaccurately. Therefore, a first preset condition is set for the corner point set in advance, so that the calibration board area is determined in step 402 only when the first preset condition is satisfied.
For example, if the corner points in the set are all concentrated in one part of the calibration board, for example squeezed together in the middle of the board, the whole calibration board obviously cannot be determined accurately from such a set. Therefore, the first preset condition may be that the number of corner points whose pairwise distance exceeds a certain threshold reaches a certain count. This condition reflects two considerations: first, the existence of corner points whose pairwise distance exceeds the threshold shows that the corner points in the set are not concentrated in one part of the calibration board; second, requiring that the number of such corner points reaches a certain count ensures that they are not isolated examples but are widely distributed across the set. Based on these two considerations, an accurate calibration board area can be determined. Of course, in an actual implementation, the first preset condition may also be set from other perspectives, which is not specifically limited in this embodiment of the present invention.
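A sketch of one possible form of such a condition check, reflecting the two considerations above; the thresholds are illustrative assumptions:

```python
import numpy as np

def corner_set_ok(corners, min_dist=40.0, min_count=8, min_total=12):
    """Example first preset condition (thresholds are illustrative).

    `corners` is an (N, 2) array of tracked corner coordinates. The set passes
    if it is large enough and enough corner points are far from at least one
    other corner, i.e. the corners are not squeezed into one part of the board.
    """
    corners = np.asarray(corners, np.float32)
    if len(corners) < min_total:
        return False
    diff = corners[:, None, :] - corners[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)              # pairwise distance matrix
    spread = np.sum(dist.max(axis=1) > min_dist)      # corners with a distant partner
    return spread >= min_count
```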
In step 401, the position information may be the coordinates of the corner points in the target image frame. For example, a coordinate system may be established at the lower-left corner of the image frame; since corner points are represented by pixels in the image frame, their pixel coordinates can be determined, and thus the corner coordinates can also be determined. It should be noted that a corner point usually covers several pixels in an image frame, so the position information of a corner point may in practice be represented by the coordinates of the pixels it covers.
It should be noted that, in contrast to step 401, if the corner point set does not satisfy the first preset condition, the set is not suitable for determining the calibration board area. In such a case, global corner detection may be performed on the target image frame to be detected.
In the method provided by the embodiment of the invention, whether the corner point set meets the first preset condition is judged, and the calibration board area is determined based on the corner point set only when the condition is met. Because the first preset condition filters out corner point sets from which the calibration board region cannot be determined, or would be determined inaccurately, the success rate of subsequent corner detection can be improved.
It should be understood that although the steps in the flowcharts of fig. 2 and fig. 4 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 2 and fig. 4 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
With reference to the above description, in an embodiment, the corner point set is determined based on the corner points that can still be tracked when the target image frame is reached after tracking the corner points detected in the preceding image frame; the preceding image frame is the previous image frame of the target image frame in the video stream.
A target tracking algorithm can be used to track the detected corner points, for example a correlation filtering algorithm, a scale-adaptive algorithm or an optical flow tracking algorithm. For ease of understanding, the optical flow tracking algorithm is taken as an example: optical flow is the instantaneous velocity of the pixel motion of a spatially moving object on the observation imaging plane. Optical flow tracking calculates the motion of an object between adjacent frames by using the temporal change of pixels in an image sequence and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame. Because optical flow tracking requires the two frames to be relatively close, the preceding image frame is the previous image frame of the target image frame in the embodiment of the present invention. After the position information of each corner point of the corner point set in the preceding image frame is determined, the position information of each corner point in the target image frame can be obtained by optical flow tracking.
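A minimal sketch of this tracking step using OpenCV's pyramidal Lucas-Kanade optical flow; the window size, pyramid depth and function name are illustrative assumptions:

```python
import cv2
import numpy as np

def track_corners(prev_gray, cur_gray, prev_corners):
    """Track corner points from the preceding frame into the current frame and
    return only the points that were tracked successfully."""
    prev_corners = np.asarray(prev_corners, np.float32).reshape(-1, 2)
    prev_pts = prev_corners.reshape(-1, 1, 2)
    cur_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.reshape(-1) == 1                      # 1 means the point was found
    return cur_pts.reshape(-1, 2)[ok], prev_corners[ok]
```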
After the position information of each corner point of the corner point set in the target image frame is obtained, that is, once the positions of these corner points in the target image frame are known, the calibration plate area can be determined from them. The embodiment of the present invention does not specifically limit the manner of determining the calibration board region from this position information, which includes but is not limited to: determining, based on the position information, a bounding box in the target image frame that encloses each corner point in the set; and determining the calibration board area based on the bounding box.
It should be noted that since the calibration board is generally rectangular, the bounding box in the above process may be rectangular. In practice the box may have other shapes, such as an irregular outline; it only needs to enclose every corner point in the set and be smaller than the image frame itself. Taking a rectangular box as an example, it may further be the minimum box that encloses every corner point in the set, so as to reduce the area to be examined in the subsequent corner detection, thereby reducing the computing resources consumed and improving the detection efficiency. Note also that, whether or not the minimum box is used, the box may be expanded according to some rule to ensure it encloses every corner point in the set, for example by uniformly expanding it outward by two pixels, which is not specifically limited in the embodiment of the present invention.
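A sketch of computing such a minimum bounding box with a uniform outward expansion; the two-pixel margin follows the example above, while clipping to the frame boundary is an added assumption:

```python
import numpy as np

def calibration_board_area(corners, frame_shape, margin=2):
    """Minimum axis-aligned box enclosing the tracked corners, expanded
    outward by `margin` pixels on every side and clipped to the frame.
    Returns (x, y, w, h) in full-image pixel coordinates."""
    corners = np.asarray(corners, np.float32).reshape(-1, 2)
    h_img, w_img = frame_shape[:2]
    x0 = max(int(np.floor(corners[:, 0].min())) - margin, 0)
    y0 = max(int(np.floor(corners[:, 1].min())) - margin, 0)
    x1 = min(int(np.ceil(corners[:, 0].max())) + margin, w_img - 1)
    y1 = min(int(np.ceil(corners[:, 1].max())) + margin, h_img - 1)
    return x0, y0, x1 - x0 + 1, y1 - y0 + 1
```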
According to the method provided by the embodiment of the invention, the position information of each corner point of the corner point set in the target image frame can be determined based on the tracking algorithm, the calibration plate area, that is, the detection area, can be determined based on this position information, and the corner points in the target image frame can be detected based on the detection area. In the subsequent corner detection, the whole area of the image frame is not detected, that is, global corner detection is not performed; only the calibration board area within the whole area is detected, so the detection range is reduced and the computing resources consumed can be reduced. Meanwhile, since the detection range is reduced, the overall detection workload is correspondingly reduced, so the detection efficiency can be improved, which facilitates batch calibration operations and helps improve the productivity of a calibration production line.
The above process realizes corner detection mainly based on tracking: the corner points detected in the preceding image frame are tracked to predict their position in the target image frame, an area smaller than the image frame is then delimited based on these positions, and the subsequent corner detection is carried out only within that area.
As can be seen from fig. 1, the calibration process requires the mechanical arm to drive the camera to move while the camera shoots the calibration board, so that calibration can be performed based on the captured video stream. In addition, in FIG. 1 the relative position between the coordinate system o_w of the robot base and the coordinate system o_b of the calibration plate is constant. The mechanical arm drives the camera to move, that is, the camera moves along with the mechanical arm. Referring to fig. 5 and 6, the relative position between the coordinate system o_h of the mechanical arm and the coordinate system o_c of the camera is constant, but both o_c and o_h change relative to o_w and o_b. In the arm state shown in fig. 5, the image captured by the camera is the left half of fig. 3; in the arm state shown in fig. 6, the image captured by the camera is the right half of fig. 3.
Since the mechanical arm is controlled by the system, its motion is known. From the above description, the relative position between the robot base coordinate system o_w and the calibration plate coordinate system o_b is constant, and the relative position between the mechanical arm coordinate system o_h and the camera coordinate system o_c is also constant. Therefore, once the relative transformation T_w_b between o_w and o_b and the relative transformation T_h_c between o_h and o_c are known, and the relative transformation T_w_h between o_w and o_h is further obtained, the relative transformation T_c_b between o_c and o_b can be determined. A relative transformation here means a transformation between coordinate systems. Once T_c_b at the moment the target image frame is captured has been determined, the position information of each corner point in the corner point set in the target image frame can be calculated based on T_c_b.
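Assuming 4x4 homogeneous transformation matrices with the convention that T_x_y maps coordinates expressed in o_y into o_x, this chain can be sketched as follows (the function name is illustrative); it expresses the same relation as formula (4) below:

```python
import numpy as np

def camera_to_board(T_w_b, T_h_c, T_w_h_k):
    """Relative transformation T_c_b between the camera frame o_c and the board
    frame o_b at the moment frame k is captured, from the constant T_w_b and
    T_h_c and the robot-reported T_w_h at that moment."""
    return np.linalg.inv(T_h_c) @ np.linalg.inv(T_w_h_k) @ T_w_b
```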
As can be seen from the above description, the shooting device is in continuous motion while shooting the video stream, so for each image frame in the video stream, the posture of the shooting device relative to the calibration board is different at the moment that frame is shot. Based on this principle, and in combination with the content of the foregoing embodiment, in an embodiment the manner of obtaining the position information of each corner point of the corner point set in the target image frame is not specifically limited, and includes but is not limited to: acquiring attitude information, wherein the attitude information is used for representing the position relation of the shooting device relative to the calibration board when the target image frame is shot; and acquiring the position information of each corner point of the corner point set in the target image frame according to the attitude information and the image coordinate system established based on the target image frame.
In conjunction with the coordinate systems shown in fig. 5 and fig. 6, the position relation, i.e. the posture information, of the photographing apparatus relative to the calibration board when the target image frame is photographed may be a relative transformation between the coordinate systems mentioned above, and may specifically be represented by a transformation matrix between coordinate systems, i.e. any one of the relative transformations T_w_b, T_h_c, T_w_h and T_c_b mentioned above, which is not specifically limited in this embodiment of the present invention.
The transformation relation between these transformation matrices can refer to the following formula (1) and formula (2):

T_w_b = T_w_h^(1) · T_h_c · T_c_b^(1)    (1)

T_w_b = T_w_h^(2) · T_h_c · T_c_b^(2)    (2)

In the above formulas (1) and (2), T_w_b and T_h_c are constant, while T_w_h and T_c_b may vary. In fig. 5 and 6, the mechanical arm moves and carries the camera with it, and the interval between consecutive shots is very short, for example a 1-second video may contain 24 frames; thus the T_w_h and T_c_b obtained at each frame vary continuously from one frame to the next.

In the above formulas, T_w_h^(1), with superscript 1, denotes the transformation matrix between o_w and o_h at the moment the 1st frame is shot, and T_w_h^(2), with superscript 2, denotes the transformation matrix between o_w and o_h at the moment the 2nd frame is shot. It should be noted that the values of T_w_h^(1) and T_w_h^(2) are different, because the mechanical arm has moved when the 2nd frame is shot, so that the coordinate system of the mechanical arm has changed relative to the coordinate system of the robot base.

Similarly, T_c_b^(1), with superscript 1, denotes the transformation matrix between o_c and o_b at the moment the 1st frame is shot, and T_c_b^(2), with superscript 2, denotes the transformation matrix between o_c and o_b at the moment the 2nd frame is shot. The values of T_c_b^(1) and T_c_b^(2) are also different, because the camera moves along with the mechanical arm when the 2nd frame is shot, so that the coordinate system of the camera has changed relative to the coordinate system of the calibration plate.
Note that, because T_w_b and T_h_c are fixed and invariant, they can be solved first. Specifically, for the same corner points on the calibration plate, T_w_b and T_h_c may be solved based on the coordinates of these corner points in the coordinate system of the preceding image frame, their coordinates in the coordinate system of the target image frame, and their coordinates in the coordinate system of the calibration plate. For the target image frame to be detected, if the coordinates of the calibration-plate corner points in the target image frame need to be acquired, they can be obtained in combination with T_w_b and T_h_c. These coordinates specifically refer to the coordinates of the corner points in an image coordinate system established based on the target image frame, with the top-left vertex of the target image frame as the origin.

It should be noted that, unlike the optical flow tracking method, in this way of solving for the corner points in the target image frame, assuming the preceding image frame is the 1st frame and the target image frame to be detected is the k-th frame, the value of k needs to be as large as possible when solving T_w_b and T_h_c, so that the interval between the preceding image frame and the target image frame is as large as possible. The shooting interval between two adjacent frames is very short, so if k is small, the interval between the preceding image frame and the target image frame remains very short even across several frames. As a result, the mechanical arm may not have moved enough for its motion to be detectable: the transformation matrix determined at frame 1 may be essentially unchanged at frame k, the corner positions in the k-th frame, i.e. the target image frame, may be nearly the same as in the preceding image frame, and T_w_b and T_h_c therefore cannot be solved accurately. In summary, the capturing time interval between the preceding image frame and the target image frame needs to be as large as possible to ensure that the motion of the mechanical arm can be detected.
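A sketch of checking that the arm has moved enough between the two frames before attempting to solve T_w_b and T_h_c; the angle and translation thresholds are illustrative assumptions:

```python
import numpy as np

def robot_moved_enough(T_w_h_1, T_w_h_k, min_angle_deg=5.0, min_trans=0.02):
    """Check that the arm pose changed enough between frame 1 and frame k for
    the constant transforms to be solvable (thresholds are illustrative).
    Both arguments are 4x4 homogeneous transforms between o_w and o_h."""
    delta = np.linalg.inv(T_w_h_1) @ T_w_h_k          # relative motion of the arm
    cos_a = (np.trace(delta[:3, :3]) - 1.0) / 2.0     # rotation angle from the trace
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    trans = np.linalg.norm(delta[:3, 3])              # translation magnitude
    return angle >= min_angle_deg or trans >= min_trans
```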
From the above process, if the position information of the corner points in the target image frame needs to be obtained, T_c_b at the moment the target image frame is shot can be obtained first. This T_c_b can serve as the posture information. When acquiring the position information of each corner point of the corner point set in the target image frame according to T_c_b and the image coordinate system established based on the target image frame, the following two formulas can be referred to:

s · p = K · T_c_b^(k) · P    (3)

T_c_b^(k) = (T_h_c)^(-1) · (T_w_h^(k))^(-1) · T_w_b    (4)

In the above formula (3) and formula (4), s is a scaling factor, p is the position information, i.e. the coordinates, of a corner point in the target image frame, K is the camera intrinsic matrix, T_c_b^(k) is the transformation matrix between o_c and o_b when the k-th frame is captured, P is the coordinates of the corner point in the calibration plate coordinate system o_b, and T_w_h^(k) is the transformation matrix between o_w and o_h when the k-th frame is shot. The k-th frame is the target image frame.
According to the method provided by the embodiment of the invention, the position information of each corner point of the corner point set in the target image frame is obtained by acquiring the attitude information and using it together with the image coordinate system established based on the target image frame. Based on this position information, the calibration plate area, that is, the detection area, can be determined, and the corner points in the target image frame are detected based on the detection area. In the subsequent corner detection, the whole area of the image frame is not detected, that is, global corner detection is not performed; only the calibration board area within the whole area is detected, so the detection range is reduced and the computing resources consumed can be reduced. Meanwhile, since the detection range is reduced, the overall detection workload is correspondingly reduced, so the detection efficiency can be improved, which facilitates batch calibration operations and helps improve the productivity of a calibration production line.
As can be seen from the above description of the embodiments, the embodiments of the present invention mainly determine the calibration board area based on the positions in the image frame of the corner points on the calibration board. In addition to the above way of determining the calibration plate area through the minimum bounding box that encloses these corner points, the calibration plate area can also be determined if the corner points themselves have certain characteristics. Based on this principle, in combination with the above-described embodiments, in one embodiment, the set of corner points is determined by corner points corresponding to vertices from which the contour of the calibration plate can be determined, and/or the pose information is determined based on a coordinate system conversion between the coordinate system in which the calibration plate is located and the coordinate system in which the photographing apparatus is located when the target image frame is photographed.
The set of corner points may be four vertices of the calibration board, or because the calibration board is rectangular, the set of corner points may also be two vertices of opposite corners of the calibration board, which is not specifically limited in this embodiment of the present invention. The four vertices or two vertices of a diagonal may each determine the contour of the calibration plate. In addition, the pose information may be determined by performing coordinate system conversion between a coordinate system where the calibration board is located and a coordinate system where the shooting device is located when the target image frame is shot, and may specifically be a conversion matrix in the above embodiment, which is not specifically limited in this embodiment of the present invention.
According to the method provided by the embodiment of the invention, the calibration board area can be determined based on the corner points corresponding to the vertices from which the contour of the calibration board can be determined. In the subsequent corner detection, the whole area of the image frame is not detected, that is, global corner detection is not performed; only the calibration board area within the whole area is detected, so the detection range is reduced and the computing resources consumed can be reduced. Meanwhile, since the detection range is reduced, the overall detection workload is correspondingly reduced, so the detection efficiency can be improved, which facilitates batch calibration operations and helps improve the productivity of a calibration production line.
In combination with the above description, in an embodiment, the first preset condition includes that the total number of corner points in the corner point set is greater than a preset threshold. Only when the total number of corner points in the set is greater than a certain threshold can it be guaranteed that there are enough corner points in the set. In this way, the bounding box enclosing these corner points can restore the outline of the calibration plate itself as closely as possible.
In the method provided by the embodiment of the invention, whether the corner point set meets the first preset condition is judged, and the calibration board area is determined based on the corner point set only when the condition is met. Because the first preset condition filters out corner point sets from which the calibration board region cannot be determined, or would be determined inaccurately, the success rate of subsequent corner detection can be improved.
With reference to the content of the foregoing embodiment, in an embodiment, the calibration board region meets a second preset condition, where the second preset condition includes that each corner in the set of corners is located in the calibration board region or that a corner feature of each corner in the set of corners can be retained in the calibration board region.
The above embodiment means that only if all corner points in the set are located in the calibration board region can it be ensured that they will be detected when the subsequent corner detection is performed within that region. Likewise, if the determined calibration board region retains the corner features of the corner points, it is ensured that these corner points can be detected when corners are subsequently detected in the region. A corner point being located in the calibration plate region may mean that the n pixels of the corner point are located in the region, or that the pixel at the center of the corner point is located in the region.
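A sketch of the corner-center interpretation of this containment check, with the area given as (x, y, w, h); the function name is illustrative:

```python
def corners_inside_area(corners, area):
    """Second-preset-condition style check: every corner center lies inside the
    calibration board area (x, y, w, h)."""
    x, y, w, h = area
    return all(x <= cx < x + w and y <= cy < y + h for cx, cy in corners)
```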
According to the method provided by the embodiment of the invention, the calibration board area can be limited to meet the second preset condition, so that all corner points can be ensured to be detected when the corner points are detected in the calibration board area subsequently, and the subsequent calibration process can be ensured to be carried out smoothly.
For convenience of understanding, two corner point detection manners provided by the embodiments of the present invention will be specifically explained with reference to the contents of the above embodiments. The first method is mainly a method of determining the positions of corner points in an image frame in a video stream based on optical flow tracking, determining a detection area based on the positions of the corner points, and then performing corner point detection based on the detection area. The second mode is mainly a mode of determining the positions of corner points in an image frame in a video stream based on the motion of a mechanical arm, determining a detection area based on the positions of the corner points, and then performing corner point detection based on the detection area. In the first mode, the processing procedure can refer to fig. 7.
In fig. 7, the video stream may be divided into n segments, each handled by one thread of the server. For convenience of explanation, one thread is taken as an example: global detection is performed on the first image frame of the segment to detect its corner points. Optical flow tracking is then applied to the corner points detected in the first frame; after tracking to the second frame, it is determined whether the number of corner points that can still be tracked is greater than a preset threshold. If so, an optimal bounding box is determined based on the tracked corner points, i.e. a suitable area is selected, and local corner detection is performed within that bounding box in the second frame. After corner detection on the second frame, the detected corner points are tracked again, and the above process is repeated until corner detection on the last frame is completed. After corner detection on the image frames of the video stream is completed, the detected corner points can be organized according to the time order of the image frames, so as to complete the extrinsic parameter calibration.
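A sketch of this loop for one video segment, building on the helper sketches given earlier (track_corners, calibration_board_area, detect_corners_in_area); the tracked-corner threshold, the chessboard pattern and the fallback to global detection when tracking fails are illustrative assumptions:

```python
import cv2

def detect_segment_by_tracking(gray_frames, pattern=(9, 6), min_tracked=20):
    """Mode one (FIG. 7) as a sketch: global detection on the first frame of the
    segment, then per-frame optical-flow tracking, bounding-box selection and
    local detection; thresholds are illustrative."""
    found, det = cv2.findChessboardCorners(gray_frames[0], pattern)
    corners = det.reshape(-1, 2) if found else None
    results = [corners]                                  # per-frame corners, in time order
    for prev, cur in zip(gray_frames, gray_frames[1:]):
        tracked = None
        if corners is not None and len(corners) > 0:
            tracked, _ = track_corners(prev, cur, corners)
        if tracked is not None and len(tracked) > min_tracked:
            area = calibration_board_area(tracked, cur.shape)      # optimal bounding box
            corners = detect_corners_in_area(cur, area, pattern)   # local detection only
        else:
            found, det = cv2.findChessboardCorners(cur, pattern)   # fall back to global
            corners = det.reshape(-1, 2) if found else None
        results.append(corners)
    return results
```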
In the second mode, the processing procedure can refer to fig. 8. In FIG. 8, T_w_b and T_h_c may be calculated first; the specific calculation method can refer to the above process. Thereafter, the video stream may similarly be divided into n segments, and each segment may be handed to one thread of the server for processing. For convenience of explanation, one thread is taken as an example: for a certain image frame in the video stream, T_c_b at the moment corresponding to that frame can be obtained based on T_w_b and T_h_c. After T_c_b is obtained, the positions in the frame of the corner points at the four corners of the calibration board can be obtained according to T_c_b and the coordinates of these corner points in the coordinate system of the calibration board. According to the positions of these four corner points in the frame, a detection area can be determined in the frame, and corner points are detected in the detection area.
If the number of corner points detected in the detection area is less than a preset threshold, it indicates that the previously calculated T_w_b and T_h_c may not be accurate enough, and they are solved again. If the number of corner points detected in the detection area is not less than the preset threshold, corner detection can continue on the subsequent image frames in the video stream in the above manner until the corner detection of the last image frame is completed. After the corner detection of the image frames in the video stream is completed, the detected corner points can be organized according to the time sequence of the image frames in the video stream, so as to complete the extrinsic parameter calibration.
It should be noted that the technical solutions described above can be implemented as independent embodiments or combined with one another into combined embodiments. In addition, the different embodiments above are described in a particular order, for example following the data flow, only for convenience of description; this does not limit the execution order between them. Accordingly, in an actual implementation, if multiple embodiments provided by the present invention are to be implemented, the execution order given here is not required, and the execution order between different embodiments may be arranged according to requirements.
In combination with the content of the above embodiments, in one embodiment, as shown in fig. 9, there is provided a corner point detecting apparatus, including: a determining module 901 and a detecting module 902, wherein:
a determining module 901, configured to determine, based on a corner point set, a calibration board region in a target image frame to be detected, and use the calibration board region as a detection region, where the corner point set is determined based on corner points in the calibration board, the target image frame is an image frame in a video stream, and the video stream is obtained by shooting the calibration board by a shooting device in a motion state;
a detecting module 902, configured to detect a corner point in the target image frame based on the detection area.
In one embodiment, the determining module 901 includes:
the acquisition unit is used for acquiring the position information of each corner point in the corner point set in the target image frame when the corner point set meets a first preset condition;
and the determining unit is used for determining the calibration plate area based on the position information of each corner point in the corner point set in the target image frame.
In one embodiment, the set of corner points is determined based on the corner points that can still be tracked when the target image frame is reached after tracking the corner points detected in the preceding image frame; the preceding image frame is the previous image frame of the target image frame in the video stream.
In one embodiment, an acquisition unit acquires attitude information indicating a positional relationship of a photographing apparatus with respect to a calibration plate when photographing a target image frame; and acquiring the position information of each corner point in the corner point set in the target image frame according to the attitude information and the image coordinate system established based on the target image frame.
In one embodiment, the set of corner points is determined by corner points corresponding to vertices enabling determination of the contour of the calibration plate, and/or the pose information is determined based on a coordinate system transformation between a coordinate system in which the calibration plate is located and a coordinate system in which the capturing device is located when capturing the target image frame.
In an embodiment, the first preset condition comprises that a total number of corner points in the set of corner points is larger than a preset threshold.
In one embodiment, the calibration board area satisfies a second preset condition, where the second preset condition includes that each corner in the set of corners is located in the calibration board area or that a corner feature of each corner in the set of corners can be retained in the calibration board area.
For the specific definition of the corner detection means, reference may be made to the above definition of the corner detection method, which is not described herein again. The modules in the corner detection apparatus may be implemented wholly or partially by software, hardware, or a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data relating to corner points. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a corner detection method.
Those skilled in the art will appreciate that the architecture shown in fig. 10 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as a particular computing device may include more or fewer components than those shown, or combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
determining a calibration board area in a target image frame to be detected based on a corner point set and taking the calibration board area as a detection area, wherein the corner point set is determined based on corner points in the calibration board, the target image frame is an image frame in a video stream, and the video stream is obtained by shooting the calibration board with a shooting device in a motion state;
and detecting and obtaining corner points in the target image frame based on the detection area.
In one embodiment, the processor, when executing the computer program, further implements the following steps:
if the corner point set meets a first preset condition, acquiring position information of each corner point in the corner point set in the target image frame;
and determining the calibration board area based on the position information of each corner point in the corner point set in the target image frame.
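A minimal sketch of restricting detection to a board area derived from the corner positions, assuming a bounding-rectangle area with a fixed padding margin and generic detector parameters (all assumptions, not taken from this disclosure):

```python
# Illustrative sketch; the 20-pixel margin and detector settings are assumed values.
import cv2
import numpy as np

def detect_corners_in_board_area(gray_frame: np.ndarray,
                                 corner_set: np.ndarray,
                                 margin: int = 20) -> np.ndarray:
    # Bounding rectangle of the corner point set, padded so that corners near the
    # border keep their local neighbourhood (their corner feature) inside the area.
    x, y, w, h = cv2.boundingRect(corner_set.astype(np.float32))
    x0, y0 = max(x - margin, 0), max(y - margin, 0)
    x1 = min(x + w + margin, gray_frame.shape[1])
    y1 = min(y + h + margin, gray_frame.shape[0])

    # Corner detection runs only inside the calibration board area (the detection
    # area), not over the whole frame.
    roi = gray_frame[y0:y1, x0:x1]
    corners = cv2.goodFeaturesToTrack(roi, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)
    # Shift ROI coordinates back into the full-frame image coordinate system.
    return corners.reshape(-1, 2) + np.array([x0, y0], dtype=np.float32)
```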
In one embodiment, when the processor executes the computer program, the corner point set is determined based on the corner points, detected in a preceding image frame, that can still be tracked into the target image frame; the preceding image frame is an image frame that precedes the target image frame in the video stream.
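A minimal sketch of propagating corners from the preceding frame into the target frame, assuming pyramidal Lucas-Kanade optical flow as the tracker and example window/pyramid parameters (the tracker choice and its parameters are assumptions, not taken from this disclosure):

```python
# Illustrative sketch; winSize and maxLevel are assumed parameters.
import cv2
import numpy as np

def track_corner_set(prev_gray: np.ndarray,
                     curr_gray: np.ndarray,
                     prev_corners: np.ndarray) -> np.ndarray:
    prev_pts = prev_corners.reshape(-1, 1, 2).astype(np.float32)
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    tracked = status.reshape(-1) == 1
    # Only corners that could be tracked into the target frame form the corner set.
    return curr_pts.reshape(-1, 2)[tracked]
```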
In one embodiment, the processor, when executing the computer program, further implements the following steps:
acquiring pose information, where the pose information indicates the positional relationship of the shooting device relative to the calibration board when the target image frame is captured;
and acquiring the position information of each corner point in the corner point set in the target image frame according to the pose information and an image coordinate system established based on the target image frame.
In one embodiment, when the processor executes the computer program, the corner point set is determined from the corner points corresponding to the vertices from which the contour of the calibration board can be determined, and/or the pose information is determined based on a coordinate system transformation between the coordinate system of the calibration board and the coordinate system of the shooting device when the target image frame is captured.
In one embodiment, when the processor executes the computer program, the first preset condition includes that the total number of corner points in the corner point set is greater than a preset threshold.
In one embodiment, when the processor executes the computer program, the calibration board area satisfies a second preset condition, where the second preset condition includes that each corner point in the corner point set is located in the calibration board area, or that the corner feature of each corner point in the corner point set can be retained in the calibration board area.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; the computer program, when executed by a processor, implements the following steps:
determining a calibration board area in a target image frame to be detected based on a corner point set, and taking the calibration board area as a detection area, where the corner point set is determined based on corner points in a calibration board, the target image frame is an image frame in a video stream, and the video stream is obtained by shooting the calibration board with a shooting device in a motion state;
and detecting corner points in the target image frame based on the detection area.
In one embodiment, the computer program, when executed by the processor, further implements the following steps: if the corner point set meets a first preset condition, acquiring position information of each corner point in the corner point set in the target image frame; and determining the calibration board area based on the position information of each corner point in the corner point set in the target image frame.
In one embodiment, when the computer program is executed by the processor, the corner point set is determined based on the corner points, detected in a preceding image frame, that can still be tracked into the target image frame; the preceding image frame is an image frame that precedes the target image frame in the video stream.
In one embodiment, the computer program, when executed by the processor, further implements the following steps: acquiring pose information, where the pose information indicates the positional relationship of the shooting device relative to the calibration board when the target image frame is captured; and acquiring the position information of each corner point in the corner point set in the target image frame according to the pose information and an image coordinate system established based on the target image frame.
In one embodiment, when the computer program is executed by the processor, the corner point set is determined from the corner points corresponding to the vertices from which the contour of the calibration board can be determined, and/or the pose information is determined based on a coordinate system transformation between the coordinate system of the calibration board and the coordinate system of the shooting device when the target image frame is captured.
In one embodiment, when the computer program is executed by the processor, the first preset condition includes that the total number of corner points in the corner point set is greater than a preset threshold.
In one embodiment, when the computer program is executed by the processor, the calibration board area satisfies a second preset condition, where the second preset condition includes that each corner point in the corner point set is located in the calibration board area, or that the corner feature of each corner point in the corner point set can be retained in the calibration board area.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and although their description is relatively specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art may make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A method of corner detection, the method comprising:
determining a calibration board area in a target image frame to be detected based on a corner point set, and taking the calibration board area as a detection area, wherein the corner point set is determined based on corner points in a calibration board, the target image frame is an image frame in a video stream, and the video stream is obtained by shooting the calibration board with a shooting device in a motion state;
and detecting corner points in the target image frame based on the detection area.
2. The method according to claim 1, wherein the determining a calibration board area in the target image frame to be detected based on the corner point set comprises:
if the corner point set meets a first preset condition, acquiring position information of each corner point in the corner point set in the target image frame;
and determining the calibration board area based on the position information of each corner point in the corner point set in the target image frame.
3. The method according to claim 2, wherein the corner point set is determined based on the corner points, detected in a preceding image frame, that can still be tracked into the target image frame; the preceding image frame is an image frame that precedes the target image frame in the video stream.
4. The method according to claim 2, wherein the acquiring position information of each corner point in the corner point set in the target image frame comprises:
acquiring pose information, the pose information indicating the positional relationship of the shooting device relative to the calibration board when the target image frame is captured;
and acquiring the position information of each corner point in the corner point set in the target image frame according to the pose information and an image coordinate system established based on the target image frame.
5. The method according to claim 4, wherein the corner point set is determined from the corner points corresponding to the vertices from which the contour of the calibration board can be determined, and/or the pose information is determined based on a coordinate system transformation between the coordinate system of the calibration board and the coordinate system of the shooting device when the target image frame is captured.
6. The method according to claim 2, wherein the first preset condition comprises that the total number of corner points in the corner point set is greater than a preset threshold.
7. The method according to any one of claims 1 to 6, wherein the calibration board area satisfies a second preset condition, the second preset condition comprising that each corner point in the corner point set is located in the calibration board area or that the corner feature of each corner point in the corner point set can be retained in the calibration board area.
8. A corner detection apparatus, the apparatus comprising:
a determination module, configured to determine a calibration board area in a target image frame to be detected based on a corner point set and take the calibration board area as a detection area, wherein the corner point set is determined based on corner points in a calibration board, the target image frame is an image frame in a video stream, and the video stream is obtained by shooting the calibration board with a shooting device in a motion state;
and a detection module, configured to detect corner points in the target image frame based on the detection area.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110684113.8A CN113569843B (en) | 2021-06-21 | 2021-06-21 | Corner detection method, corner detection device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113569843A true CN113569843A (en) | 2021-10-29 |
CN113569843B CN113569843B (en) | 2024-08-23 |
Family
ID=78162445
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110684113.8A Active CN113569843B (en) | 2021-06-21 | 2021-06-21 | Corner detection method, corner detection device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113569843B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114219867A (en) * | 2021-12-20 | 2022-03-22 | 上海肇观电子科技有限公司 | Method and device for calibrating camera, electronic equipment and readable storage medium |
CN114494450A (en) * | 2021-12-27 | 2022-05-13 | 上海欧菲智能车联科技有限公司 | Calibration plate, calibration method, calibration device, storage medium, and electronic apparatus |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102842117A (en) * | 2012-07-13 | 2012-12-26 | 浙江工业大学 | Method for correcting kinematic errors in microscopic vision system |
CN111145238A (en) * | 2019-12-12 | 2020-05-12 | 中国科学院深圳先进技术研究院 | Three-dimensional reconstruction method and device of monocular endoscope image and terminal equipment |
Also Published As
Publication number | Publication date |
---|---|
CN113569843B (en) | 2024-08-23 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||