CN111583344B - Method, system, computer device and storage medium for space-time calibration - Google Patents
- Publication number
- CN111583344B (application number CN202010387669.6A)
- Authority
- CN
- China
- Prior art keywords
- transformation
- calibration
- time calibration
- space
- time
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The application relates to a space-time calibration method, system, computer device and storage medium. The space-time calibration method comprises the following steps: determining constraint conditions of a first transformation, a second transformation, a third transformation and a space calibration according to the rigid coordinates of an OTS, and initializing the space calibration to a preset matrix, wherein the second transformation and the third transformation are used to constrain the time calibration; traversing the time calibration, determining the first transformation according to the preset matrix and the constraint condition, acquiring a minimum residual according to the first transformation and the constraint condition, and determining a first time calibration as a target time calibration according to the minimum residual; and acquiring a first space calibration according to the constraint condition, the first transformation and the target time calibration, and determining it as the target space calibration. The method solves the problem of low accuracy of the space-time calibration between an OTS and a camera.
Description
Technical Field
The present application relates to the field of camera calibration technologies, and in particular, to a method, a system, a computer device, and a storage medium for space-time calibration.
Background
At present, an optical tracking system (optical tracking system, OTS for short) is generally used to track the position of a predetermined object. An OTS and a camera may be applied in equipment such as AR systems or robots for tracking objects in real time. In order to improve the accuracy and robustness of the estimation by the AR system or robot, the OTS and the camera must be aligned with each other both spatially and temporally. In the related art, the spatial calibration problem is usually solved by methods such as hand-eye calibration, but such methods are usually applied to calibration between cameras, or between an inertial sensor and a laser radar; moreover, because the OTS provides only pose observations, the space-time calibration between the OTS and the camera cannot be determined precisely in the related art, resulting in low accuracy of the space-time calibration between the OTS and the camera.
For the problem of low accuracy of the space-time calibration between the OTS and the camera in the related art, no effective solution has been proposed at present.
Disclosure of Invention
The embodiment of the application provides a space-time calibration method, system, computer device and storage medium, which at least solve the problem of low space-time calibration accuracy between the OTS and the camera in the related art.
In a first aspect, an embodiment of the present application provides a method for space-time calibration, where the method includes:
determining constraint conditions of a first transformation, a second transformation, a third transformation and a space calibration according to the rigid coordinates of an OTS, and initializing the space calibration to a preset matrix; wherein the second transformation and the third transformation are used to constrain the time calibration;
the first transformation is a transformation from an OTS global coordinate system to a camera global coordinate system, the second transformation is a pose transformation from an OTS local coordinate system to the OTS global coordinate system, and the third transformation is a pose transformation from a camera local coordinate system to the camera global coordinate system;
traversing the time calibration, determining the first transformation according to the preset matrix and the constraint condition, acquiring a minimum residual according to the first transformation and the constraint condition, and determining a first time calibration as a target time calibration according to the minimum residual;
and acquiring a first space calibration and determining the first space calibration as a target space calibration according to the constraint condition, the first transformation and the target time calibration.
In some embodiments, traversing the time calibration, determining the first transformation according to the preset matrix and the constraint condition, obtaining a minimum residual according to the first transformation and the constraint condition, and determining a first time calibration as a target time calibration according to the minimum residual includes:
traversing the time calibration according to a first traversing interval, acquiring the first transformation according to the preset matrix and the constraint condition, and determining the first time calibration according to the first transformation;
traversing the time calibration according to a second traversing interval under the condition that the first time calibration is larger than or equal to a first preset threshold value, and redefining the second time calibration; the second traversal interval is smaller than the first traversal interval, and the precision of the second time calibration is higher than that of the first time calibration;
and under the condition that the second time calibration is smaller than the first preset threshold value, determining the second time calibration as the target time calibration.
In some embodiments, after the obtaining the first spatial calibration and determining as the target spatial calibration according to the constraint, the first transformation, and the target time calibration, the method further includes:
traversing the time calibration again according to the constraint condition and the first space calibration, determining a third time calibration according to the first transformation obtained by traversing, and redetermining the third time calibration as the target time calibration; and the precision of the third time calibration is higher than that of the second time calibration.
In some embodiments, after the obtaining the first spatial calibration and determining as the target spatial calibration according to the constraint, the first transformation, and the target time calibration, the method further includes:
according to the first space calibration, iterating through the time calibration, and re-determining a fourth time calibration and a second space calibration according to the constraint condition;
and stopping the iteration when the iteration number is greater than or equal to a second preset threshold or the minimum residual is less than or equal to a third preset threshold, and redefining the fourth time calibration as the target time calibration and redefining the second space calibration as the target space calibration.
In some of these embodiments, the traversing of the time calibration and the determining of the first transformation according to the preset matrix and the constraint condition include:
acquiring the second transformation and the third transformation during the traversal; and acquiring the first transformation according to the second transformation, the third transformation and the preset matrix; wherein the second transformation and the third transformation are each only partially constrained, i.e. constrained by their translation parts.
In some of these embodiments, said determining said first transformation according to said preset matrix and said constraint comprises:
according to the preset matrix and the constraint condition, the first transformation is obtained through an iterative closest point (Iterative Closest Point, abbreviated as ICP) algorithm.
In some embodiments, after the acquiring the first spatial calibration and determining as the target spatial calibration, the method further includes:
according to the target time calibration and the target space calibration, obtaining an output track of a vision simultaneous localization and mapping (Visual Simultaneous Localization And Mapping, VSLAM for short) algorithm under the OTS global coordinate system; and obtaining the track precision evaluation of the VSLAM algorithm through the track observation of the output track and the OTS.
In a second aspect, embodiments of the present application provide a space-time calibration system, the system comprising: camera, OTS and control means;
the control device determines constraint conditions of a first transformation, a second transformation, a third transformation and a space calibration according to the rigid coordinates of the OTS, and initializes the space calibration to a preset matrix; wherein the second transformation and the third transformation are used to constrain the time calibration;
the first transformation is a transformation from an OTS global coordinate system to a camera global coordinate system, the second transformation is a pose transformation from an OTS local coordinate system to the OTS global coordinate system, and the third transformation is a pose transformation from a camera local coordinate system to the camera global coordinate system;
the control device traverses the time calibration, determines the first transformation according to the preset matrix and the constraint condition, obtains a minimum residual according to the first transformation and the constraint condition, and determines a first time calibration as a target time calibration according to the minimum residual;
and the control device acquires a first space calibration and determines the first space calibration as a target space calibration according to the constraint condition, the first transformation and the target time calibration.
In a third aspect, an embodiment of the present application provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the method of spatio-temporal calibration as described in the first aspect above when the processor executes the computer program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which when executed by a processor implements a method of spatio-temporal calibration as described in the first aspect above.
Compared with the related art, the space-time calibration method, system, computer device and storage medium provided by the embodiment of the application determine the constraint conditions of the first transformation, the second transformation, the third transformation and the space calibration according to the rigid coordinates of the OTS, and initialize the space calibration to a preset matrix, wherein the second transformation and the third transformation are used to constrain the time calibration; the first transformation is the transformation from the OTS global coordinate system to the camera global coordinate system, the second transformation is the pose transformation from the OTS local coordinate system to the OTS global coordinate system, and the third transformation is the pose transformation from the camera local coordinate system to the camera global coordinate system; the time calibration is traversed, the first transformation is determined according to the preset matrix and the constraint condition, a minimum residual is obtained according to the first transformation and the constraint condition, and a first time calibration is determined as the target time calibration according to the minimum residual; a first space calibration is obtained according to the constraint condition, the first transformation and the target time calibration and is determined as the target space calibration, which solves the problem of low accuracy of the space-time calibration between the OTS and the camera.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the other features, objects, and advantages of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a first flowchart of a space-time calibration method according to an embodiment of the application;
FIG. 2 is a second flowchart of a space-time calibration method according to an embodiment of the application;
FIG. 3 is a third flowchart of a space-time calibration method according to an embodiment of the application;
FIG. 4 is a fourth flowchart of a space-time calibration method according to an embodiment of the application;
FIG. 5 is a fifth flowchart of a space-time calibration method according to an embodiment of the application;
FIG. 6 is a block diagram of a space-time calibration system according to an embodiment of the application;
FIG. 7 is a schematic diagram of a hardware structure of a computer device according to an embodiment of the application.
Detailed Description
The present application will be described and illustrated with reference to the accompanying drawings and examples in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. All other embodiments, which can be made by a person of ordinary skill in the art based on the embodiments provided by the present application without making any inventive effort, are intended to fall within the scope of the present application.
It is apparent that the drawings in the following description are only some examples or embodiments of the present application, and those of ordinary skill in the art may also apply the present application to other similar situations according to these drawings without inventive effort. Moreover, it should be appreciated that although such a development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill in the art having the benefit of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the described embodiments of the application can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," and similar referents in the context of the application are not to be construed as limiting the quantity, but rather as singular or plural. The terms "comprising," "including," "having," and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to only those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in connection with the present application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means two or more. "and/or" describes an association relationship of an association object, meaning that there may be three relationships, e.g., "a and/or B" may mean: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship. The terms "first," "second," "third," and the like, as used herein, are merely distinguishing between similar objects and not representing a particular ordering of objects.
In this embodiment, a space-time calibration method is provided. FIG. 1 is a flow chart I of a space-time calibration method according to an embodiment of the application, as shown in FIG. 1, comprising the steps of:
step S102, determining constraint conditions of first transformation, second transformation, third transformation and space calibration according to rigid coordinates of OTS at any moment, wherein the first transformation is transformation from an OTS global coordinate system to a camera global coordinate system, the second transformation is pose transformation from an OTS local coordinate system to the OTS global coordinate system, and the third transformation is pose transformation from the camera local coordinate system to the camera global coordinate system. The constraint is as shown in equation 1:
equation 1
Furthermore, since the spatiotemporal calibration dt calculates the time difference between OTS and the camera, equation 1 may also be modified as shown in equation 2:
equation 2
Wherein the second transformation and the third transformation are used for constrained time scaling; OTS will typically track N (N>=4) rigid bodies composed of reflective balls not on the same plane, optical Tracking System Balls (abbreviated as OTSB) refers to rigid bodies composed of the N reflective balls;the rigid coordinates are represented at any time. In formula 1, g represents global, and l represents local. />And is the corresponding relation of local to global pose transformation under OTS system at the i+dt th moment, namely the second transformation; />The third transformation is the corresponding relation of 'local to global pose transformation under a camera system' at the ith moment; />And->For constraint solving time scaling, but needs to be known +.>。/>Is the first transformation of global pose of the OTS and the camera, and is a fixed value if the time calibration is known; />The OTSB is external to the camera and its solution is the result of the spatial calibration.
Thus, equation 1 can be simplified as shown in equation 3:
equation 3
Wherein dt is the time scale to be solved; n is the space calibration to be solved, namely the pose transformation from the OTS local coordinate system to the camera local coordinate system; m is a first transformation that is an intermediate unknowns for solution space-time calibration;is a second transformation, and the second transformation is the pose of the ith+dt moment output by OTS; />Is a third transformation which is the pose of the ith moment of the camera output,/position>Can be obtained through a calibration plate.
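For illustration only, the per-frame residual of the constraint reconstructed above can be sketched in a few lines of Python/numpy; the function name constraint_residual and the argument layout (4 × 4 homogeneous matrices) are assumptions made here and are not part of the original disclosure:

```python
import numpy as np

def constraint_residual(T_c, T_o, M, N):
    """Per-frame residual of the constraint (reconstructed equation 3).

    T_c : 4x4 camera pose, camera local -> camera global (third transformation)
    T_o : 4x4 OTS pose, OTS local -> OTS global (second transformation)
    M   : 4x4 first transformation, OTS global -> camera global
    N   : 4x4 spatial calibration, OTS local -> camera local
    """
    left = T_c @ N            # left-hand side of equation 3
    right = M @ T_o           # right-hand side of equation 3
    # Frobenius norm of the difference as a scalar residual
    return np.linalg.norm(left - right)
```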
Step S104, initializing the space calibration to a preset matrix, where the preset matrix can be set by the user; for example, the preset matrix may be set to an identity matrix, i.e. the space calibration is initialized as N = diag(1, 1, 1, 1).
Step S106, traversing the time calibration; for example, the traversal is dt = [-100 ms, 10 ms, 100 ms], i.e. dt increases from -100 ms in steps of 10 ms until it reaches 100 ms. The first transformation is determined according to the preset matrix and the constraint condition, i.e. N = diag(1, 1, 1, 1) is substituted into equation 2, and for each traversed dt the second transformation $T^{og}_{ol}(i+dt)$ and the third transformation $T^{cg}_{cl}(i)$ are obtained; meanwhile, the second transformation and the third transformation are only partially constrained, i.e. only the translation vectors in the x, y and z directions of these two poses are used.
The first transformation can then be obtained from the above substitution into equation 2, where the first transformation may be solved by the ICP algorithm. Next, a minimum residual is obtained according to the first transformation and the constraint condition; specifically, the first transformation is substituted into equation 2, the calculation results of the left side and the right side of equation 2 are compared after substituting the data, and the difference between the left-side and right-side results is taken as the residual. In this way a residual is obtained for each traversed dt, and the magnitudes of this series of residuals are compared to obtain the minimum residual; the time calibration at which the minimum residual is obtained during the traversal is then recorded as the first time calibration $dt_1$ and determined as the target time calibration.
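As a non-authoritative sketch of this step, the first transformation can be estimated by rigidly aligning the two sets of translation vectors. Since timestamp matching already fixes the correspondences, a single Kabsch/Umeyama alignment (the closed-form core of ICP) is shown here in place of a full ICP loop; the helper name solve_first_transformation is an assumption for illustration:

```python
import numpy as np

def solve_first_transformation(p_cam, p_ots):
    """Rigidly align OTS-global translations to camera-global translations.

    p_cam, p_ots : (K, 3) arrays of matched translation vectors
                   (camera side at time i, OTS side at time i + dt).
    Returns a 4x4 transform M such that p_cam ~= R @ p_ots + t.
    With timestamp-matched samples the correspondences are known, so one
    Kabsch/Umeyama step suffices; a full ICP would re-estimate the
    correspondences between alignment steps.
    """
    mu_c, mu_o = p_cam.mean(axis=0), p_ots.mean(axis=0)
    H = (p_ots - mu_o).T @ (p_cam - mu_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                             # rotation without reflection
    t = mu_c - R @ mu_o
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, t
    return M
```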
Step S108, acquiring a first space calibration according to the constraint condition, the first transformation and the target time calibration, and determining it as the target space calibration; since the first transformation and the first time calibration have already been determined, substituting the value of the first transformation and the first time calibration into equation 2 yields the first space calibration, which is taken as the target space calibration. The target time calibration and the target space calibration are the result of the space-time calibration between the OTS and the camera.
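Assuming the reconstructed form of equation 3, a minimal sketch of this step recovers the spatial calibration from dt-aligned pose pairs and the already-solved first transformation; the averaging strategy and the helper name solve_spatial_calibration are illustrative choices, not the patent's prescribed implementation:

```python
import numpy as np

def solve_spatial_calibration(T_c_list, T_o_list, M):
    """Recover N from the reconstructed equation 3: N = inv(T_c(i)) @ M @ T_o(i + dt).

    T_c_list / T_o_list : dt-aligned 4x4 poses (camera side / OTS side).
    M                   : the already-solved first transformation.
    A plain average over frames is used; the rotation block is projected
    back onto SO(3) with an SVD so the result remains a valid pose.
    """
    stack = np.array([np.linalg.inv(T_c) @ M @ T_o
                      for T_c, T_o in zip(T_c_list, T_o_list)])
    N = stack.mean(axis=0)
    U, _, Vt = np.linalg.svd(N[:3, :3])
    d = np.sign(np.linalg.det(U @ Vt))
    N[:3, :3] = U @ np.diag([1.0, 1.0, d]) @ Vt   # nearest rotation matrix
    N[3, :] = [0.0, 0.0, 0.0, 1.0]
    return N
```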
Through the above steps S102 to S108, the constraint conditions of the time calibration and the space calibration are determined, i.e. a constraint formula for the space-time calibration between the OTS and the camera is established. Compared with the formula commonly used in the related art for hand-eye calibration, the embodiment of the application adds an intermediate unknown quantity for solving the space-time calibration, namely the first transformation: the space calibration to be solved is first initialized, the first transformation is then calculated from the initialized value and the constraint formula, and the result of the space-time calibration is finally obtained by solving the constraint formula according to the first transformation. This avoids the situation in which the space-time calibration cannot be realized because the OTS can only observe poses, so the method can be used directly for the space-time calibration between the camera and the OTS, and it solves the problem of low accuracy of the space-time calibration between the OTS and the camera.
In some embodiments, a space-time calibration method is provided, fig. 2 is a flowchart two of the space-time calibration method according to an embodiment of the present application, as shown in fig. 2, and the method includes the following steps:
step S202, traversing the time calibration according to the first traversing interval, for example, setting the first traversing interval to 10ms, and traversing dt= [ -100ms,10ms,100ms ], i.e. dt increases from-100 ms by 10ms each step until 100 ms; substituting a preset matrix into a formula 2 in the traversal process, acquiring the first transformation according to the constraint condition, and determining the first time calibration according to the first transformation.
Step S204, detecting whether the first time calibration reaches a first preset threshold, where the preset threshold may be set by the user, for example to 1 ms. When the first time calibration is greater than or equal to the first preset threshold, the obtained time calibration dt has not yet reached the required precision, so the traversal space is further narrowed and the time calibration is traversed according to the second traversal interval, e.g. dt = [$dt_1$ - 10 ms, 1 ms, $dt_1$ + 10 ms], where $dt_1$ represents the value of the first time calibration; a second time calibration is then re-determined during this traversal. The second traversal interval is smaller than the first traversal interval, and the precision of the second time calibration is higher than that of the first time calibration.
Step S206, when the second time calibration is smaller than the first preset threshold, the second time calibration already meets the precision requirement and the time calibration dt does not need to be traversed further, so the second time calibration is determined as the target time calibration. Otherwise, the traversal space is further narrowed by shrinking the traversal interval, e.g. dt = [$dt_2$ - 1 ms, 0.1 ms, $dt_2$ + 1 ms], where $dt_2$ represents the value of the second time calibration, and the step of traversing the time calibration and solving the first transformation is repeated until the time calibration determined while traversing to acquire the first transformation meets the precision requirement; the time calibration meeting the precision requirement is then determined as the target time calibration.
Through the above steps S202 to S206, multiple traversals are performed with a gradually shortened traversal interval, and in each traversal the time calibration is determined according to the constraint condition until it reaches the set precision requirement, which ensures that the target time calibration of the OTS and camera system is solved accurately and improves the accuracy of the space-time calibration method.
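A minimal sketch of this coarse-to-fine search is given below. It reads the "first preset threshold" as the required time resolution of the search and assumes a caller-supplied residual_at(dt) routine that solves the first transformation and returns the minimum residual of equation 2 for a candidate dt; both are illustrative assumptions rather than the patent's exact procedure:

```python
import numpy as np

def search_time_calibration(residual_at, dt_lo=-0.100, dt_hi=0.100,
                            step=0.010, precision=0.001, shrink=10.0):
    """Coarse-to-fine search for the time calibration dt (in seconds).

    residual_at(dt) is assumed to solve the first transformation for the
    candidate dt and return the minimum residual of equation 2.
    The grid step shrinks by `shrink` each round until it reaches the
    required precision (read here as the "first preset threshold").
    """
    while True:
        grid = np.arange(dt_lo, dt_hi + step / 2, step)
        best_dt = min(grid, key=residual_at)       # dt with the minimum residual
        if step <= precision:                      # required accuracy reached
            return float(best_dt)
        # narrow the window around the current best value and refine the step
        dt_lo, dt_hi = best_dt - step, best_dt + step
        step /= shrink
```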
In some embodiments, a space-time calibration method is provided, and fig. 3 is a flowchart III of the space-time calibration method according to an embodiment of the present application, as shown in fig. 3, and the method includes the following steps:
step S302, substituting the solved first space calibration as the value of N into the formula 2, repeating the traversal of the time calibration in the steps S302 to S306 again according to the constraint condition and the first space calibration, redefining a third time calibration according to the first transformation obtained by the traversal, and redefining the third time calibration as the target time calibration; wherein the accuracy of the third time scaling is higher than the accuracy of the second time scaling.
Through the above step S302, the target space calibration obtained by the embodiment of the present application is substituted into the constraint condition to traverse the time calibration again, so that a first transformation that fits the constraint condition better can be obtained, and in turn a target time calibration with higher precision, further improving the accuracy of the space-time calibration between the OTS and the camera.
In some embodiments, a space-time calibration method is provided, and fig. 4 is a flowchart of a space-time calibration method according to an embodiment of the present application, as shown in fig. 4, and the method includes the following steps:
step S402, iterating through the time calibration according to the first space calibration, namely substituting the first space calibration into N in the formula 2, and redefining a fourth time calibration and a second space calibration according to the constraint condition.
Step S404, when the number of iterations is greater than or equal to the second preset threshold, i.e. the number of iterations reaches the maximum value set by the user, or when the minimum residual is less than or equal to the third preset threshold, i.e. the error is smaller than the error threshold set by the user, the iteration is stopped, the fourth time calibration is re-determined as the target time calibration, and the second space calibration is re-determined as the target space calibration. Otherwise, the error of the solution is still large and the number of iterations has not reached the maximum value, so the iteration needs to continue: the second space calibration is substituted for N, and a fifth time calibration and a third space calibration are re-determined according to the constraint condition, until the number of iterations reaches the maximum value or the error of the space-time calibration is smaller than the set error threshold.
Through the above steps S402 to S404, the target time calibration and the target space calibration are re-determined through an iterative solution algorithm, and the space-time calibration result can be continuously optimized during the iteration, thereby improving the accuracy of the space-time calibration; meanwhile, the user can set the condition for ending the iteration so as to adjust the precision of the space-time calibration result according to the actual situation.
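For illustration, the alternation of steps S402 to S404 can be sketched as a small loop over placeholder callables; the callable names and their signatures are assumptions tied to the sketches above, not part of the original disclosure:

```python
import numpy as np

def alternate_calibration(search_dt, solve_M, solve_N, residual,
                          max_iters=10, tol=1e-6):
    """Alternating time/space calibration, a sketch of steps S402 to S404.

    search_dt(N)       -> best dt for a fixed spatial calibration N
    solve_M(dt, N)     -> first transformation for the given dt and N
    solve_N(dt, M)     -> spatial calibration for the given dt and M
    residual(dt, M, N) -> minimum residual of the constraint
    All four callables are placeholders for the routines sketched above.
    """
    N = np.eye(4)                          # preset matrix: identity
    dt, M = None, None
    for _ in range(max_iters):             # "second preset threshold"
        dt = search_dt(N)                  # re-traverse the time calibration
        M = solve_M(dt, N)
        N = solve_N(dt, M)                 # refresh the spatial calibration
        if residual(dt, M, N) <= tol:      # "third preset threshold"
            break
    return dt, N
```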
In some embodiments, a space-time calibration method is provided, and fig. 5 is a flowchart five of the space-time calibration method according to an embodiment of the present application, as shown in fig. 5, and the method includes the following steps:
step S502, obtaining a track precision evaluation result through a Visual SLAM algorithm according to the target time calibration and the target space calibration; the Visual SLAM algorithm can be suitable for camera attitude estimation in an AR system; because the obtained target time calibration and target space calibration are higher in precision, the error between the obtained target time calibration and target space calibration is smaller by comparing the output track of the VSLAM algorithm under the OTS global coordinate system and the track observation obtained by OTS; the trace error result obtained by the VSLAM algorithm is smaller by comparing the output trace with the trace observation. Through the step S502, the track accuracy of the VSLAM algorithm is evaluated by acquiring the space-time calibration results of the OTS and the camera, so that the accuracy of the evaluation of the VSLAM algorithm is effectively improved.
It should be understood that, although the steps in the flowcharts of FIG. 1 to FIG. 5 are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in FIG. 1 to FIG. 5 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and the execution order of these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
In this embodiment, a space-time calibration system is provided, and fig. 6 is a block diagram of a space-time calibration system according to an embodiment of the present application, as shown in fig. 6, where the system includes: a camera 62, OTS66 and a control device 64.
The control device 64 determines constraint conditions of the first transformation, the second transformation, the third transformation and the target space calibration according to the rigid coordinates of the OTS 66 at any moment, and initializes the target space calibration to a preset matrix, wherein the second transformation and the third transformation are used to constrain the target time calibration; the first transformation is a transformation from the OTS 66 global coordinate system to the camera 62 global coordinate system, the second transformation is a pose transformation from the OTS 66 local coordinate system to the OTS 66 global coordinate system, and the third transformation is a pose transformation from the camera 62 local coordinate system to the camera 62 global coordinate system.
The control device 64 traverses the time calibration, determines the first transformation according to the preset matrix and the constraint condition, obtains a minimum residual according to the first transformation and the constraint condition, and determines a first time calibration as the target time calibration according to the minimum residual; the control device 64 obtains a first spatial calibration and determines the target spatial calibration based on the constraint, the first transformation, and the target time calibration.
Through the above embodiment, the constraint conditions of the time calibration and the space calibration are determined, i.e. a constraint formula for the space-time calibration between the OTS and the camera is established. Compared with the formula commonly used in the related art for hand-eye calibration, the embodiment of the application adds an intermediate unknown quantity for solving the space-time calibration, namely the first transformation: the control device 64 first initializes the space calibration to be solved, obtains the first transformation by calculation from the initialized value and the constraint formula, and then solves the constraint formula according to the first transformation to obtain the result of the space-time calibration. This avoids the situation in which the space-time calibration cannot be realized because the OTS can only observe poses, so the method can be used directly for the space-time calibration between the camera and the OTS, and it solves the problem of low accuracy of the space-time calibration between the OTS and the camera.
In some of these embodiments, the control device 64 is further configured to traverse the time calibration according to a first traversal interval, obtain the first transformation according to the preset matrix and the constraint condition, and determine the first time calibration according to the first transformation.
The control device 64 traverses the time calibration according to a second traversal interval and re-determines a second time calibration if the first time calibration is greater than or equal to a first preset threshold; the second traversal interval is smaller than the first traversal interval, and the precision of the second time calibration is higher than that of the first time calibration; the control device 64 determines the second time calibration as the target time calibration if the second time calibration is less than the first preset threshold.
In some embodiments, the control device 64 is further configured to traverse the time calibration again according to the constraint condition and the first space calibration, re-determine a third time calibration according to the first transformation obtained by the traversal, and re-determine the third time calibration as the target time calibration; wherein the precision of the third time calibration is higher than that of the second time calibration.
In some of these embodiments, the control device 64 is further configured to iteratively traverse the time calibration based on the first space calibration and re-determine a fourth time calibration and a second space calibration according to the constraint condition; the control device 64 stops the iteration if the number of iterations is greater than or equal to a second preset threshold or the minimum residual is less than or equal to a third preset threshold, re-determines the fourth time calibration as the target time calibration, and re-determines the second space calibration as the target space calibration.
In some of these embodiments, the control device 64 is further configured to obtain the second transformation and the third transformation during the traversal, and to obtain the first transformation according to the second transformation, the third transformation and the preset matrix; wherein the second transformation and the third transformation are each only partially constrained, i.e. constrained by their translation parts.
In some of these embodiments, the control device 64 is further configured to obtain the first transformation by an ICP algorithm according to the preset matrix and the constraint.
In some embodiments, the control device 64 is further configured to obtain an output track of the VSLAM algorithm under the OTS global coordinate system according to the target time calibration and the target space calibration; and obtaining the track precision evaluation of the VSLAM algorithm through the track observation of the output track and the OTS.
In addition, the method of space-time calibration of the embodiment of the present application described in connection with FIG. 1 may be implemented by a computer device. Fig. 7 is a schematic diagram of a hardware structure of a computer device according to an embodiment of the present application.
The computer device may include a processor 72 and a memory 74 storing computer program instructions.
In particular, the processor 72 may include a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, abbreviated ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present application.
Memory 74 may include, among other things, mass storage for data or instructions. By way of example, and not limitation, memory 74 may include a hard disk drive (Hard Disk Drive, HDD for short), a floppy disk drive, a solid state drive (Solid State Drive, SSD for short), flash memory, an optical disk, a magneto-optical disk, tape, or a universal serial bus (Universal Serial Bus, USB for short) drive, or a combination of two or more of these. The memory 74 may include removable or non-removable (or fixed) media, where appropriate. The memory 74 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 74 is a non-volatile memory. In particular embodiments, memory 74 includes read-only memory (Read-Only Memory, ROM for short) and random access memory (Random Access Memory, RAM for short). Where appropriate, the ROM may be a mask-programmed ROM, a programmable ROM (Programmable Read-Only Memory, PROM for short), an erasable PROM (Erasable Programmable Read-Only Memory, EPROM for short), an electrically erasable PROM (Electrically Erasable Programmable Read-Only Memory, EEPROM for short), an electrically rewritable ROM (Electrically Alterable Read-Only Memory, EAROM for short), or a flash memory (FLASH), or a combination of two or more of these. Where appropriate, the RAM may be a static random-access memory (Static Random-Access Memory, SRAM for short) or a dynamic random-access memory (Dynamic Random Access Memory, DRAM for short), where the DRAM may be a fast page mode DRAM (Fast Page Mode Dynamic Random Access Memory, FPMDRAM for short), an extended data output DRAM (Extended Data Out Dynamic Random Access Memory, EDODRAM for short), a synchronous DRAM (Synchronous Dynamic Random-Access Memory, SDRAM for short), or the like.
Memory 74 may be used to store or cache various data files requiring processing and/or communication, as well as possible computer program instructions for execution by processor 72.
The processor 72 implements any of the spatiotemporal calibration methods of the above embodiments by reading and executing computer program instructions stored in the memory 74.
In some of these embodiments, the computer device may also include a communication interface 76 and a bus 78. The processor 72, the memory 74, and the communication interface 76 are connected to each other via a bus 78 and communicate with each other as shown in fig. 7.
Communication interface 76 is used to enable communication between the modules, devices, units and/or sub-units in the embodiments of the application. The communication interface 76 may also enable data communication with other components, such as external devices, image/data acquisition devices, databases, external storage, image/data processing workstations and the like.
The bus 78 includes hardware, software, or both, and couples the components of the computer device to one another. The bus 78 includes, but is not limited to, at least one of: a data bus (Data Bus), an address bus (Address Bus), a control bus (Control Bus), an expansion bus (Expansion Bus), and a local bus (Local Bus). By way of example, and not limitation, bus 78 may include an accelerated graphics port (Accelerated Graphics Port, AGP for short) or other graphics bus, an enhanced industry standard architecture (Extended Industry Standard Architecture, EISA for short) bus, a front side bus (Front Side Bus, FSB for short), a HyperTransport (HT) interconnect, an industry standard architecture (Industry Standard Architecture, ISA for short) bus, an InfiniBand interconnect, a low pin count (Low Pin Count, LPC for short) bus, a memory bus, a micro channel architecture (Micro Channel Architecture, MCA for short) bus, a peripheral component interconnect (Peripheral Component Interconnect, PCI for short) bus, a PCI-Express (PCI-X) bus, a serial advanced technology attachment (Serial Advanced Technology Attachment, SATA for short) bus, a video electronics standards association local (Video Electronics Standards Association Local Bus, VLB for short) bus, or other suitable bus, or a combination of two or more of these. Bus 78 may include one or more buses, where appropriate. Although embodiments of the application have been described and illustrated with respect to a particular bus, the application contemplates any suitable bus or interconnect.
The computer device may perform the space-time calibration method according to the embodiment of the present application based on the obtained constraint conditions, thereby implementing the space-time calibration method described in connection with fig. 1.
In addition, in combination with the space-time calibration method in the above embodiment, the embodiment of the application may be implemented by providing a computer readable storage medium. The computer readable storage medium has stored thereon computer program instructions; the computer program instructions, when executed by a processor, implement any of the space-time calibration methods of the above embodiments.
The technical features of the above-described embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above-described embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.
Claims (10)
1. A method of space-time calibration, the method comprising:
determining constraint conditions of a first transformation, a second transformation, a third transformation and a space calibration according to rigid coordinates of an OTS (optical tracking system), and initializing the space calibration to a preset matrix; wherein the second transformation and the third transformation are used to constrain the time calibration;
the first transformation is a transformation from an OTS global coordinate system to a camera global coordinate system, the second transformation is a pose transformation from an OTS local coordinate system to the OTS global coordinate system, and the third transformation is a pose transformation from a camera local coordinate system to the camera global coordinate system;
traversing the time calibration, determining the first transformation according to the preset matrix and the constraint condition, acquiring a minimum residual according to the first transformation and the constraint condition, and determining a first time calibration as a target time calibration according to the minimum residual;
and acquiring a first space calibration and determining the first space calibration as a target space calibration according to the constraint condition, the first transformation and the target time calibration.
2. The method of claim 1, wherein traversing the time calibration, determining the first transformation based on the preset matrix and the constraint condition, obtaining a minimum residual based on the first transformation and the constraint condition, and determining a first time calibration based on the minimum residual as a target time calibration comprises:
traversing the time calibration according to a first traversing interval, acquiring the first transformation according to the preset matrix and the constraint condition, and determining the first time calibration according to the first transformation;
traversing the time calibration according to a second traversing interval under the condition that the first time calibration is larger than or equal to a first preset threshold value, and redefining the second time calibration; the second traversal interval is smaller than the first traversal interval, and the precision of the second time calibration is higher than that of the first time calibration;
and under the condition that the second time calibration is smaller than the first preset threshold value, determining the second time calibration as the target time calibration.
3. The method of claim 2, wherein after the obtaining the first spatial calibration and determining as the target spatial calibration based on the constraint, the first transformation, and the target time calibration, the method further comprises:
traversing the time calibration again according to the constraint condition and the first space calibration, determining a third time calibration according to the first transformation obtained by traversing, and redetermining the third time calibration as the target time calibration; and the precision of the third time calibration is higher than that of the second time calibration.
4. The method of claim 1, wherein after the obtaining the first spatial calibration and determining as the target spatial calibration based on the constraint, the first transformation, and the target time calibration, the method further comprises:
according to the first space calibration, iterating through the time calibration, and re-determining a fourth time calibration and a second space calibration according to the constraint condition;
and stopping the iteration when the iteration number is greater than or equal to a second preset threshold or the minimum residual is less than or equal to a third preset threshold, and redefining the fourth time calibration as the target time calibration and redefining the second space calibration as the target space calibration.
5. The method of claim 1, wherein said traversing the time calibration and determining the first transformation based on the preset matrix and the constraint condition comprises:
acquiring the second transformation and the third transformation during the traversal; acquiring the first transformation according to the second transformation, the third transformation and the preset matrix; wherein the second transformation and the third transformation are each only partially constrained, i.e. constrained by their translation parts.
6. The method of claim 1, wherein said determining the first transformation from the preset matrix and the constraint comprises:
and acquiring the first transformation through an iterative closest point ICP algorithm according to the preset matrix and the constraint condition.
7. The method according to any one of claims 1 to 6, wherein after the acquiring the first spatial calibration and determining as the target spatial calibration, the method further comprises:
obtaining an output track of a VSLAM algorithm under the OTS global coordinate system according to the target time calibration and the target space calibration; and obtaining the track precision evaluation of the VSLAM algorithm through the track observation of the output track and the OTS.
8. A system for space-time calibration, the system comprising: camera, OTS and control means;
the control device determines constraint conditions of a first transformation, a second transformation, a third transformation and a space calibration according to the rigid coordinates of the OTS, and initializes the space calibration to a preset matrix; wherein the second transformation and the third transformation are used to constrain the time calibration;
the first transformation is a transformation from an OTS global coordinate system to a camera global coordinate system, the second transformation is a pose transformation from an OTS local coordinate system to the OTS global coordinate system, and the third transformation is a pose transformation from a camera local coordinate system to the camera global coordinate system;
the control device traverses the time calibration, determines the first transformation according to the preset matrix and the constraint condition, obtains a minimum residual according to the first transformation and the constraint condition, and determines a first time calibration as a target time calibration according to the minimum residual;
and the control device acquires a first space calibration and determines the first space calibration as a target space calibration according to the constraint condition, the first transformation and the target time calibration.
9. Computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of spatio-temporal calibration according to any of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements a method of spatio-temporal calibration according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010387669.6A CN111583344B (en) | 2020-05-09 | 2020-05-09 | Method, system, computer device and storage medium for space-time calibration |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010387669.6A CN111583344B (en) | 2020-05-09 | 2020-05-09 | Method, system, computer device and storage medium for space-time calibration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111583344A CN111583344A (en) | 2020-08-25 |
CN111583344B true CN111583344B (en) | 2023-09-12 |
Family
ID=72110849
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010387669.6A Active CN111583344B (en) | 2020-05-09 | 2020-05-09 | Method, system, computer device and storage medium for space-time calibration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111583344B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110288660A (en) * | 2016-11-02 | 2019-09-27 | 北京信息科技大学 | A Robot Hand-Eye Calibration Method Based on Convex Relaxation Global Optimization Algorithm |
CN111070199A (en) * | 2018-10-18 | 2020-04-28 | 杭州海康威视数字技术股份有限公司 | Hand-eye calibration assessment method and robot |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107481284A (en) * | 2017-08-25 | 2017-12-15 | 京东方科技集团股份有限公司 | Method, apparatus, terminal and the system of target tracking path accuracy measurement |
US11386572B2 (en) * | 2018-02-03 | 2022-07-12 | The Johns Hopkins University | Calibration system and method to align a 3D virtual scene and a 3D real world for a stereoscopic head-mounted display |
CN108765498B (en) * | 2018-05-30 | 2019-08-23 | 百度在线网络技术(北京)有限公司 | Monocular vision tracking, device and storage medium |
- 2020-05-09: Application CN202010387669.6A filed in China; granted as patent CN111583344B (status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110288660A (en) * | 2016-11-02 | 2019-09-27 | 北京信息科技大学 | A Robot Hand-Eye Calibration Method Based on Convex Relaxation Global Optimization Algorithm |
CN111070199A (en) * | 2018-10-18 | 2020-04-28 | 杭州海康威视数字技术股份有限公司 | Hand-eye calibration assessment method and robot |
Non-Patent Citations (1)
Title |
---|
任杰轩, 张旭, 刘少丽, 王治, 吴天一. A high-precision robot hand-eye calibration method (一种高精度机器人手眼标定方法). 现代制造工程 (Modern Manufacturing Engineering), 2020, full text. *
Also Published As
Publication number | Publication date |
---|---|
CN111583344A (en) | 2020-08-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Urban et al. | Mlpnp-a real-time maximum likelihood solution to the perspective-n-point problem | |
CN107980138B (en) | False alarm obstacle detection method and device | |
CN111060101A (en) | Vision-assisted distance SLAM method and device and robot | |
US7333133B2 (en) | Recursive least squares approach to calculate motion parameters for a moving camera | |
CN112541950B (en) | A method and device for calibrating external parameters of a depth camera | |
CN109977466A (en) | A kind of 3-D scanning viewpoint planning method, apparatus and computer readable storage medium | |
US10742952B2 (en) | Three dimensional reconstruction method, apparatus and non-transitory computer readable storage medium thereof | |
CN112465877A (en) | Kalman filtering visual tracking stabilization method based on motion state estimation | |
JP2014216813A (en) | Camera attitude estimation device and program therefor | |
CN111538029A (en) | Vision and radar fusion measuring method and terminal | |
CN111583344B (en) | Method, system, computer device and storage medium for space-time calibration | |
US20250029279A1 (en) | Camera Calibration Method and Apparatus | |
CN116630442B (en) | Visual SLAM pose estimation precision evaluation method and device | |
CN105339981A (en) | Method for registering data using set of primitives | |
CN112313707B (en) | Tracking methods and movable platforms | |
CN116721166B (en) | Binocular camera and IMU rotation external parameter online calibration method, device and storage medium | |
US9245343B1 (en) | Real-time image geo-registration processing | |
CN117889855A (en) | Mobile robot positioning method, mobile robot positioning device, mobile robot positioning equipment and storage medium | |
CN114894222B (en) | External parameter calibration method of IMU-GNSS antenna and related method and equipment | |
CN112344966B (en) | Positioning failure detection method and device, storage medium and electronic equipment | |
CN112325770B (en) | Method and system for evaluating confidence of relative precision of monocular vision measurement at vehicle end | |
Marko et al. | Automatic Stereo Camera Calibration in Real-World Environments Without Defined Calibration Objects | |
CN114076946A (en) | A motion estimation method and device | |
CN111695379A (en) | Ground segmentation method and device based on stereoscopic vision, vehicle-mounted equipment and storage medium | |
CN110349214B (en) | Object positioning method, terminal and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |