
CN107316319B - Rigid body tracking method, device and system - Google Patents


Info

Publication number
CN107316319B
CN107316319B (application CN201710392600.0A)
Authority
CN
China
Prior art keywords
rigid body
camera
image
relative
dimensional position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710392600.0A
Other languages
Chinese (zh)
Other versions
CN107316319A (en)
Inventor
崔珊珊
孙涛
孙恩情
舒玉龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Pico Technology Co Ltd
Original Assignee
Beijing Pico Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Pico Technology Co Ltd filed Critical Beijing Pico Technology Co Ltd
Priority to CN201710392600.0A priority Critical patent/CN107316319B/en
Publication of CN107316319A publication Critical patent/CN107316319A/en
Application granted granted Critical
Publication of CN107316319B publication Critical patent/CN107316319B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a rigid body tracking method, device and system. The method comprises the following steps: acquiring an image of a rigid body captured by a camera, and determining the three-dimensional position coordinates of the feature points in the image relative to the camera; acquiring the initial posture of the rigid body and its posture at the camera's shooting moment, both measured by the IMU, and determining the identity information of the image feature points; obtaining matched feature point pairs according to the identity information of the feature points; calculating rotation and translation information of the feature points in the image from the matched feature point pairs; and calculating the position information of the rigid body in the optical system from the rotation and translation information. The method overcomes the heavy computation and restricted application scenarios of existing feature matching algorithms, mitigates the poor spatial positioning accuracy that low camera image resolution imposes on computer-graphics tracking algorithms, makes the computer-graphics-based tracking algorithm and the IMU-based tracking algorithm complement each other's strengths, and improves the accuracy of rigid body tracking.

Description

Rigid body tracking method, device and system
Technical Field
The invention relates to the technical field of computers, in particular to a rigid body tracking method, a device and a system.
Background
Existing tracking techniques can be implemented in different ways on different physical hardware, mainly the following:
(1) Monocular and binocular RGB cameras, IR cameras, depth cameras, etc., based on computer graphics and image processing techniques.
(2) Positioning means based on laser, radar, electromagnetic waves, ultrasonic waves, and the like.
(3) Inertial sensing units such as an IMU.
Each of the above tracking techniques has advantages and disadvantages. Tracking algorithms based on computer graphics are the most widely used: they apply broadly and place modest demands on hardware, but their positioning accuracy suffers when image resolution is low. IMU-based tracking offers strong interference resistance and stable output, but the IMU drifts over time, so its positioning becomes inaccurate.
In addition, feature matching is a fundamental problem in computer vision: whenever two or more images are involved, corresponding features must be matched. Existing feature matching falls mainly into two types: texture-based feature matching algorithms, which are computationally expensive, and projective-invariant-based feature matching algorithms, which impose strict constraints on the application scenario.
Disclosure of Invention
The invention provides a rigid body tracking method, device and system to solve the following problems: the positioning accuracy of existing computer-graphics-based tracking algorithms is low; the time drift of the inertial measurement unit (IMU) makes its positioning inaccurate; and existing feature matching algorithms are computationally expensive and limited in their application scenarios.
According to an aspect of the present invention, there is provided a rigid body tracking method, the rigid body including a plurality of feature points, identity information of each feature point on the rigid body and three-dimensional position information with respect to a center of gravity of the rigid body being prestored, and an inertial measurement sensor IMU being provided on the rigid body, the method including:
acquiring an image acquired by a camera during the rigid body motion, and determining three-dimensional position coordinates of the characteristic points of the rigid body in the image relative to the camera;
acquiring, from the IMU, the attitude of the rigid body in the initial state and the attitude of the rigid body at the moment the camera captures the image;
determining the identity information of the feature points of the rigid body in the image according to the attitude of the rigid body in the initial state, the attitude of the camera at the shooting moment, the identity information of each feature point on the prestored rigid body and the three-dimensional position information relative to the gravity center of the rigid body;
matching the three-dimensional position coordinates of the feature points of the rigid body in the acquired image relative to the camera with the three-dimensional position information of each feature point on the pre-stored rigid body relative to the gravity center of the rigid body according to the identity information of the feature points of the rigid body in the image to obtain matched feature point pairs;
determining rotation and translation information of the feature points in the image relative to the initial posture according to the feature point pairs; and calculating the position information of the rigid body in the optical system of the camera according to the rotation and translation information.
Preferably, the method further comprises:
and correcting the posture of the rigid body at the shooting moment of the camera acquired by the IMU according to the rotation and translation information.
Preferably, after the determining three-dimensional position coordinates of the feature points of the rigid body in the image relative to the camera, the method further includes:
calculating a first relative distance between the feature points in the image according to the calculated three-dimensional position coordinates of the feature points relative to the camera;
calculating a second relative distance between all the feature points of the rigid body according to the prestored three-dimensional position information of each feature point relative to the center of gravity of the rigid body;
and matching the first relative distance with the second relative distance, and removing the pseudo feature points from the image feature points.
Preferably, the calculating the position information of the rigid body in the camera optical system according to the rotation and translation information comprises:
calculating three-dimensional position coordinates of all feature points on the rigid body relative to a camera according to the rotation and translation information;
and calculating the position information of the rigid body in the optical system of the camera according to the three-dimensional position coordinates of all the characteristic points relative to the camera.
According to another aspect of the present invention, there is provided an apparatus for tracking a rigid body including a plurality of feature points, on which an inertial measurement sensor IMU is provided, the apparatus comprising:
the storage unit is used for prestoring the identity information of each characteristic point on the rigid body and the three-dimensional position information relative to the gravity center of the rigid body;
the image characteristic point position coordinate determination unit is used for acquiring an image acquired by a camera during the rigid body motion and determining the three-dimensional position coordinate of the characteristic point of the rigid body in the image relative to the camera;
the rigid body posture acquisition unit is used for acquiring, from the IMU, the posture of the rigid body in the initial state and the posture of the rigid body at the camera's shooting moment;
the image characteristic point identity information determining unit is used for determining the identity information of the characteristic points of the rigid body in the image according to the attitude of the rigid body in the initial state, the attitude of the camera at the shooting moment, the identity information of each characteristic point on the prestored rigid body and the three-dimensional position information relative to the gravity center of the rigid body;
the image characteristic point pair matching unit is used for matching the three-dimensional position coordinates of the characteristic points of the rigid body in the acquired image relative to the camera with the three-dimensional position information of each characteristic point on the pre-stored rigid body relative to the gravity center of the rigid body according to the identity information of the characteristic points of the rigid body in the image to obtain matched characteristic point pairs;
a rigid body position information determining unit configured to determine rotation and translation information of the feature point in the image with respect to an initial posture, based on the feature point pair; and calculating the position information of the rigid body in the optical system of the camera according to the rotation and translation information.
Preferably, the apparatus further includes a rigid body posture correction unit;
and the rigid body posture correction unit is used for correcting the posture of the rigid body acquired by the IMU at the shooting moment of the camera according to the rotation and translation information.
Preferably, the apparatus further comprises: a pseudo feature point removing unit;
the pseudo feature point removing unit is used for calculating a first relative distance between feature points in the image according to the three-dimensional position coordinates of the feature points in the image obtained by calculation relative to the camera;
calculating a second relative distance between all the feature points of the rigid body according to the prestored three-dimensional position information of each feature point relative to the center of gravity of the rigid body;
and matching the first relative distance with the second relative distance, and removing the pseudo feature points in the image feature points.
Preferably, the rigid body position information determining unit is configured to calculate three-dimensional position coordinates of all feature points on the rigid body relative to a camera according to the rotation and translation information;
and calculating the position information of the rigid body in the optical system of the camera according to the three-dimensional position coordinates of all the characteristic points relative to the camera.
According to still another aspect of the present invention, there is provided a system for rigid body tracking, comprising a camera, a rigid body, and a control terminal, wherein an IMU is provided at a position of a center of gravity of the rigid body;
the camera is used for acquiring the image of the rigid body and sending the image of the rigid body to the control end;
the IMU is used for acquiring the posture of the rigid body in the initial state and the posture of the rigid body at the camera's shooting moment, and sending them to the control end;
the control end is used for acquiring images acquired by a camera during the rigid body motion and determining three-dimensional position coordinates of the characteristic points of the rigid body in the images relative to the camera; acquiring the attitude of the rigid body acquired by the IMU in the initial state and the attitude of the camera shooting moment corresponding to the rigid body; determining the identity information of the feature points of the rigid body in the image according to the attitude of the rigid body in the initial state, the attitude of the camera at the shooting moment, the identity information of each feature point on the prestored rigid body and the three-dimensional position information relative to the gravity center of the rigid body; matching the three-dimensional position coordinates of the feature points of the rigid body in the acquired image relative to the camera with the three-dimensional position information of each feature point on the pre-stored rigid body relative to the gravity center of the rigid body according to the identity information of the feature points of the rigid body in the image to obtain matched feature point pairs; determining rotation and translation information of the feature points in the image relative to the initial posture according to the feature point pairs; and calculating the position information of the rigid body in the optical system of the camera according to the rotation and translation information.
Preferably, the control end is further configured to correct the posture of the rigid body at the camera shooting time acquired by the IMU according to the rotation and translation information.
The invention has the beneficial effects that: according to the technical scheme, a tracking algorithm based on computer graphics is fused with a tracking algorithm based on an IMU (inertial measurement Unit), firstly, an image acquired by a camera during rigid body motion is acquired, and three-dimensional position coordinates of feature points of the rigid body in the image relative to the camera are determined; acquiring the posture of the rigid body in the initial state and the posture of the camera shooting time corresponding to the rigid body, which are acquired by the IMU, and determining the identity information of the characteristic points of the rigid body in the image according to the posture of the rigid body in the initial state, the posture of the camera shooting time, the identity information of each characteristic point on the prestored rigid body and the three-dimensional position information relative to the gravity center of the rigid body; according to the identity information of the feature points of the rigid body in the image, matching the three-dimensional position coordinates of the feature points of the rigid body in the image relative to a camera with the three-dimensional position information of each feature point on a prestored rigid body relative to the gravity center of the rigid body to obtain matched feature point pairs, and solving the problems of large calculated amount, large application scene limitation and the like of the existing feature matching algorithm;
secondly, determining rotation and translation information of the feature points in the image relative to the initial posture according to the matched feature point pairs; the position information of a rigid body in the optical system of the camera is calculated according to the rotation and translation information, so that the problem of low spatial positioning precision of a computer graphics tracking algorithm caused by low resolution of an image acquired by the camera is solved;
and finally, correcting the posture of the rigid body at the shooting moment of the camera acquired by the IMU according to the rotation and translation information, thereby solving the problem of time drift in the spatial positioning process of the IMU. According to the technical scheme, the rigid body position and the corrected posture in the camera optical system obtained through calculation are output, the rigid body tracking is achieved, the purpose of complementing the advantages of a tracking algorithm based on computer graphics and a tracking algorithm of an IMU is achieved, and the accuracy of rigid body tracking is improved.
Drawings
FIG. 1 is a flow chart of a method of rigid body tracking according to one embodiment of the present invention;
FIG. 2 is a flow chart of a method of rigid body tracking according to one embodiment of the present invention;
FIG. 3 is a schematic diagram of a rigid body tracking apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another rigid body tracking apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a system for rigid body tracking according to one embodiment of the present invention.
Detailed Description
The design concept of the invention is as follows: to avoid the respective defects of computer-graphics-based tracking and IMU-based tracking, to overcome the heavy computation and restricted application scenarios of existing feature matching algorithms, and to track the rigid body accurately, the two tracking algorithms are fused: matched feature point pairs are obtained, the position and posture of the rigid body in the optical system are calculated from those pairs, and the rigid-body posture obtained from the IMU is corrected using the posture in the optical system; finally, the rigid body's position in the camera optical system and its corrected posture are output.
Example one
Fig. 1 is a flow chart of a method of rigid body tracking according to an embodiment of the present invention, as shown in fig. 1,
before step S110, an inertial measurement sensor IMU is disposed on the rigid body, and identity information of each feature point on the rigid body and three-dimensional position information relative to the center of gravity of the rigid body are prestored;
in one embodiment of the present invention, the inertial measurement unit IMU is disposed at the position of the center of gravity of the rigid body, the pre-stored three-dimensional position information of each feature point relative to the center of gravity of the rigid body refers to the geometric three-dimensional position information of each feature point on the rigid body, and the three-dimensional position information of each feature point relative to the center of gravity of the rigid body includes the identity information (for example, specific numbers 1, 2, 3 · ·) of each feature point, the relative distance of each feature point relative to the center of gravity of the rigid body, and the relative direction of each feature point relative to the center of gravity of the rigid body. Once each feature point is set on the rigid body, the three-dimensional position information of each feature point with respect to the center of gravity of the rigid body is fixed. For example, if the rigid body is a virtual reality helmet and the feature point is an infrared light-emitting ball, the infrared light-emitting ball is disposed at the vertex of the virtual reality helmet, and the infrared light-emitting ball is always located at the vertex of the virtual reality helmet no matter how the virtual reality helmet moves.
In step S110, an image acquired by a camera during the rigid body motion is acquired, and three-dimensional position coordinates of feature points of the rigid body in the image relative to the camera are determined.
In one embodiment of the invention, images of the rigid body in motion are collected by a binocular camera, and the three-dimensional position coordinates of the feature points in the rigid body images relative to the camera are calculated using the principle of binocular imaging.
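The binocular imaging principle mentioned above can be sketched for a rectified pinhole stereo pair as follows. All parameter names and the values in the usage note are illustrative assumptions, not the patent's calibration.

```python
def triangulate(u_left, u_right, v, f, baseline, cx, cy):
    """Recover a 3D point in the camera frame from a rectified stereo pair.

    Assumes rectified images whose matched features share a row; u/v, f,
    cx, cy are in pixels, baseline is in meters. A minimal sketch of the
    standard disparity-to-depth relation, not the patent's exact method.
    """
    disparity = u_left - u_right          # horizontal pixel offset
    z = f * baseline / disparity          # depth from disparity
    x = (u_left - cx) * z / f             # back-project through the pinhole
    y = (v - cy) * z / f
    return (x, y, z)
```

For example, with an assumed focal length of 800 px, a 6 cm baseline, and a principal point at (640, 360), a feature at u_left=660, u_right=620, v=400 triangulates to a point 1.2 m in front of the camera.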
In step S120, the posture of the rigid body in the initial state and the posture of the rigid body at the moment the camera captures the image, both acquired by the IMU, are obtained.
In one embodiment of the present invention, the angular velocity and acceleration of the rigid body in the three-dimensional space are acquired by the IMU, and the posture of the rigid body is calculated from the angular velocity and acceleration. For example, the rigid body initial posture obtained by the IMU is Q0; and the posture of the rigid body corresponding to the shooting time of the camera acquired by the IMU is Q1.
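Calculating a posture from angular velocity, as described above, is typically done by integrating a quaternion. The sketch below is a hedged, first-order version; a real IMU filter would also fuse the accelerometer reading to bound drift, and the function name and step size are assumptions.

```python
import math

def integrate_gyro(q, omega, dt):
    """One first-order step of quaternion attitude integration.

    q = (w, x, y, z) unit quaternion; omega = (wx, wy, wz) in rad/s,
    body frame. Implements q_dot = 0.5 * q (x) (0, omega), then
    renormalizes. A sketch only: real IMU pipelines add accelerometer
    fusion to keep the long-term drift bounded.
    """
    wx, wy, wz = omega
    w, x, y, z = q
    dw = 0.5 * (-x * wx - y * wy - z * wz)
    dx = 0.5 * ( w * wx + y * wz - z * wy)
    dy = 0.5 * ( w * wy - x * wz + z * wx)
    dz = 0.5 * ( w * wz + x * wy - y * wx)
    w, x, y, z = w + dw * dt, x + dx * dt, y + dy * dt, z + dz * dt
    n = math.sqrt(w * w + x * x + y * y + z * z)  # renormalize to unit length
    return (w / n, x / n, y / n, z / n)
```

Integrating a constant angular velocity of pi/2 rad/s about z for one second, in small steps, yields approximately the quaternion for a 90-degree rotation about z, which is the kind of Q0-to-Q1 relation the embodiment relies on.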
In step S130, the identity information of the feature points of the rigid body in the image is determined according to the pose of the rigid body in the initial state, the pose of the camera at the shooting time, the identity information of each feature point on the pre-stored rigid body, and the three-dimensional position information relative to the gravity center of the rigid body.
In step S140, according to the identity information of the feature points of the rigid body in the image, matching the three-dimensional position coordinates of the feature points of the rigid body in the acquired image relative to the camera with the three-dimensional position information of each feature point on the pre-stored rigid body relative to the gravity center of the rigid body, so as to obtain matched feature point pairs.
In step S150, determining rotation and translation information of the feature points in the image with respect to the initial pose according to the feature point pairs; and calculating the position information of the rigid body in the optical system of the camera according to the rotation and translation information.
In one embodiment of the invention, rotation and translation information of the feature points in the image relative to the initial pose is calculated by utilizing a perspective N-point positioning PNP algorithm according to the feature point pairs.
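The embodiment above names a perspective-N-point (PNP) solver. Because this method actually has three-dimensional coordinates on both sides of each matched pair (triangulated camera-frame points and prestored model points), a closely related alternative is 3D-3D rigid alignment; the sketch below uses the Kabsch algorithm under that assumption, and is not the patent's specified solver.

```python
import numpy as np

def rigid_align(model_pts, camera_pts):
    """Recover R, t such that camera_pts ~= R @ model_pts + t.

    Kabsch algorithm on matched 3D-3D point pairs (at least three
    non-collinear pairs required). A hedged stand-in for the PNP step:
    it applies here because both coordinate sets are three-dimensional.
    """
    P = np.asarray(model_pts, dtype=float)   # N x 3 model coordinates
    Q = np.asarray(camera_pts, dtype=float)  # N x 3 camera coordinates
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Given four or more matched pairs, the recovered rotation and translation are exactly the "rotation and translation information of the feature points relative to the initial posture" that the subsequent steps consume.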
In one implementation of the invention, three-dimensional position coordinates of all feature points on the rigid body relative to a camera are calculated according to the rotation and translation information;
and calculating the position information of the rigid body in the optical system of the camera according to the three-dimensional position coordinates of all the characteristic points relative to the camera.
It should be noted that the image captured by the camera contains only some of the feature points on the rigid body. Once the rotation and translation of the rigid body in the optical system are known, the three-dimensional position coordinates of all the feature points relative to the camera can be computed, and the position of the rigid body in the optical system is then calculated from all of them, which makes the position information more accurate. Compared with existing computer-graphics-based tracking, the positioning accuracy of the rigid body is higher for the following reason: existing computer-graphics-based tracking can observe only the subset of feature points visible to the camera, so it can compute three-dimensional position coordinates relative to the camera only for that subset. In addition, the technical scheme of the invention avoids the loss of spatial positioning accuracy caused by the low resolution of the image captured by the camera.
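The step just described — mapping every prestored feature point into the camera frame with the recovered rotation and translation, then deriving the rigid body's position from all of them — can be sketched as below. Taking the centroid as the position is one plausible reading, and the names are hypothetical.

```python
def locate_rigid_body(model_points, R, t):
    """Map all prestored feature points into the camera frame and return
    (points_in_camera_frame, centroid).

    R is a 3x3 rotation as nested lists, t a length-3 translation. The
    centroid over ALL M points (including those not visible in the image)
    is used here as the rigid body's position in the camera optical
    system; this choice is an assumption for illustration.
    """
    cam_pts = []
    for (px, py, pz) in model_points:
        cam_pts.append(tuple(
            R[i][0] * px + R[i][1] * py + R[i][2] * pz + t[i]
            for i in range(3)
        ))
    m = len(cam_pts)
    centroid = tuple(sum(p[i] for p in cam_pts) / m for i in range(3))
    return cam_pts, centroid
```

Averaging over all M points rather than only the visible subset is exactly why the text above claims higher positioning accuracy than a camera-only pipeline.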
In one embodiment of the present invention, fig. 2 is a flowchart of a method for rigid body tracking according to one embodiment of the present invention, as shown in fig. 2,
in step S160, correcting the posture of the rigid body at the camera shooting time acquired by the IMU according to the rotation and translation information;
it should be noted that the rotation and translation information of the feature point in the image relative to the initial posture refers to the posture of the rigid body in the optical system of the camera, the IMU has time drift in practical application to cause positioning inaccuracy, and the rotation and translation information obtained by calculation is used to correct the posture of the rigid body acquired by the IMU, so that the posture of the rigid body finally output is more accurate, and further, the rigid body is accurately tracked.
And finally, outputting the position of the rigid body in the camera optical system and the corrected rigid body posture to realize the tracking of the rigid body.
As can be seen from the methods shown in fig. 1 and fig. 2, according to the technical solution of the present invention, a tracking algorithm based on computer graphics is fused with a tracking algorithm based on an IMU, and first, an image acquired by a camera during rigid body motion is obtained, and three-dimensional position coordinates of feature points of the rigid body in the image relative to the camera are determined; acquiring the posture of the rigid body in the initial state and the posture of the camera shooting time corresponding to the rigid body, which are acquired by the IMU, and determining the identity information of the characteristic points of the rigid body in the image according to the posture of the rigid body in the initial state, the posture of the camera shooting time, the identity information of each characteristic point on the prestored rigid body and the three-dimensional position information relative to the gravity center of the rigid body; according to the identity information of the feature points of the rigid body in the image, matching the three-dimensional position coordinates of the feature points of the rigid body in the image relative to a camera with the three-dimensional position information of each feature point on a prestored rigid body relative to the gravity center of the rigid body to obtain matched feature point pairs, and solving the problems of large calculated amount, large application scene limitation and the like of the existing feature matching algorithm;
secondly, determining rotation and translation information of the feature points in the image relative to the initial posture according to the matched feature point pairs; the position information of a rigid body in the optical system of the camera is calculated according to the rotation and translation information, so that the problem of low spatial positioning precision of a computer graphics tracking algorithm caused by low resolution of an image acquired by the camera is solved;
and finally, correcting the posture of the rigid body at the shooting moment of the camera acquired by the IMU according to the rotation and translation information, thereby solving the problem of time drift in the spatial positioning process of the IMU. According to the technical scheme, the rigid body position and the corrected posture in the camera optical system obtained through calculation are output, the rigid body tracking is achieved, the purpose of complementing the advantages of a tracking algorithm based on computer graphics and a tracking algorithm of an IMU is achieved, and the accuracy of rigid body tracking is improved.
In order to make the solution of the present invention clearer, a specific example is explained below. Assuming that there are M characteristic points on the rigid body,
s11, pre-storing the identity information of M feature points on the rigid body and the three-dimensional position information of the M feature points relative to the gravity center of the rigid body, for example, a first feature point (a1, b1, c1), a second feature point (a2, b2, c2), a third feature point (a3, b3, c3), a fourth feature point (a4, b4, c4), a fifth feature point (a5, b5, c5), a sixth feature point (a6, a6, a6), a seventh feature point (a7, b7, c7), and an eighth feature point (a8, b8, c 8).
S12, the IMU is arranged at the center of gravity of the rigid body, so the posture of the rigid body can be acquired in real time. A binocular camera captures images of the rigid body in motion; N (N < M) feature points are identified in the image, and their three-dimensional position coordinates relative to the binocular camera are calculated using the binocular imaging principle, for example, (x1, y1, z1), (x2, y2, z2), (x3, y3, z3), (x4, y4, z4) and (x5, y5, z5);
s13, calculating a first relative position distance between the N feature points according to the three-dimensional position coordinates of the N feature points relative to the binocular camera, for example, the first relative distance includes a1, B2, A3, a4 and a 5; meanwhile, calculating a second relative position distance between all the feature points of the rigid body according to the three-dimensional position information of each feature point on the pre-stored rigid body relative to the gravity center of the rigid body, wherein the second relative distance comprises A1, A2, A3, A4, A5, A6, A7 and A8; and matching the first relative distance with the second relative distance to remove the pseudo feature point B2 in the image feature points, thereby achieving the purpose of removing noise points and improving the rigid body positioning accuracy.
S14, assuming that N1 feature points remain after the pseudo feature points are removed in step S13, the three-dimensional position coordinates of the N1 feature points relative to the camera at this time are (x1, y1, z1), (x3, y3, z3), (x4, y4, z4) and (x5, y5, z5). The IMU provides the posture Q1 of the rigid body at this time and the posture Q0 of the rigid body in the initial state; the rotation and translation information of the rigid body is obtained from Q0 and Q1, and the three-dimensional position coordinates of the N1 feature points relative to the camera are restored, according to this rotation and translation information, to their three-dimensional position coordinates in the rigid body initial posture Q0. For example, applying the rotation matrix and translation vector to (x1, y1, z1) restores the first feature point in the image to its coordinates (x01, y01, z01) in the initial posture Q0; similarly, (x3, y3, z3) is restored to (x03, y03, z03), (x4, y4, z4) is restored to (x04, y04, z04), and (x5, y5, z5) is restored to (x05, y05, z05). Using spatial position relationships (up, down, left, right, and the like), it is then determined that the first feature point in the image is the 5th feature point on the rigid body, the third feature point in the image is the 4th feature point on the rigid body, the fourth feature point in the image is the 3rd feature point on the rigid body, and the fifth feature point in the image is the 2nd feature point on the rigid body.
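Step S14 — undoing the rigid body's current rotation and translation so that camera-frame coordinates are expressed in the initial posture Q0, then reading off feature identities — can be sketched as follows. A nearest-neighbour assignment stands in for the patent's "spatial position relationships (up, down, left, right)", and the function name is an assumption:

```python
import numpy as np

def identify_features(cam_pts, R, T, model_pts):
    """Sketch of step S14: restore camera-frame feature coordinates to the
    rigid body's initial posture Q0 and assign each one the identity of the
    nearest pre-stored model point.

    cam_pts:   (N, 3) camera-frame coordinates, where cam = R @ model + T
               (R, T derived from the IMU attitudes Q0 and Q1).
    model_pts: (M, 3) coordinates in the initial posture, relative to the
               gravity center.
    Returns one model index (feature ID) per image point.
    """
    # row-wise (cam - T) @ R equals R.T @ (cam - T), i.e. the inverse motion
    restored = (cam_pts - T) @ R
    ids = []
    for p in restored:
        d = np.linalg.norm(model_pts - p, axis=1)
        ids.append(int(np.argmin(d)))
    return ids
```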
S15, matching the first characteristic point in the image with the 5 th characteristic point on the rigid body to obtain matched characteristic point pairs [ (x1, y1, z1), (a5, b5, c5) ]; matching the third characteristic point in the image with the 4 th characteristic point on the rigid body to obtain matched characteristic point pairs [ (x3, y3, z3), (a4, b4, c4) ]; matching the fourth characteristic point in the image with the 3 rd characteristic point on the rigid body to obtain a matched characteristic point pair [ (x4, y4, z4), (a3, b3, c3) ]; and matching the fifth characteristic point in the image with the 2 nd characteristic point on the rigid body to obtain matched characteristic point pairs [ (x5, y5, z5), (a2, b2, c2) ].
S16, inputting the matched feature point pairs [(x1, y1, z1), (a5, b5, c5)], [(x3, y3, z3), (a4, b4, c4)], [(x4, y4, z4), (a3, b3, c3)] and [(x5, y5, z5), (a2, b2, c2)] into a PNP algorithm, which calculates the rotation and translation information of the feature points in the image relative to the initial posture Q0; calculating the three-dimensional position coordinates, in the rigid body initial posture Q0, of the remaining M-N1 feature points on the rigid body according to the pre-stored three-dimensional position information of each feature point relative to the gravity center of the rigid body; converting these coordinates, according to the rotation and translation information, into the three-dimensional position coordinates of the remaining M-N1 feature points relative to the camera at the shooting moment; and finally calculating the position information of the rigid body in the camera optical system from the three-dimensional position coordinates of all the feature points relative to the camera.
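Step S16 can be sketched as follows. The patent names a PNP algorithm, but since both sides of each matched pair here are 3D coordinates, the rotation and translation can equivalently be recovered in closed form with the Kabsch (SVD) rigid alignment — used below as a stand-in, not as the patent's exact solver. The second helper then maps every pre-stored feature point, including the M-N1 points the camera did not see, into the camera frame at the shooting instant:

```python
import numpy as np

def rigid_transform(model_pts, cam_pts):
    """Closed-form (Kabsch) estimate of R, T such that cam ≈ R @ model + T,
    from matched 3D–3D feature point pairs."""
    mc, cc = model_pts.mean(axis=0), cam_pts.mean(axis=0)
    H = (model_pts - mc).T @ (cam_pts - cc)    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = cc - R @ mc
    return R, T

def complete_points(R, T, all_model_pts):
    """Map every pre-stored feature point (seen or unseen by the camera)
    into the camera frame at the shooting instant."""
    return (R @ all_model_pts.T).T + T
```

Because the model coordinates are stored relative to the gravity center, the recovered T is directly the gravity-center position of the rigid body in the camera optical system.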
S17, correcting the posture of the rigid body obtained by the IMU at the shooting moment of the camera according to the rotation and translation information;
and S18, outputting the position of the rigid body in the camera optical system and the corrected rigid body posture, and realizing the tracking of the rigid body.
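The correction of S17 can be sketched as follows. Treating the drift-free rotation recovered optically in S16 as the corrected attitude at the shooting instant is one simple correction policy, assumed here because the patent does not fix the blending rule; the matrix-to-quaternion conversion (w, x, y, z) is standard:

```python
import numpy as np

def correct_attitude(R_optical):
    """Sketch of step S17: convert the optically recovered rotation matrix
    into a quaternion (w, x, y, z) that replaces the drifting IMU attitude.
    This conversion branch assumes a rotation angle below 180 degrees
    (w > 0); a full implementation would handle the other trace cases.
    """
    m = R_optical
    w = np.sqrt(max(0.0, 1.0 + m[0, 0] + m[1, 1] + m[2, 2])) / 2.0
    x = (m[2, 1] - m[1, 2]) / (4.0 * w)
    y = (m[0, 2] - m[2, 0]) / (4.0 * w)
    z = (m[1, 0] - m[0, 1]) / (4.0 * w)
    return np.array([w, x, y, z])
```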
Example two
Fig. 3 is a schematic structural diagram of an apparatus for rigid body tracking according to an embodiment of the present invention, and as shown in fig. 3, the apparatus for rigid body tracking, in which an inertial measurement unit IMU is disposed on a rigid body including a plurality of feature points, includes:
the storage unit 210 is configured to pre-store identity information of each feature point on the rigid body and three-dimensional position information of each feature point relative to the center of gravity of the rigid body;
an image feature point position coordinate determining unit 220, configured to obtain an image acquired by a camera during the rigid body motion, and determine a three-dimensional position coordinate of a feature point of the rigid body in the image relative to the camera;
a rigid body posture obtaining unit 230, configured to obtain the posture of the rigid body in the initial state and the posture of the rigid body at the camera shooting moment, both acquired by the IMU;
an image feature point identity information determining unit 240, configured to determine identity information of the feature points of the rigid body in the image according to the posture of the rigid body in the initial state, the posture of the rigid body at the camera shooting moment, and the pre-stored identity information of each feature point on the rigid body and three-dimensional position information relative to the center of gravity of the rigid body;
an image feature point pair matching unit 250, configured to match, according to the identity information of the feature points of the rigid body in the image, the three-dimensional position coordinates of the feature points of the rigid body in the acquired image relative to the camera with three-dimensional position information of each feature point on a pre-stored rigid body relative to the center of gravity of the rigid body, so as to obtain matched feature point pairs;
a rigid body position information determining unit 260 for determining rotation and translation information of the feature points in the image with respect to the initial posture, based on the feature point pairs; and calculating the position information of the rigid body in the optical system of the camera according to the rotation and translation information.
In an embodiment of the present invention, fig. 4 is a schematic structural diagram of another rigid body tracking apparatus according to an embodiment of the present invention. As shown in fig. 4, the apparatus further includes a rigid body posture correcting unit 270, configured to correct the posture of the rigid body acquired by the IMU at the camera shooting time according to the rotation and translation information, and to output the position of the rigid body in the camera optical system and the corrected rigid body posture, thereby realizing the tracking of the rigid body.
Therefore, the technical scheme of the present invention fuses a tracking algorithm based on computer graphics with a tracking algorithm based on an IMU. First, an image acquired by a camera during rigid body motion is obtained, and the three-dimensional position coordinates of the feature points of the rigid body in the image relative to the camera are determined; the posture of the rigid body in the initial state and the posture of the rigid body at the camera shooting moment are acquired by the IMU, and the identity information of the feature points of the rigid body in the image is determined from these two postures together with the pre-stored identity information of each feature point on the rigid body and the three-dimensional position information relative to the gravity center of the rigid body; the three-dimensional position coordinates of the feature points in the image relative to the camera are then matched, according to this identity information, with the pre-stored three-dimensional position information of each feature point relative to the gravity center of the rigid body to obtain matched feature point pairs, which overcomes the large computation amount, restricted application scenarios and other problems of existing feature matching algorithms.
Secondly, the rotation and translation information of the feature points in the image relative to the initial posture is determined from the matched feature point pairs, and the position information of the rigid body in the camera optical system is calculated from this rotation and translation information, which overcomes the low spatial positioning accuracy that a computer graphics tracking algorithm suffers when the resolution of the camera image is low.
Finally, the posture of the rigid body at the camera shooting moment acquired by the IMU is corrected according to the rotation and translation information, which overcomes the time drift of IMU spatial positioning. The calculated rigid body position in the camera optical system and the corrected posture are output, realizing rigid body tracking, complementing the advantages of the computer-graphics-based and IMU-based tracking algorithms, and improving the accuracy of rigid body tracking.
As also shown in fig. 4, the apparatus 200 further comprises: a pseudo feature point removing unit 280;
the pseudo feature point removing unit 280 is configured to calculate first relative distances between the feature points in the image according to the calculated three-dimensional position coordinates of the feature points in the image relative to the camera; to calculate second relative distances between all the feature points of the rigid body according to the pre-stored three-dimensional position information of each feature point on the rigid body relative to the gravity center of the rigid body; and to match the first relative distances with the second relative distances to remove the pseudo feature points from the image feature points, thereby avoiding the reduction in rigid body positioning accuracy caused by spurious points among the feature points in the image.
In an embodiment of the present invention, the image feature point identity information determining unit 240 is configured to obtain, by using the IMU, the posture of the rigid body at the camera shooting time, and to restore, according to this posture, the calculated three-dimensional position coordinates of the feature points in the image relative to the camera to their three-dimensional position coordinates in the initial posture of the rigid body; and to determine the identity information of the feature points in the image (for example, the IDs of the feature points) by using spatial position relationships (for example, up, down, left and right) according to the three-dimensional position coordinates of the feature points in the initial posture of the rigid body and the pre-stored three-dimensional position information of each feature point on the rigid body relative to the gravity center of the rigid body. Acquiring the identity information of the feature points in the image allows them to be accurately matched with the pre-stored feature points on the rigid body.
In an embodiment of the present invention, the rigid body position information determining unit 260 is configured to calculate the three-dimensional position coordinates of all the feature points on the rigid body relative to the camera according to the rotation and translation information, and to calculate the position information of the rigid body in the camera optical system from these coordinates. Because the position of the rigid body is calculated from the three-dimensional position coordinates of all the feature points relative to the camera, rather than from only the subset of feature points visible to the binocular camera as in existing computer-graphics-based tracking, the result is more accurate; this overcomes the low spatial positioning accuracy caused by low image resolution and realizes accurate tracking of the rigid body.
It should be noted that the working processes of the apparatuses shown in fig. 3 and fig. 4 are the same as the implementation steps of the embodiments of the method shown in fig. 1 and fig. 2, and the description of the same parts is omitted.
EXAMPLE III
Fig. 5 is a schematic diagram of a rigid body tracking system according to an embodiment of the present invention, and as shown in fig. 5, the system 30 includes a camera 310, a rigid body 320 (in this embodiment, the rigid body refers to a head-mounted display device), and a control end 330, wherein an IMU340 is disposed on the rigid body (in this embodiment, the IMU is disposed at a position of a center of gravity of the rigid body); in practical applications, the rigid body may be any virtual reality device.
The camera 310 is configured to collect an image of the rigid body 320, and send the image of the rigid body 320 to the control end 330;
the IMU340 is configured to acquire the posture of the rigid body 320 in the initial state and the posture of the rigid body 320 at the shooting time of the camera 310, and to send these postures to the control end 330;
the control end 330 is configured to obtain the image, acquired by the camera 310, of the rigid body 320 during motion, and determine the three-dimensional position coordinates of the feature points of the rigid body 320 in the image relative to the camera 310; acquire the posture of the rigid body 320 in the initial state and the posture of the rigid body 320 at the shooting moment of the camera 310, both acquired by the IMU340; determine the identity information of the feature points of the rigid body 320 in the image according to these two postures, the pre-stored identity information of each feature point on the rigid body 320 and the three-dimensional position information relative to the gravity center of the rigid body 320; match, according to the identity information of the feature points of the rigid body 320 in the image, the three-dimensional position coordinates of the feature points in the image relative to the camera 310 with the pre-stored three-dimensional position information of each feature point on the rigid body 320 relative to the gravity center of the rigid body 320 to obtain matched feature point pairs; determine the rotation and translation information of the feature points in the image relative to the initial posture according to the feature point pairs; and calculate the position information of the rigid body 320 in the optical system of the camera 310 according to the rotation and translation information.
In an embodiment of the present invention, the control end 330 is further configured to correct, according to the rotation and translation information, the posture of the rigid body 320 at the shooting time of the camera 310 acquired by the IMU340, and to output the position of the rigid body 320 in the optical system of the camera 310 and the corrected posture of the rigid body 320, thereby realizing the tracking of the rigid body 320.
In one embodiment of the present invention, the camera 310 communicates with the control end 330 in a wired or wireless manner; the IMU340 communicates with the control end 330 in a wired or wireless manner. In practical application, a suitable communication mode can be selected according to actual needs.
In an embodiment of the present invention, the control end 330 is further configured to calculate first relative distances between the feature points in the image according to the calculated three-dimensional position coordinates of the feature points in the image relative to the camera 310;
calculate second relative distances between all the feature points of the rigid body according to the pre-stored three-dimensional position information of each feature point on the rigid body relative to the gravity center of the rigid body;
and match the first relative distances with the second relative distances to remove the pseudo feature points from the image feature points, so that spurious points among the feature points in the image do not degrade the positioning accuracy of the rigid body.
It should be noted that the working process of the system shown in fig. 5 is the same as the implementation steps of the embodiments of the method shown in fig. 1 and fig. 2, and the description of the same parts is omitted.
In summary, the technical scheme of the present invention fuses a computer-graphics-based tracking algorithm with an IMU-based tracking algorithm. First, an image acquired by a camera during rigid body motion is obtained, and the three-dimensional position coordinates of the feature points of the rigid body in the image relative to the camera are determined; the posture of the rigid body in the initial state and the posture of the rigid body at the camera shooting moment are acquired by the IMU, and the identity information of the feature points of the rigid body in the image is determined from these two postures together with the pre-stored identity information of each feature point on the rigid body and the three-dimensional position information relative to the gravity center of the rigid body; the three-dimensional position coordinates of the feature points in the image relative to the camera are then matched, according to this identity information, with the pre-stored three-dimensional position information of each feature point relative to the gravity center of the rigid body to obtain matched feature point pairs, which overcomes the large computation amount, restricted application scenarios and other problems of existing feature matching algorithms.
Secondly, the rotation and translation information of the feature points in the image relative to the initial posture is determined from the matched feature point pairs, and the position information of the rigid body in the camera optical system is calculated from this rotation and translation information, which overcomes the low spatial positioning accuracy that a computer graphics tracking algorithm suffers when the resolution of the camera image is low.
Finally, the posture of the rigid body at the camera shooting moment acquired by the IMU is corrected according to the rotation and translation information, which overcomes the time drift of IMU spatial positioning. The calculated rigid body position in the camera optical system and the corrected posture are output, realizing rigid body tracking, complementing the advantages of the computer-graphics-based and IMU-based tracking algorithms, and improving the accuracy of rigid body tracking.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method for tracking a rigid body, the rigid body including a plurality of feature points, identity information of each feature point on the rigid body and three-dimensional position information with respect to a center of gravity of the rigid body being pre-stored, an inertial measurement unit (IMU) being provided on the rigid body, the method comprising:
acquiring an image acquired by a camera during the rigid body motion, and determining three-dimensional position coordinates of the characteristic points of the rigid body in the image relative to the camera;
acquiring the attitude of the rigid body acquired by the IMU in the initial state and the attitude of the camera shooting moment corresponding to the rigid body;
determining the identity information of the feature points of the rigid body in the image according to the attitude of the rigid body in the initial state, the attitude of the camera at the shooting moment, the identity information of each feature point on the prestored rigid body and the three-dimensional position information relative to the gravity center of the rigid body;
matching the three-dimensional position coordinates of the feature points of the rigid body in the acquired image relative to the camera with the three-dimensional position information of each feature point on the pre-stored rigid body relative to the gravity center of the rigid body according to the identity information of the feature points of the rigid body in the image to obtain matched feature point pairs;
determining rotation and translation information of the feature points in the image relative to the initial posture according to the feature point pairs; and calculating the position information of the rigid body in the optical system of the camera according to the rotation and translation information.
2. The method of rigid body tracking according to claim 1, further comprising:
and correcting the posture of the rigid body at the shooting moment of the camera acquired by the IMU according to the rotation and translation information.
3. The method for rigid body tracking according to claim 1, wherein the obtaining an image acquired by the camera during the rigid body motion and determining three-dimensional position coordinates of the feature points of the rigid body in the image relative to the camera comprises:
calculating a first relative distance between the feature points in the image according to the three-dimensional position coordinates of the feature points in the image obtained by calculation relative to the camera;
calculating a second relative position distance between all the characteristic points of the rigid body according to the three-dimensional position information of each characteristic point on the prestored rigid body relative to the gravity center of the rigid body;
and matching the first relative distance with the second relative distance, and removing the pseudo feature points in the image feature points.
4. The method of rigid body tracking according to claim 1, wherein said calculating position information of the rigid body in the camera optical system from the rotation and translation information comprises:
calculating three-dimensional position coordinates of all feature points on the rigid body relative to a camera according to the rotation and translation information;
and calculating the position information of the rigid body in the optical system of the camera according to the three-dimensional position coordinates of all the characteristic points relative to the camera.
5. An apparatus for rigid body tracking, the rigid body containing a plurality of feature points, an inertial measurement unit (IMU) disposed on the rigid body, the apparatus comprising:
the storage unit is used for prestoring the identity information of each characteristic point on the rigid body and the three-dimensional position information relative to the gravity center of the rigid body;
the image characteristic point position coordinate determination unit is used for acquiring an image acquired by a camera during the rigid body motion and determining the three-dimensional position coordinate of the characteristic point of the rigid body in the image relative to the camera;
the rigid body posture acquisition unit is used for acquiring the posture of the IMU in the initial state of the rigid body and the posture of the camera shooting moment corresponding to the rigid body;
the image characteristic point identity information determining unit is used for determining the identity information of the characteristic points of the rigid body in the image according to the attitude of the rigid body in the initial state, the attitude of the camera at the shooting moment, the identity information of each characteristic point on the prestored rigid body and the three-dimensional position information relative to the gravity center of the rigid body;
the image characteristic point pair matching unit is used for matching the three-dimensional position coordinates of the characteristic points of the rigid body in the acquired image relative to the camera with the three-dimensional position information of each characteristic point on the pre-stored rigid body relative to the gravity center of the rigid body according to the identity information of the characteristic points of the rigid body in the image to obtain matched characteristic point pairs;
a rigid body position information determining unit configured to determine rotation and translation information of the feature point in the image with respect to an initial posture, based on the feature point pair; and calculating the position information of the rigid body in the optical system of the camera according to the rotation and translation information.
6. The apparatus for rigid body tracking according to claim 5, further comprising a rigid body posture correction unit;
and the rigid body posture correction unit is used for correcting the posture of the rigid body acquired by the IMU at the shooting moment of the camera according to the rotation and translation information.
7. The apparatus of claim 5, wherein the apparatus further comprises: a pseudo feature point removing unit;
the pseudo feature point removing unit is used for calculating a first relative distance between feature points in the image according to the three-dimensional position coordinates of the feature points in the image obtained by calculation relative to the camera;
calculating a second relative position distance between all the characteristic points of the rigid body according to the three-dimensional position information of each characteristic point on the prestored rigid body relative to the gravity center of the rigid body;
and matching the first relative distance with the second relative distance, and removing the pseudo feature points in the image feature points.
8. The apparatus of claim 5,
the rigid body position information determining unit is used for calculating three-dimensional position coordinates of all the characteristic points on the rigid body relative to the camera according to the rotation and translation information;
and calculating the position information of the rigid body in the optical system of the camera according to the three-dimensional position coordinates of all the characteristic points relative to the camera.
9. A rigid body tracking system is characterized by comprising a camera, a rigid body and a control end, wherein an IMU is arranged on the rigid body;
the camera is used for acquiring the image of the rigid body and sending the image of the rigid body to the control end;
the IMU is used for acquiring the posture of the rigid body in the initial state and the posture of the camera shooting moment corresponding to the rigid body and sending the postures to the control end;
the control end is used for acquiring images acquired by a camera during the rigid body motion and determining three-dimensional position coordinates of the characteristic points of the rigid body in the images relative to the camera; acquiring the attitude of the rigid body acquired by the IMU in the initial state and the attitude of the camera shooting moment corresponding to the rigid body; determining the identity information of the feature points of the rigid body in the image according to the attitude of the rigid body in the initial state, the attitude of the camera at the shooting moment, the pre-stored identity information of each feature point on the rigid body and the three-dimensional position information relative to the gravity center of the rigid body; matching the three-dimensional position coordinates of the feature points of the rigid body in the acquired image relative to the camera with the three-dimensional position information of each feature point on the pre-stored rigid body relative to the gravity center of the rigid body according to the identity information of the feature points of the rigid body in the image to obtain matched feature point pairs; determining rotation and translation information of the feature points in the image relative to the initial posture according to the feature point pairs; and calculating the position information of the rigid body in the optical system of the camera according to the rotation and translation information.
10. The system of claim 9, wherein the control end is further configured to correct the attitude of the rigid body, as acquired by the IMU at the camera's shooting time, according to the rotation and translation information.
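Claim 10 leaves the correction scheme open. One common choice — an assumption for illustration, not the patent's stated method — is a complementary-filter step that rotates the IMU attitude a fraction of the way toward the vision-derived rotation; the `gain` value and the rotation-matrix representation are likewise assumptions:

```python
import numpy as np

def correct_attitude(R_imu, R_vis, gain=0.1):
    """Nudge the IMU attitude toward the vision-derived rotation.

    Computes the residual rotation R_err = R_vis @ R_imu.T, scales its
    rotation angle by `gain`, and applies that fraction back onto R_imu.
    gain=1.0 replaces the IMU attitude with the vision estimate.
    """
    R_err = R_vis @ R_imu.T
    cos_a = np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.arccos(cos_a)
    if angle < 1e-9:
        return R_imu                      # already agreeing; nothing to do
    axis = np.array([R_err[2, 1] - R_err[1, 2],
                     R_err[0, 2] - R_err[2, 0],
                     R_err[1, 0] - R_err[0, 1]]) / (2.0 * np.sin(angle))
    # Rodrigues' formula for the scaled correction rotation
    a = gain * angle
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R_step = np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)
    return R_step @ R_imu
```

This damps IMU gyroscope drift with the (slower but drift-free) optical measurement; in a production filter the gain would typically come from an error-state Kalman filter rather than a fixed constant.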
CN201710392600.0A 2017-05-27 2017-05-27 Rigid body tracking method, device and system Active CN107316319B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710392600.0A CN107316319B (en) 2017-05-27 2017-05-27 Rigid body tracking method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710392600.0A CN107316319B (en) 2017-05-27 2017-05-27 Rigid body tracking method, device and system

Publications (2)

Publication Number Publication Date
CN107316319A CN107316319A (en) 2017-11-03
CN107316319B true CN107316319B (en) 2020-07-10

Family

ID=60181521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710392600.0A Active CN107316319B (en) 2017-05-27 2017-05-27 Rigid body tracking method, device and system

Country Status (1)

Country Link
CN (1) CN107316319B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110383336B (en) * 2017-11-15 2022-11-01 深圳市瑞立视多媒体科技有限公司 Rigid body configuration method, device, terminal equipment and computer storage medium
CN110544278B (en) * 2018-05-29 2022-09-16 杭州海康机器人技术有限公司 Rigid body motion capture method and device and AGV pose capture system
CN111383282B (en) * 2018-12-29 2023-12-01 杭州海康威视数字技术股份有限公司 Pose information determining method and device
CN110393165B (en) * 2019-07-11 2021-06-25 浙江大学宁波理工学院 Open sea aquaculture net cage bait feeding method based on automatic bait feeding boat
CN110910423B (en) * 2019-11-15 2022-08-23 小狗电器互联网科技(北京)股份有限公司 Target tracking method and storage medium
CN110956106B (en) * 2019-11-20 2023-10-10 广州方硅信息技术有限公司 Live broadcast on-demand processing method, device, storage medium and equipment
CN113984051B (en) * 2020-04-30 2024-11-12 深圳市瑞立视多媒体科技有限公司 Method, device, equipment and storage medium for fusion of IMU and rigid body posture
CN112508992B (en) * 2020-12-11 2022-04-19 深圳市瑞立视多媒体科技有限公司 Method, device and equipment for tracking rigid body position information
CN112465857B (en) * 2020-12-11 2024-08-09 深圳市瑞立视多媒体科技有限公司 Method for tracking rigid body position information, device, equipment and storage medium thereof
CN112433629B (en) * 2021-01-28 2021-06-08 深圳市瑞立视多媒体科技有限公司 Rigid body posture determination method and device of double-light-ball interactive pen and computer equipment
CN113627261B (en) * 2021-07-12 2024-10-11 深圳市瑞立视多媒体科技有限公司 Method for recovering correct pose of head rigid body, device, equipment and storage medium thereof

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1711516A (en) * 2002-11-07 2005-12-21 奥林巴斯株式会社 Motion detection apparatus
US7925049B2 (en) * 2006-08-15 2011-04-12 Sri International Stereo-based visual odometry method and system
CN105931275A (en) * 2016-05-23 2016-09-07 北京暴风魔镜科技有限公司 Monocular and IMU fused stable motion tracking method and device based on mobile terminal
CN105931272A (en) * 2016-05-06 2016-09-07 上海乐相科技有限公司 Method and system for tracking object in motion
CN106600627A (en) * 2016-12-07 2017-04-26 成都通甲优博科技有限责任公司 Rigid body motion capturing method and system based on mark point
CN106595640A (en) * 2016-12-27 2017-04-26 天津大学 Moving-base-object relative attitude measuring method based on dual-IMU-and-visual fusion and system

Also Published As

Publication number Publication date
CN107316319A (en) 2017-11-03

Similar Documents

Publication Publication Date Title
CN107316319B (en) Rigid body tracking method, device and system
CN106643699B (en) Space positioning device and positioning method in virtual reality system
CN107223269B (en) Three-dimensional scene positioning method and device
JP6008397B2 (en) AR system using optical see-through HMD
US10347029B2 (en) Apparatus for measuring three dimensional shape, method for measuring three dimensional shape and three dimensional shape measurement program
JP2016019194A (en) Image processing apparatus, image processing method, and image projection device
CN112729327B (en) Navigation method, navigation device, computer equipment and storage medium
CN112242009B (en) Display effect fusion method, system, storage medium and main control unit
CN110969706B (en) Augmented reality device, image processing method, system and storage medium thereof
JP2010145389A (en) Method of correcting three-dimensional erroneous array of attitude angle sensor by using single image
Makibuchi et al. Vision-based robust calibration for optical see-through head-mounted displays
CN114018291A (en) A method and device for calibrating inertial measurement unit parameters
JP6061334B2 (en) AR system using optical see-through HMD
US20230157539A1 (en) Computer-implemented method for determining a position of a center of rotation of an eye using a mobile device, mobile device and computer program
CN112614231B (en) Information display method and information display system
CN112284381A (en) Visual inertia real-time initialization alignment method and system
US11436756B2 (en) Calibrating a machine vision camera
JP2018101211A (en) On-vehicle device
CN114794667B (en) Tool calibration method, system, device, electronic equipment and readable storage medium
JP2020181059A (en) Imaging device, method for controlling the same, attitude angle calculation device, program, and storage medium
CN116295327A (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
CN107911687B (en) Robot teleoperation auxiliary system based on binocular stereo vision
WO2017057426A1 (en) Projection device, content determination device, projection method, and program
JP7169940B2 (en) Drawing superimposing device and program
JP6437811B2 (en) Display device and display method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant