
CN110411457B - Positioning method, system, terminal and storage medium based on stroke perception and vision fusion - Google Patents


Info

Publication number: CN110411457B (application CN201910795676.7A; earlier publication: CN110411457A)
Authority: CN (China)
Prior art keywords: camera, moment, odometer, frame, relationship
Legal status: Active (granted)
Inventors: 谢一, 张百超, 于璇
Original and current assignee: Zongmu Technology Shanghai Co Ltd
Other languages: Chinese (zh)


Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 — Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract


The present invention provides a positioning method, system, terminal and storage medium based on the fusion of travel sensing and vision. In the preparation stage, visual data and travel data are acquired. In the incremental-correspondence conversion stage, an initial position is determined, and the incremental transformation from a moment i to a later moment j is extracted from the travel data acquired by the travel sensing device. In the constraint stage, the incremental transformation extracted by the travel sensing device is used as a constraint condition on the visual sensing device. The travel sensing device senses the length and heading-angle information of the vehicle body's motion, and the length or trajectory information it senses serves as a trajectory constraint on the camera. This solves the scale-uncertainty problem of the monocular camera and also prevents the accumulation of visual positioning error and scale drift. Moreover, using only the length constraint from travel sensing works well for positioning in non-planar scenes; the method is not limited to planar scenes.

Description

Positioning method, system, terminal and storage medium based on travel sensing and vision fusion
Technical Field
The invention relates to the technical field of automotive electronics, and in particular to a positioning method, system, terminal and storage medium based on travel sensing and vision fusion.
Background
Vision-centered simultaneous localization and mapping (SLAM) is a technology that simultaneously computes the camera's position, orientation (pose) and the three-dimensional coordinates of the environmental point cloud through image matching. For a monocular camera, an unknown scale factor exists between the computed trajectory and the real trajectory length, so the positioning and mapping results lack practicality; in addition, the positioning result of a monocular camera easily suffers scale drift, making it inapplicable over a large range. The invention uses the vehicle-mounted wheel-speed pulse odometer as a scale-information constraint, so that positioning and map construction have real scale, improving usability over a large range.
Current visual SLAM technology mainly obtains pixel-level matches between frames through image matching, computes the poses at the corresponding moments and the three-dimensional coordinates of the matched points from the multi-view geometry equations to form a three-dimensional point cloud, then re-projects the point cloud onto the images, and obtains the optimal camera positions and point-cloud coordinates by minimizing the sum of squared projection errors over all matched image points. When computing the camera position, a monocular camera cannot know the absolute distance between two frame positions, so that distance can be set arbitrarily; this is the source of scale uncertainty. To address the problem, additional sensors are typically required to provide scale information. The most common solutions at present include: binocular (multi-) camera rigs, a camera fused with an inertial measurement unit (IMU), and a camera fused with GNSS+IMU.
The principle of binocular positioning and mapping is basically the same as monocular, except that the true scale can be obtained by using the distance between the two cameras' optical axes (the baseline) as a scale reference.
The technical approaches fused with an IMU mainly use the IMU's acceleration and angular-velocity information to recover the monocular scale. Fusion with GNSS/IMU can additionally provide a geographic coordinate system on top of the IMU, but is limited because GNSS is not available under all conditions.
The monocular disadvantage is that the camera's trajectory differs from the real trajectory by a scale factor that cannot be determined by the camera itself; in addition, a monocular camera easily suffers scale drift when used over a large range, making the front and back scales of the map inconsistent.
Binocular or multi-camera rigs can determine the scale of camera motion, but the hardware cost is higher, the data transmission and processing load is larger, and good calibration and synchronization of the cameras are also required.
The combination of an IMU and a camera can determine the scale of camera motion, but the hardware requirements are higher: the camera must be a global-shutter camera with accurate hardware time synchronization and a rigid spatial connection to the IMU; in addition, the motion must have sufficient acceleration and angular velocity in three dimensions. For automobiles these conditions are hard to satisfy, so the approach is not practical.
The fusion of a camera with GNSS or GNSS+IMU is unavailable indoors or among urban building clusters, so it is not an all-weather solution, and it faces the same problems as combining a camera with an IMU.
Currently, related patents and papers fuse a vehicle-body odometer with vision. For example, US patent US20050182518A1, "Robust sensor fusion for mapping and localization in a simultaneous localization and mapping (SLAM) system", discloses a system for SLAM-based robotic sensor fusion: a method and apparatus that allows measurements from multiple sensors to be combined or fused in a robust manner. For example, the sensors may be those used by a mobile device (e.g., a robot) for positioning and/or mapping, and the measurements may be fused to estimate a quantity such as the pose of the robot. The method ensures robust data integrity through multi-sensor fusion, focuses on mutual confirmation of state integrity between the fused sensors, is realized by a probability function based on particle filtering, and its actual scene is also a plane.
The 2013 paper Heng, L., Li, B., & Pollefeys, M., "CamOdoCal: Automatic intrinsic and extrinsic calibration of a rig with multiple generic cameras and odometry", Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013, pp. 1793-1800, doi 10.1109/IROS.2013.6696592, discloses a method for automatic intrinsic and extrinsic calibration of a rig with multiple generic cameras and odometry.
The 2015 paper Mur-Artal, R., Montiel, J. M. M., & Tardós, J. D., "ORB-SLAM: A Versatile and Accurate Monocular SLAM System", IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147-1163, 2015, discloses an accurate and versatile monocular SLAM system.
The 2017 paper Li, D., Eckenhoff, K., Wu, K., Wang, Y., Xiong, R., & Huang, G., "Gyro-aided camera-odometer online calibration and localization", American Control Conference (ACC), 2017, pp. 3579-3586, doi 10.23919/ACC.2017.7963501, discloses an online calibration and localization method based on a gyro-aided camera-odometer.
The 2017 paper He, Y., Guo, Y., Ye, A., & Yuan, K., "Camera-odometer calibration and fusion using graph based optimization", IEEE International Conference on Robotics and Biomimetics (ROBIO), 2017, pp. 1624-1629, doi 10.1109/ROBIO.2017.8324650, discloses a method of camera-odometer calibration and fusion using graph-based optimization.
The 2018 paper Zheng, F., & Liu, Y.-H., "SE(2)-Constrained Visual Inertial Fusion for Ground Vehicles", IEEE Sensors Journal, 2018, doi 10.1109/JSEN.2018.2873055, discloses a constrained visual-inertial fusion method for ground vehicles.
The above works are the prior art closest to this disclosure; they likewise use pose constraints provided by a wheel odometer to constrain and optimize the camera pose. However, all of those methods require the odometer to operate on a plane, and their target platform is a robot encoder odometer of higher precision. The present method is based on the automobile's built-in odometer, whose precision is lower than a robot encoder's, so the schemes in [4-6] have low practicality here. Most importantly, the scheme based on the variant in formula (8) remains usable for an automobile that is climbing, which breaks through the constraint framework of those papers.
Disclosure of Invention
In order to solve the above and other potential technical problems, the invention provides a positioning method, system, terminal and storage medium based on travel sensing and vision fusion. A travel sensing device senses the length and heading-angle information of the vehicle body's motion, and the length or trajectory information it senses is used as the trajectory constraint of the camera; this solves the scale-uncertainty problem of the monocular camera and also prevents the accumulation of visual positioning error and scale drift. In addition, using only the length constraint from travel sensing works well for positioning in non-planar scenes, not being limited to planar scenes.
The positioning method based on travel sensing and vision fusion comprises the following steps:
a preparation stage, in which visual data and travel data are acquired;
incremental-correspondence conversion, in which an initial position is determined and the incremental transformation from the earlier moment i to the later moment j is extracted from the travel data acquired by the travel sensing device;
constraint, in which the incremental transformation extracted by the travel sensing device is used as the constraint condition to constrain the visual sensing device.
Further, the travel sensing device is a non-visual sensing device used to accurately acquire the distance traveled by the vehicle; it may be, for example, an encoder, an odometer, or a differential odometer.
Further, the system comprises a weight adjustment module, which dynamically adjusts the weights of the cost function for visual pose optimization according to the relative credibility of the travel sensing device and/or of the visual sensing device.
In the preparation stage, the relative rotation and translation R_dc and t_dc between the camera and the center of the vehicle's rear axle are first calibrated according to the automatic intrinsic and extrinsic calibration method with multiple generic cameras and odometry cited in the background, and the coordinate origin of the odometer is chosen as the projection of the rear-axle center onto the ground.
Here R_dc takes its initial letter from Rotation; the subscript d denotes the odometry and the subscript c the camera. R_dc represents the rotation from the camera to the odometer and is a 3x3 orthogonal matrix. Likewise, t_dc takes its initial letter from Translation, with the same subscripts; t_dc is a three-dimensional vector representing the displacement from the camera to the odometer.
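As a concrete illustration of these extrinsics, the sketch below (plain Python; the calibration values and the convention x_d = R_dc·x_c + t_dc are assumptions for illustration, not values from the patent) maps a point from the camera frame to the odometer frame and back:

```python
import math

def matvec(R, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(R[r][c] * v[c] for c in range(3)) for r in range(3)]

def transpose(R):
    return [[R[c][r] for c in range(3)] for r in range(3)]

# Hypothetical extrinsics: camera yawed 90 degrees relative to the odometer
# frame and mounted 1.5 m ahead of / 1.2 m above the rear-axle projection.
a = math.pi / 2
R_dc = [[math.cos(a), -math.sin(a), 0.0],
        [math.sin(a),  math.cos(a), 0.0],
        [0.0,          0.0,         1.0]]
t_dc = [1.5, 0.0, 1.2]  # camera origin expressed in the odometer frame

def cam_to_odo(p_c):
    """x_d = R_dc * x_c + t_dc"""
    q = matvec(R_dc, p_c)
    return [q[k] + t_dc[k] for k in range(3)]

def odo_to_cam(p_d):
    """Inverse transform: x_c = R_dc^T * (x_d - t_dc)."""
    return matvec(transpose(R_dc), [p_d[k] - t_dc[k] for k in range(3)])

p_c = [2.0, 0.5, 0.0]      # a point seen in the camera frame
p_d = cam_to_odo(p_c)      # the same point in the odometer frame
p_back = odo_to_cam(p_d)   # round trip recovers the original point
```

Because R_dc is orthogonal, the inverse transform only needs its transpose, which is why R_dc^T appears throughout the constraint formulas.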
Further, in the incremental-correspondence conversion, an initial position is determined, and the incremental transformation from the earlier moment i to the later moment j is extracted from the travel data acquired by the travel sensing device:
When the vehicle body moves on a plane, let i denote the data acquired by the vehicle camera at the i-th moment and j the data acquired at the j-th moment. The rotation-translation pairs R_ci, T_ci and R_cj, T_cj of any two frames i and j of the vehicle camera satisfy the following constraint relations with the heading-angle change Y_ij and the displacement d_ij computed from the odometry:

d_ij = || R_dc R_ci^T (T_cj - T_ci) + (I - R_dc R_ci^T R_cj R_dc^T) t_dc ||    (1)

R_z(Y_ij) = R_dc R_ci^T R_cj R_dc^T    (2)

where R_z(Y) denotes the rotation about the vertical axis by the angle Y.
In formula (1): d_ij is the odometer displacement between the two moments; R_dc is the rotation from the camera to the odometer and R_dc^T its matrix transpose; R_cj is the camera rotation at the j-th moment and R_ci^T the matrix transpose of the camera rotation at the i-th moment; t_dc is the translation vector from the camera to the odometer; and T_cj and T_ci are the camera translations at the j-th and i-th moments.
In formula (2): R_dc R_ci^T R_cj R_dc^T expresses the relative camera rotation from the i-th to the j-th moment in the odometer frame, and Y_ij is the change of heading angle from the i-th moment to the j-th moment.
Thus the transformation of the extracted increment in the travel data, namely between the moment-i pose R_ci, T_ci and the moment-j pose R_cj, T_cj, is obtained.
If the vehicle body moves not on a plane but on a sloping road, the formulas above are slightly biased, but the magnitude of the bias is negligible, so they can be regarded as valid at all times.
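The constraint relations between camera poses and odometer increments can be checked numerically. The sketch below constructs a synthetic planar motion of the odometer, derives the corresponding camera poses through assumed extrinsics under a camera-to-world pose convention, and then recovers the heading change Y_ij and displacement d_ij from the camera poses alone; all numeric values, the pose convention, and the extrinsics are illustrative assumptions.

```python
import math

def mat_mul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(3)) for c in range(3)] for r in range(3)]

def mat_vec(A, v):
    return [sum(A[r][k] * v[k] for k in range(3)) for r in range(3)]

def mat_T(A):
    return [[A[c][r] for c in range(3)] for r in range(3)]

def Rz(y):
    """Rotation about the vertical axis by heading angle y."""
    return [[math.cos(y), -math.sin(y), 0.0],
            [math.sin(y),  math.cos(y), 0.0],
            [0.0,          0.0,         1.0]]

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

# Assumed camera->odometer extrinsics (hypothetical calibration values).
R_dc = Rz(math.pi / 2)
t_dc = [1.5, 0.0, 1.2]

# Synthetic ground truth: between moments i and j the odometer frame advances
# 2.0 m forward, 0.4 m laterally, and turns by 0.3 rad.
R_di, T_di = I3, [0.0, 0.0, 0.0]
R_dj, T_dj = Rz(0.3), [2.0, 0.4, 0.0]

def cam_pose(R_d, T_d):
    """Camera pose implied by an odometer pose (camera-to-world convention)."""
    R_c = mat_mul(R_d, R_dc)
    off = mat_vec(R_d, t_dc)
    return R_c, [T_d[k] + off[k] for k in range(3)]

R_ci, T_ci = cam_pose(R_di, T_di)
R_cj, T_cj = cam_pose(R_dj, T_dj)

# Heading change recovered from the camera rotations (cf. formula (2)).
Rel = mat_mul(mat_mul(R_dc, mat_T(R_ci)), mat_mul(R_cj, mat_T(R_dc)))
Y_ij = math.atan2(Rel[1][0], Rel[0][0])

# Odometer displacement recovered from the camera poses (cf. formula (1)).
dT = [T_cj[k] - T_ci[k] for k in range(3)]
term1 = mat_vec(mat_mul(R_dc, mat_T(R_ci)), dT)
term2 = mat_vec([[I3[r][c] - Rel[r][c] for c in range(3)] for r in range(3)], t_dc)
d_ij = math.sqrt(sum((term1[k] + term2[k]) ** 2 for k in range(3)))
# Y_ij recovers 0.3 rad and d_ij recovers sqrt(2.0**2 + 0.4**2).
```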
The positioning method based on travel sensing and vision fusion comprises the following steps:
S01: visual initialization. After the vehicle starts, two frames of the video (the 1st and 2nd frames) are selected for initialization; there should be some motion between the two frames and substantial overlap of their fields of view. Feature points are extracted from each frame to complete the matching of the two images. The first captured frame is taken as the reference, and the incremental correspondence (R_c1, T_c1) of the second frame relative to the first comprises a rotation and a translation. The rotation is initialized as a 3x3 identity matrix; the rotation increment of the second frame is obtained by multiplying the initial value, and the translation increment of the second frame by vector addition to the initial value. The incremental correspondence (R_c2, T_c2) of the third frame is computed from the multi-view geometry equations through pixel matching (e.g., by fundamental-matrix decomposition); the length of T_c1 is set to 1. This process can be implemented by any mainstream visual SLAM system; see the accurate and versatile monocular SLAM system cited in the background.
S02: wheel speed pulse encoder readings between 1 and 2 frames are obtained. Assuming the two-time pulse readings differences Δl 12 and Δr 12 for the rear left and right wheels, the tire radius R, diameter d, for the wheel encoder, there is a pulse count ppr (pulse per round) per revolution of the tire. The displacement d ij and heading angle Y 12 between the two frames are calculated from the mileage differences of the respective wheels according to the Ackerman principle.Is the length corresponding to one pulse reading, then the first frame heading angle: /(I)First frame Displacement
Thus, the heading angle Y ij and the displacement d ij to the arbitrary i-th time and the arbitrary j-th time are as follows:
For Δl ij and Δr ij, the tire radius R, for a wheel encoder, the tire has a pulse count ppr (pulse per round) per revolution. The displacement d ij and heading angle Y 12 between the two frames are calculated from the mileage differences of the respective wheels according to the Ackerman principle. Is the length corresponding to one pulse reading, then:
Wherein DeltaL ij represents the increment from the ith time to the jth time of the rear left wheel, deltaR ij represents the increment from the ith time to the jth time of the rear right wheel, d ij represents the displacement increment between two frames calculated according to the lobida rule, Is the length corresponding to one pulse reading, so S (Δr ij+ΔLij)/2 is the average of the rear left and right wheel path, and Y ij represents the angle of heading angle.
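Formulas (3) and (4) transcribe directly into code; the tire radius, pulse count, and rear wheel track below are hypothetical vehicle parameters:

```python
import math

# Hypothetical vehicle parameters.
r = 0.32                     # tire radius [m]
ppr = 96                     # encoder pulses per tire revolution
b = 1.6                      # rear wheel track [m]
s = 2 * math.pi * r / ppr    # length corresponding to one pulse reading

def odo_increment(dL, dR):
    """Heading change (3) and displacement (4) from rear-wheel pulse increments."""
    Y = s * (dR - dL) / b     # heading-angle change [rad]
    d = s * (dR + dL) / 2.0   # displacement: average of the two wheel paths [m]
    return Y, d

# Driving straight: both wheels count the same pulses, so the heading is unchanged.
Y, d = odo_increment(100, 100)   # Y == 0.0, d == 100 * s
```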
In the initialization stage, the driving trajectory of the vehicle is required to be a straight line, which guarantees that the heading-angle change from the first frame to the second is Y_12 = 0 and that the motion length of the wheel odometer equals the motion length of the camera. T_c1, the translation of the camera's first frame, is then rescaled so that its length equals d_12 while its direction is unchanged, making the two scales consistent.
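The scale alignment amounts to rescaling the unit-length visual translation so that its length equals the odometer displacement d_12 while its direction is preserved; a minimal sketch with assumed values:

```python
import math

T_c1 = [0.8, 0.6, 0.0]   # initial visual translation, unit length by construction
d_12 = 1.75              # odometer displacement between frames 1 and 2 [m] (assumed)

norm = math.sqrt(sum(x * x for x in T_c1))
T_c1_metric = [x * d_12 / norm for x in T_c1]   # same direction, metric length d_12
```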
S03: for any i-th and j-th frames, compute the displacement d_ij and the heading-angle change Y_ij between the two frames according to formulas (3) and (4), and add them to the error function as prior constraints on the camera poses for optimization. The camera rotations R_ci, R_cj and translations T_ci, T_cj then need no scale adjustment and only need to be optimized.
Conventional visual SLAM optimizes the projection errors between the point cloud and the cameras. Suppose several three-dimensional map points P (the k-th denoted P_k) are observed by the i-th (j-th) frame, and a map point corresponds to the two-dimensional pixel (u_i, v_i) in the i-th frame. The cost function of visual SLAM optimization is:

cost_vis = Σ_i Σ_k [ (f_x · X_k^ci / Z_k^ci + c_x - u_i)^2 + (f_y · Y_k^ci / Z_k^ci + c_y - v_i)^2 ]    (6)

where f_x, f_y, c_x, c_y are the camera intrinsics, calibrated in advance and not optimized; X_k^ci, Y_k^ci and Z_k^ci are the x, y and z coordinates of the k-th map point P_k expressed in the camera frame at the i-th moment (i.e., after transformation by R_ci, T_ci); and (u_i, v_i) is the two-dimensional pixel corresponding to P_k in the i-th frame image. The summation traverses all frames and the feature points of each frame. Iterative nonlinear optimization minimizes this loss, finally yielding the optimized camera poses R_ci, R_cj, T_ci, T_cj, where R_cj and R_ci are the camera rotations at the j-th and i-th moments and T_cj and T_ci the corresponding camera translations.
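The per-point term of the visual cost function can be written directly; the pinhole intrinsics and the map point below are hypothetical:

```python
def reproject_error(point_c, obs, fx, fy, cx, cy):
    """Squared reprojection error of one map point given in camera coordinates."""
    X, Y, Z = point_c
    u, v = obs
    du = fx * X / Z + cx - u
    dv = fy * Y / Z + cy - v
    return du * du + dv * dv

# Hypothetical pinhole intrinsics and one map point 4 m in front of the camera.
fx, fy, cx, cy = 500.0, 500.0, 320.0, 240.0
p = (0.2, -0.1, 4.0)
obs = (fx * 0.2 / 4.0 + cx, fy * -0.1 / 4.0 + cy)      # a perfect observation
err = reproject_error(p, obs, fx, fy, cx, cy)          # 0 for a perfect match

obs_bad = (obs[0] + 3.0, obs[1] + 4.0)                 # observation off by (3, 4) px
err_bad = reproject_error(p, obs_bad, fx, fy, cx, cy)  # 3**2 + 4**2 = 25
```

The full cost sums this term over all frames and all feature points of each frame.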
S04: based on the formula (6), according to the formulas (1) and (2), a cost function is added in visual pose optimization:
wherein Y ij is calculated according to formula (3), d ij is calculated according to the right of formula (4), and represents rotation and displacement of the wheel speed odometer are converted into camera displacement, and log operation is to calculate the Rodrigas angle of the rotation matrix, namely the rotation angle. σ 1 and σ 2 are corresponding weight factors, which are generally adjusted according to actual data conditions.
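A sketch of the added cost term for one frame pair, assuming the relative camera motion has already been mapped into the odometer frame as a rotation R_rel_odo and a translation length t_len_odo; the Rodrigues angle is taken from the matrix trace, and the numeric values and weight factors are illustrative assumptions:

```python
import math

def mat_mul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(3)) for c in range(3)] for r in range(3)]

def mat_T(A):
    return [[A[c][r] for c in range(3)] for r in range(3)]

def Rz(y):
    return [[math.cos(y), -math.sin(y), 0.0],
            [math.sin(y),  math.cos(y), 0.0],
            [0.0,          0.0,         1.0]]

def rot_angle(R):
    """Rodrigues angle of a rotation matrix: theta = arccos((trace(R) - 1) / 2)."""
    tr = R[0][0] + R[1][1] + R[2][2]
    return math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0)))

def odo_cost(Y_ij, d_ij, R_rel_odo, t_len_odo, sigma1=1.0, sigma2=1.0):
    """Penalize disagreement between the odometer increments (Y_ij, d_ij) and the
    relative camera motion already expressed in the odometer frame."""
    rot_res = rot_angle(mat_mul(mat_T(Rz(Y_ij)), R_rel_odo))
    len_res = t_len_odo - d_ij
    return sigma1 * rot_res ** 2 + sigma2 * len_res ** 2

# If vision and odometry agree exactly, the cost vanishes.
c0 = odo_cost(0.3, 2.0, Rz(0.3), 2.0)
# A 0.1 rad rotation disagreement contributes sigma1 * 0.1**2.
c1 = odo_cost(0.3, 2.0, Rz(0.4), 2.0)
```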
Further, the cost term (7) added to the visual pose optimization in step S04 has a weakened version:

cost_odo' = σ_2 · ( || T_cj - T_ci || - d_ij )^2    (8)

i.e., only the length constraint between the two moments is added. This relation is a corollary of formula (1) and can also replace formula (7); it still improves positioning accuracy and recovers the scale.
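The weakened version reduces to a single length residual between the camera displacement and the odometer displacement; because a vertical displacement component still contributes to the length, this form remains meaningful on slopes. A minimal sketch with assumed values:

```python
import math

def weak_odo_cost(T_ci, T_cj, d_ij, sigma2=1.0):
    """Weakened constraint (8): tie only the length of the camera displacement
    between moments i and j to the odometer displacement d_ij."""
    length = math.sqrt(sum((T_cj[k] - T_ci[k]) ** 2 for k in range(3)))
    return sigma2 * (length - d_ij) ** 2

# Camera displacement of length 5 vs. odometer displacement 5 -> zero cost,
# even though the motion has a vertical (slope) component.
c = weak_odo_cost([0.0, 0.0, 0.0], [3.0, 0.0, 4.0], 5.0)
```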
The positioning system based on the fusion of the vehicle wheel-speed odometer and vision comprises:
a visual data acquisition module, which acquires camera data and labels each datum with the timestamp of its acquisition moment;
an odometer data acquisition module, which comprises a heading-angle Y_ij acquisition module and an odometer displacement acquisition module;
the heading-angle Y_ij acquisition module obtains the wheel-speed pulse-encoder reading of each wheel, the tire radius r, and the pulse count ppr per tire revolution; for the two wheels on the same drive axle, the heading angle Y_ij is obtained from the difference between the two wheels' pulse-length increments divided by the wheel track, as in formula (3);
the odometer displacement acquisition module obtains the wheel-speed pulse-encoder reading of each wheel, the tire radius r, and the pulse count ppr per tire revolution; for the two wheels on the same drive axle, the odometer displacement is obtained as the average of the two wheels' pulse-length increments, as in formula (4);
a constraint module, which comprises a first constraint module and a second constraint module;
the first constraint module constrains the camera motion increment with the motion increment generated from the vehicle odometry data over the same interval, from moment i to moment j, i.e., it constrains the vehicle translation vector;
the second constraint module obtains the heading-angle deflection of the vehicle body from the reading difference of the wheel-speed odometer as a constraint on the rotation increment, i.e., it constrains the vehicle rotation vector.
Further, the system also comprises a cost-function optimization module and influence factors for the cost-function constraint terms, where an influence factor can be used to assist in judging the relative credibility of the corresponding cost term.
As described above, the present invention has the following advantageous effects:
1) The intrinsic wheel speed pulse odometer of the vehicle body is adopted, the length and course angle information of the motion of the vehicle body are calculated according to the reading of the odometer, the length or the track information of the odometer is used as the track constraint of the camera, the problem of uncertain scale of the monocular camera is solved, and the accumulation of visual positioning errors and scale drift can be prevented.
2) The method can play a good role in positioning the non-planar scene, and is not limited to the planar scene.
3) The visual track is consistent with the real scale and can be used for navigation and positioning reference; in a larger driving range, the positioning and mapping precision is improved.
4) The inherent wheel speed pulse odometer of the vehicle body is adopted, so that the hardware cost is not required to be increased additionally; there is little additional computational effort compared to IMUs or binocular.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 shows a flow chart of the present invention.
Fig. 2 shows a flow chart of another embodiment of the present invention.
Fig. 3 shows a flow chart of the camera and mileage data according to the present invention.
Fig. 4 shows a camera and odometer flow chart in another embodiment of the invention.
Fig. 5 is a schematic diagram showing the positioning of extracted feature points of the camera according to the present invention.
Fig. 6 is a schematic diagram of another embodiment of the present invention for locating extracted feature points of a camera.
Fig. 7 is a schematic diagram of another embodiment of the present invention for locating extracted feature points of a camera.
Fig. 8 is a schematic diagram of another embodiment of the present invention for locating extracted feature points of a camera.
Fig. 9 is a schematic diagram of another embodiment of the present invention for locating extracted feature points of a camera.
FIG. 10 is a schematic diagram showing the return to actual position after the camera extracts the bias of the feature point positioning and is constrained by the odometer.
FIG. 11 is a schematic diagram showing the return to actual position after the camera extracts the offset of the feature point location at another moment and is constrained by the odometer.
Detailed Description
Other advantages and effects of the present invention will become readily apparent to those skilled in the art from the disclosure in this specification, which describes embodiments of the invention by way of specific examples. The invention may also be practiced or applied through other, different embodiments, and the details in this specification may be modified or varied based on different viewpoints and applications without departing from the spirit of the invention. It should be noted that, in the absence of conflict, the following embodiments and the features in them may be combined with one another.
It should be understood that the structures, proportions, sizes, etc. shown in the drawings are used only to accompany the disclosure of the specification for the understanding of those skilled in the art, and are not intended to limit the conditions under which the invention can be implemented; any structural modification, change of proportion, or adjustment of size that does not affect the effects and objectives achievable by the invention still falls within the scope of the disclosed technical content. Terms such as "upper", "lower", "left", "right", "middle", and "a/an" used in this specification are for clarity of description only and are not intended to limit the implementable scope of the invention; changes or adjustments of relative relationships, without substantive alteration of the technical content, are likewise regarded as within the implementable scope of the invention.
With reference to figures 1 to 11 of the drawings,
The positioning method based on travel sensing and vision fusion comprises the following steps:
a preparation stage, in which visual data and travel data are acquired;
incremental-correspondence conversion, in which an initial position is determined and the incremental transformation from the earlier moment i to the later moment j is extracted from the travel data acquired by the travel sensing device;
constraint, in which the incremental transformation extracted by the travel sensing device is used as the constraint condition to constrain the visual sensing device.
Further, the travel sensing device is a non-visual sensing device used to accurately acquire the distance traveled by the vehicle; it may be, for example, an encoder, an odometer, or a differential odometer.
Further, the system comprises a weight adjustment module, which dynamically adjusts the weights of the cost function for visual pose optimization according to the relative credibility of the travel sensing device and/or of the visual sensing device.
In the preparation stage, the relative rotation and translation R_dc and t_dc between the camera and the center of the vehicle's rear axle are first calibrated according to the automatic intrinsic and extrinsic calibration method with multiple generic cameras and odometry cited in the background, and the coordinate origin of the odometer is chosen as the projection of the rear-axle center onto the ground.
Here R_dc takes its initial letter from Rotation; the subscript d denotes the odometry and the subscript c the camera. R_dc represents the rotation from the camera to the odometer and is a 3x3 orthogonal matrix. Likewise, t_dc takes its initial letter from Translation, with the same subscripts; t_dc is a three-dimensional vector representing the displacement from the camera to the odometer.
When the vehicle body moves on the plane, i represents the vehicle body camera acquisition data at the ith moment, and j represents the vehicle body camera acquisition data at the jth moment. The rotational translation R ci,Tci、Rcj,Tcj of any two frames i and j of the body camera has the following constraint relation with the heading angle Y ij and the displacement d ij calculated from mileage:
In formula (1): d ij denotes the displacement between the two moments expressed through the camera poses and the camera-odometer extrinsics; R dc is the rotation from the camera to the odometer and R dc^T its matrix transpose; R cj denotes the camera rotation at the j-th moment and R ci^T the matrix transpose of the camera rotation at the i-th moment; t dc denotes the translation vector from the camera to the odometer; T cj denotes the camera translation at the j-th moment and T ci the camera translation at the i-th moment.
In formula (2): R cj^T denotes the matrix transpose of the camera rotation at the j-th moment, R ci the camera rotation at the i-th moment, R dc^T the matrix transpose of the camera-to-odometer rotation, R dc the camera-to-odometer rotation, and Y ij the change of the heading angle from the i-th moment to the j-th moment.
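The images of formulas (1) and (2) are not reproduced in this text, so the constraint can only be paraphrased. The sketch below encodes one plausible reading: the camera poses (R c, T c) are world-to-camera transforms, the extrinsics (R dc, t dc) map camera coordinates to odometer coordinates, and the odometer's displacement and planar heading change are recovered from the two composed poses. The exact form of (1) and (2) may differ:

```python
import numpy as np

def odometry_constraints(R_dc, t_dc, R_ci, T_ci, R_cj, T_cj):
    """Predict the odometer displacement d_ij and heading change Y_ij
    from two world-to-camera poses and the camera-to-odometer extrinsics."""
    def odo_pose(R_c, T_c):
        # compose with the extrinsics: x_d = R_dc (R_c x_w + T_c) + t_dc
        return R_dc @ R_c, R_dc @ T_c + t_dc

    R_di, T_di = odo_pose(R_ci, T_ci)
    R_dj, T_dj = odo_pose(R_cj, T_cj)
    p_i = -R_di.T @ T_di              # odometer position at moment i
    p_j = -R_dj.T @ T_dj              # odometer position at moment j
    d_ij = np.linalg.norm(p_j - p_i)  # displacement between the two moments
    R_rel = R_dj @ R_di.T             # relative rotation, moment i -> j
    Y_ij = np.arctan2(R_rel[1, 0], R_rel[0, 0])  # yaw, valid for planar motion
    return d_ij, Y_ij
```

With identity extrinsics and a pure one-unit translation between the two camera poses, the predicted displacement is 1 and the heading change is 0.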
If the vehicle body moves not on a plane but on a sloping road, the above formulas acquire a small bias, but its magnitude is negligible, so the formulas can be regarded as holding at all times.
The positioning method based on the stroke sensing and vision fusion comprises the following steps:
S01: visual initialization. After the vehicle starts, two frames of the video (the 1st and 2nd frames) are selected for initialization; there should be some relative motion between the two frames and a large overlap of their fields of view. Feature points are extracted from each frame to complete the matching of the two images. The first captured frame serves as the reference, and the incremental correspondence (R c1, T c1) of the second captured frame relative to the first comprises a rotation relationship and a translation relationship. The rotation is initialized as a 3x3 identity matrix; the rotation increment of the second frame is obtained by multiplying onto this initial value, and the translation increment by vector addition to the initial value. The incremental correspondence (R c2, T c2) of the third frame is computed from the multi-view geometry equations via pixel matching (e.g., by decomposing the fundamental matrix); the length of T c1 is set to 1. This process can be carried out by any mainstream visual SLAM system; see the accurate multi-functional monocular SLAM system mentioned in the background art.
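The initialization above fixes the first pose and chains each per-frame increment onto the previous pose. A minimal sketch of that composition follows (treating each pose as a world-to-camera pair is an assumption; the text does not fix the convention):

```python
import numpy as np

def accumulate_poses(increments):
    """Chain per-frame increments (R_k, T_k), where x_k = R_k x_{k-1} + T_k,
    into absolute poses, starting from the identity at the first frame."""
    R, T = np.eye(3), np.zeros(3)
    poses = [(R.copy(), T.copy())]
    for R_k, T_k in increments:
        R = R_k @ R            # rotations compose by left-multiplication
        T = R_k @ T + T_k      # translations compose through the new rotation
        poses.append((R.copy(), T.copy()))
    return poses
```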
S02: wheel speed pulse encoder readings between 1 and 2 frames are obtained. Assuming the two-time pulse readings differences Δl 12 and Δr 12 for the rear left and right wheels, the tire radius R, diameter d, for the wheel encoder, there is a pulse count ppr (pulse per round) per revolution of the tire. The displacement d ij and heading angle Y 12 between the two frames are calculated from the mileage differences of the respective wheels according to the Ackerman principle.Is the length corresponding to one pulse reading, then the first frame heading angle: /(I)First frame Displacement
Thus, the heading angle Y ij and the displacement d ij between an arbitrary i-th moment and an arbitrary j-th moment are obtained as follows. For pulse differences ΔL ij and ΔR ij, tire radius r, and a pulse count ppr (pulses per round) per tire revolution of the wheel encoder, the displacement d ij and the heading angle Y ij between the two frames are calculated from the mileage difference of the wheels according to the Ackermann principle. With S = 2πr/ppr the length corresponding to one pulse reading:
Y ij = S(ΔR ij − ΔL ij)/d (3)
d ij = S(ΔR ij + ΔL ij)/2 (4)
wherein ΔL ij denotes the increment of the rear left wheel from the i-th to the j-th moment, ΔR ij the increment of the rear right wheel over the same interval, and d ij the displacement increment between the two frames; since S is the length corresponding to one pulse reading, S(ΔR ij + ΔL ij)/2 is the average of the rear left and rear right wheel paths, Y ij denotes the heading-angle change, and d is the spacing between the two rear wheels.
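The differential odometry just described (displacement from the mean wheel path, heading change from the path difference) can be sketched directly. The per-pulse length S = 2πr/ppr and the interpretation of d as the spacing between the two rear wheels are assumptions consistent with the surrounding text:

```python
import numpy as np

def wheel_odometry(delta_L, delta_R, r, ppr, d):
    """Displacement d_ij and heading change Y_ij from rear-wheel encoder
    pulse differences, per the Ackermann/differential-drive model.
    delta_L, delta_R: pulse differences of the rear left/right wheels;
    r: tire radius; ppr: pulses per tire revolution; d: rear wheel spacing."""
    S = 2.0 * np.pi * r / ppr           # ground length per encoder pulse
    d_ij = S * (delta_R + delta_L) / 2  # mean path of the two wheels
    Y_ij = S * (delta_R - delta_L) / d  # path difference over the wheel spacing
    return d_ij, Y_ij
```

Driving straight (delta_L equal to delta_R) gives Y_ij = 0, which is exactly the condition exploited during initialization.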
In the initialization stage, the driving trajectory of the vehicle is required to be a straight line, which guarantees that the heading-angle change from the first to the second frame is Y 12 = 0 and that the motion length of the wheel odometer equals the motion length of the camera. T c1, i.e., the translation of the first camera frame, is then rescaled so that its length equals d 12, the displacement increment between the first and second frames, while its direction is unchanged; this makes the two scales consistent.
S03: for any i-th and j-th frames, the displacement d ij and the heading-angle change Y ij between the two frames are calculated according to formulas (3) and (4) and added to the error function as prior constraints on the camera pose for optimization. The camera rotations are R ci, R cj and the camera translations T ci, T cj; since the scale has been fixed, T ci, T cj need no further scale adjustment and only need to be optimized.
Conventional visual SLAM optimizes the projection errors between the point cloud and the cameras. Suppose there are several three-dimensional map points P (the k-th denoted p k) visible in both the i-th and j-th frames, and a map point projects to the two-dimensional pixel (u i, v i) in the i-th frame; then the cost function of visual SLAM optimization is:
wherein f x, f y, c x, c y are the camera intrinsics, calibrated in advance and requiring no optimization; the summation traverses all frames and their feature points. Iterative optimization through a nonlinear optimization process minimizes the loss function cost and finally yields the optimized camera poses R ci, R cj, T ci, T cj. p k,x, p k,y, and p k,z denote the x-, y-, and z-coordinates of the k-th point p k, p k is the k-th point in the three-dimensional map, and (u i, v i) denotes the two-dimensional pixel corresponding to the i-th frame image.
R cj denotes the camera rotation at the j-th moment, R ci the camera rotation at the i-th moment, T ci the camera translation at the i-th moment, and T cj the camera translation at the j-th moment.
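The image of the cost function itself is not reproduced in this text; the sketch below is a plausible pinhole-model reading of it, summing squared pixel residuals over all frames and observed map points with the intrinsics f x, f y, c x, c y:

```python
import numpy as np

def reprojection_cost(frames, points, fx, fy, cx, cy):
    """Visual-SLAM reprojection cost: for every frame pose (R_c, T_c) and
    every observation (k, u, v) of map point p_k in that frame, project
    p_k through the pinhole model and accumulate the squared residual."""
    cost = 0.0
    for (R_c, T_c), observations in frames:
        for k, u, v in observations:
            x, y, z = R_c @ points[k] + T_c   # map point in camera coordinates
            u_hat = fx * x / z + cx           # projected pixel column
            v_hat = fy * y / z + cy           # projected pixel row
            cost += (u_hat - u) ** 2 + (v_hat - v) ** 2
    return cost
```

A point on the optical axis observed at the principal point contributes zero cost, as expected.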
S04: based on the formula (6), according to the formulas (1) and (2), a cost function is added in visual pose optimization:
wherein Y ij is calculated according to formula (3) and d ij according to formula (4); they represent the wheel-speed odometer's rotation and displacement converted into camera displacement. The log operation computes the Rodrigues angle of the rotation matrix, i.e., the rotation angle. σ 1 and σ 2 are the corresponding weight factors and are generally adjusted according to the actual data conditions.
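The image of formula (7) is likewise not reproduced here. The following is a simplified stand-in for the added odometry terms: it penalizes the mismatch between the camera-derived relative rotation angle and Y ij, and between the camera-derived displacement and d ij, with weights σ 1 and σ 2. For brevity it ignores the camera-odometer extrinsics R dc, t dc that the full formula would include:

```python
import numpy as np

def rotation_angle(R):
    """Rodrigues angle of a rotation matrix (the 'log' operation in the text)."""
    c = (np.trace(R) - 1.0) / 2.0
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def odometry_prior_cost(R_ci, T_ci, R_cj, T_cj, Y_ij, d_ij, sigma1, sigma2):
    """Penalize disagreement between camera motion and wheel odometry."""
    R_rel = R_cj @ R_ci.T                      # relative camera rotation
    rot_term = (rotation_angle(R_rel) - abs(Y_ij)) ** 2
    p_i = -R_ci.T @ T_ci                       # camera center at moment i
    p_j = -R_cj.T @ T_cj                       # camera center at moment j
    len_term = (np.linalg.norm(p_j - p_i) - d_ij) ** 2
    return sigma1 * rot_term + sigma2 * len_term
```

The weakened, length-only variant mentioned in the text corresponds to keeping only len_term.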
Further, formula (7), the cost function added to the visual pose optimization in step S04, has a weakened version:
in which only the length constraint between the two moments is added. This relationship is a corollary of formula (1); it can also replace formula (7) and still achieves the effects of improving positioning accuracy and recovering the scale.
A positioning system based on the fusion of the vehicle wheel-speed odometer and vision comprises:
a visual data acquisition module, used for acquiring camera data and assigning data labels according to the time stamps of the acquisition moments;
The odometer data acquisition module comprises a course angle Y ij acquisition module and an odometer displacement acquisition module,
the heading-angle Y ij acquisition module obtains the wheel-speed pulse encoder reading of each wheel, the tire radius r, and the pulse count ppr per tire revolution; for the two wheels on the same drive axle, the heading angle Y ij is obtained by taking the difference of the two wheels' pulse readings, multiplying by the length corresponding to one pulse, and dividing by the wheel spacing d;
the odometer displacement acquisition module obtains the wheel-speed pulse encoder reading of each wheel, the tire radius r, and the pulse count ppr per tire revolution; for the two wheels on the same drive axle, the odometer displacement is obtained by summing the two wheels' pulse readings, multiplying by the length corresponding to one pulse, and halving the result;
A constraint module comprising a first constraint module and a second constraint module,
the first constraint module constrains the camera motion increment using the motion increment generated from the vehicle-body mileage data over the same interval, from moment i to moment j; the first constraint module is the constraint that constrains the vehicle translation vector;
the second constraint module obtains the heading-angle deflection of the vehicle body from the reading difference of the wheel-speed odometer as the constraint on the rotation increment; the second constraint module is the constraint that constrains the vehicle rotation vector.
Further, the system also comprises a cost-function optimization module and a cost-function constraint module; an influence factor of the cost-function constraint module can be used to assist in judging the relative credibility of the cost function.
As a preferred embodiment, this embodiment further provides a terminal device capable of executing a program, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, or a rack-mounted, blade, tower, or cabinet server (including an independent server or a server cluster composed of multiple servers). The terminal device of this embodiment comprises at least, but is not limited to, a memory and a processor, which may be communicatively coupled to each other via a system bus. It should be noted that although a terminal device with a memory and a processor is described, not all of the illustrated components must be implemented; alternative positioning systems based on a wheel-speed odometer and visual fusion may implement more or fewer components.
As a preferred embodiment, the memory (i.e., readable storage medium) includes flash memory, hard disk, multimedia card, card memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, etc. In some embodiments, the memory may be an internal storage unit of a computer device, such as the hard disk or memory of the computer device. In other embodiments, the memory may also be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the computer device. Of course, the memory may also include both internal storage units and external storage devices of the computer device. In this embodiment, the memory is typically used to store the operating system and the various application software installed on the computer device, such as the program code of the positioning method based on the fusion of travel sensing and vision in this embodiment. In addition, the memory can be used to temporarily store various types of data that have been output or are to be output.
This embodiment also provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application store, etc., on which a computer program is stored that performs the corresponding function when executed by a processor. The computer-readable storage medium of this embodiment, when executed by the processor, implements the positioning method based on the fusion of travel sensing and vision.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Those skilled in the art may modify or vary the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications and variations accomplished by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of this invention.

Claims (11)

1. A positioning method based on the fusion of stroke sensing and vision, characterized by comprising the following steps:
the preparation stage, obtaining visual data and travel data, and setting an initial value;
incremental correspondence: determining an initial position, and extracting, at a later moment j, the incremental conversion relationship from the travel data acquired by the travel sensing device at an earlier moment i; when the vehicle body moves on a plane, i denotes the data acquired by the body camera at the i-th moment and j the data acquired at the j-th moment, and the rotation-translation pairs R ci, T ci and R cj, T cj of any two frames i and j of the body camera satisfy the following constraint relationships with the heading-angle change Y ij and the displacement d ij computed from the mileage:
in formula (1): d ij denotes the displacement between the two moments expressed through the camera poses and the camera-odometer extrinsics; R dc is the rotation from the camera to the odometer and R dc^T its matrix transpose; R cj denotes the camera rotation at the j-th moment and R ci^T the matrix transpose of the camera rotation at the i-th moment; t dc denotes the translation vector from the camera to the odometer; T cj denotes the camera translation at the j-th moment and T ci the camera translation at the i-th moment;
in formula (2): R cj^T denotes the matrix transpose of the camera rotation at the j-th moment, R ci the camera rotation at the i-th moment, R dc^T the matrix transpose of the camera-to-odometer rotation, R dc the camera-to-odometer rotation, and Y ij the change of the heading angle from the i-th moment to the j-th moment;
extracting the incremental conversion relationship from the travel data, i.e., obtaining the rotation-translation relationship R ci, T ci at moment i and the rotation-translation relationship R cj, T cj at moment j;
constraining: the visual perception device is constrained by taking the incremental conversion relationship extracted by the travel sensing device as the constraint condition.
2. The positioning method based on the fusion of stroke sensing and vision according to claim 1, wherein the travel sensing device is a non-visual sensing device used to accurately acquire the distance travelled by the vehicle; the non-visual sensing device may be an encoder, an odometer, or a differential odometer.
3. The positioning method based on the fusion of journey sensing and vision according to claim 2, further comprising a weight adjustment module that dynamically adjusts the weight of the cost function of visual-perception pose optimization according to the relative credibility of the journey sensing device and/or the relative credibility of the visual perception device.
4. The positioning method based on the fusion of stroke sensing and vision according to claim 1, wherein the preparation step of obtaining visual data and travel data and setting initial values specifically comprises:
after the vehicle starts, a first frame and a second frame of the video (or of an intercepted part of the video) are selected for initialization; there should be some relative motion between the two frames and a large overlap of their fields of view; feature points are extracted from each frame to complete the matching of the two images; the first captured frame serves as the reference, and the incremental correspondence (R c1, T c1) of the second captured frame relative to the first comprises a rotation relationship and a translation relationship.
5. The positioning method based on the fusion of stroke sensing and vision according to claim 4, wherein the rotation relationship is initialized as a 3x3 identity matrix; the rotation increment of the second captured frame is obtained by multiplying onto this initial value, and the translation increment by vector addition to the initial value; the incremental correspondence (R c2, T c2) of the third frame is computed from the multi-view geometry equations via pixel matching; the length of T c1 is set to 1.
6. The positioning method based on stroke sensing and vision fusion as claimed in claim 5, wherein said preparing step for setting up initial values for the stroke data comprises:
obtaining the wheel-speed encoder readings between the first frame and the second frame; let ΔL 12 and ΔR 12 be the pulse-reading differences of the rear left and rear right wheels between the two moments, r the tire radius, d the spacing of the two rear wheels, and ppr the pulse count per tire revolution of the wheel encoder; calculating the displacement d 12 and the heading angle Y 12 between the two frames from the mileage difference of the wheels according to the Ackermann principle; S = 2πr/ppr is the length corresponding to one pulse reading,
the first-frame heading angle: Y 12 = S(ΔR 12 − ΔL 12)/d;
the first-frame displacement: d 12 = S(ΔR 12 + ΔL 12)/2; i.e., the travel data is acquired and an initial value is established.
7. The positioning method based on the fusion of stroke sensing and vision according to claim 6, wherein, in the preparation step, the acquisition of the travel data specifically comprises:
generalizing the acquired travel data, by means of the set initial values, to the heading angle Y ij and the displacement d ij between an arbitrary i-th moment and an arbitrary j-th moment:
for tire radius r, the wheel encoder produces a pulse count ppr (pulses per round) per tire revolution; the displacement d ij and the heading angle Y ij between the two frames are calculated from the mileage difference of the wheels according to the Ackermann principle; with S = 2πr/ppr the length corresponding to one pulse reading: Y ij = S(ΔR ij − ΔL ij)/d (3); d ij = S(ΔR ij + ΔL ij)/2 (4);
wherein ΔL ij denotes the increment of the rear left wheel from the i-th moment to the j-th moment, ΔR ij the increment of the rear right wheel over the same interval, and d ij the displacement increment between the two frames; since S is the length corresponding to one pulse reading, S(ΔR ij + ΔL ij)/2 is the average of the rear left and rear right wheel paths, and Y ij denotes the heading-angle change;
when the travel data is acquired and the initial value is set, the driving trajectory of the vehicle is required to be a straight line, which guarantees that the heading-angle change from the first to the second frame is Y 12 = 0 and that the motion length of the wheel odometer equals that of the camera; T c1, i.e., the translation of the first camera frame, is rescaled so that its length equals d 12, the displacement increment between the first and second frames, while its direction is unchanged, thereby ensuring scale consistency.
8. The positioning method based on the fusion of stroke sensing and vision according to claim 7, wherein the constraining, taking the incremental conversion relationship extracted by the travel sensing device as the constraint condition to constrain the visual perception device, specifically comprises:
the constraint mode of the constraint module is specifically as follows: suppose there are several three-dimensional map points P (the k-th denoted p k) visible in both the i-th and j-th frames, and a map point projects to the two-dimensional pixel (u i, v i) in the i-th frame; then the cost function of visual SLAM optimization is:
adding a cost function, i.e., the constraint of the travel data acquisition module on the visual data acquisition module, to the visual pose optimization:
Constraining the visual data acquisition module to acquire data by using a formula (7) so as to achieve the effects of improving positioning accuracy and recovering scale;
wherein R cj denotes the camera rotation at the j-th moment, R ci the camera rotation at the i-th moment, T ci the camera translation at the i-th moment, and T cj the camera translation at the j-th moment; the log operation computes the Rodrigues angle of the rotation matrix; σ 1 and σ 2 are the corresponding weight factors, generally adjusted according to the actual data conditions;
wherein t dc is a three-dimensional vector representing the displacement vector from the camera to the odometer;
Wherein R dc represents the rotational relationship from the camera to the odometer;
wherein p k,x, p k,y, and p k,z denote the x-, y-, and z-coordinates of the k-th point p k; p k is the k-th point in the three-dimensional map, and (u i, v i) denotes the two-dimensional plane pixel corresponding to the i-th frame image;
wherein f x, f y, c x, c y are the camera intrinsics; iterative optimization through a nonlinear optimization process minimizes the loss function cost and finally yields the optimized camera poses R ci, R cj, T ci, T cj.
9. The positioning method based on stroke sensing and vision fusion according to claim 8, wherein the constraint mode of the constraint module is specifically:
suppose there are several three-dimensional map points P (the k-th denoted p k) visible in both the i-th and j-th frames, and a map point projects to the two-dimensional pixel (u i, v i) in the i-th frame; then the cost function of visual SLAM optimization is:
adding a cost function, i.e., the constraint of the travel data acquisition module on the visual data acquisition module, to the visual pose optimization:
Constraining the visual data acquisition module to acquire data by using a formula (8) so as to achieve the effects of improving positioning accuracy and recovering scale;
wherein R cj denotes the camera rotation at the j-th moment, R ci the camera rotation at the i-th moment, T ci the camera translation at the i-th moment, and T cj the camera translation at the j-th moment; the log operation computes the Rodrigues angle of the rotation matrix; σ 1 and σ 2 are the corresponding weight factors, generally adjusted according to the actual data conditions;
wherein t dc is a three-dimensional vector representing the displacement vector from the camera to the odometer;
Wherein R dc represents the rotational relationship from the camera to the odometer;
wherein p k,x, p k,y, and p k,z denote the x-, y-, and z-coordinates of the k-th point p k; p k is the k-th point in the three-dimensional map, and (u i, v i) denotes the two-dimensional plane pixel corresponding to the i-th frame image;
wherein f x, f y, c x, c y are the camera intrinsics; iterative optimization through a nonlinear optimization process minimizes the loss function cost and finally yields the optimized camera poses R ci, R cj, T ci, T cj.
10. A mobile terminal, characterized in that it is a vehicle-mounted terminal or a mobile-phone terminal that performs the positioning method based on the fusion of journey sensing and vision according to any one of claims 1 to 9.
11. A computer storage medium, characterized in that it stores a computer program written according to the positioning method based on stroke-sensing and vision fusion according to any one of claims 1 to 9.
CN201910795676.7A 2019-08-27 2019-08-27 Positioning method, system, terminal and storage medium based on stroke perception and vision fusion Active CN110411457B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910795676.7A CN110411457B (en) 2019-08-27 2019-08-27 Positioning method, system, terminal and storage medium based on stroke perception and vision fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910795676.7A CN110411457B (en) 2019-08-27 2019-08-27 Positioning method, system, terminal and storage medium based on stroke perception and vision fusion

Publications (2)

Publication Number Publication Date
CN110411457A CN110411457A (en) 2019-11-05
CN110411457B true CN110411457B (en) 2024-04-19

Family

ID=68368740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910795676.7A Active CN110411457B (en) 2019-08-27 2019-08-27 Positioning method, system, terminal and storage medium based on stroke perception and vision fusion

Country Status (1)

Country Link
CN (1) CN110411457B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113375656B (en) * 2020-03-09 2023-06-27 杭州海康威视数字技术股份有限公司 Positioning method and device
CN111595336B (en) * 2020-07-27 2020-10-27 北京云迹科技有限公司 Method and device for determining robot positioning information
CN112581533B (en) * 2020-12-16 2023-10-03 百度在线网络技术(北京)有限公司 Positioning method, positioning device, electronic equipment and storage medium
CN112710305B (en) * 2020-12-21 2022-12-06 北京百度网讯科技有限公司 Vehicle positioning method and device
CN113124880B (en) * 2021-05-18 2023-06-13 杭州迦智科技有限公司 Map building and positioning method and device based on two sensor data fusion
CN113516692B (en) * 2021-05-18 2024-07-19 上海汽车集团股份有限公司 SLAM method and device for multi-sensor fusion
CN113793381A (en) * 2021-07-27 2021-12-14 武汉中海庭数据技术有限公司 Monocular visual information and wheel speed information fusion positioning method and system
CN114018284B (en) * 2021-10-13 2024-01-23 上海师范大学 A vision-based wheel speed odometer correction method
CN117289702A (en) * 2023-10-25 2023-12-26 浙江极氪智能科技有限公司 Target vehicle position correction method, device, equipment and medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106840148A (en) * 2017-01-24 2017-06-13 东南大学 Wearable positioning and path guide method based on binocular camera under outdoor work environment
CN107941217A (en) * 2017-09-30 2018-04-20 杭州迦智科技有限公司 A kind of robot localization method, electronic equipment, storage medium, device
CN108253963A (en) * 2017-12-20 2018-07-06 广西师范大学 A kind of robot active disturbance rejection localization method and alignment system based on Multi-sensor Fusion
WO2018127329A1 (en) * 2017-01-03 2018-07-12 Connaught Electronics Ltd. Visual odometry
CN108489482A (en) * 2018-02-13 2018-09-04 视辰信息科技(上海)有限公司 The realization method and system of vision inertia odometer
CN108961337A (en) * 2018-06-15 2018-12-07 深圳地平线机器人科技有限公司 In-vehicle camera course angle scaling method and device, electronic equipment and vehicle
WO2019052567A1 (en) * 2017-09-18 2019-03-21 中车株洲电力机车研究所有限公司 Virtual turnout system and method for virtual rail vehicle
WO2019136613A1 (en) * 2018-01-09 2019-07-18 深圳市沃特沃德股份有限公司 Indoor locating method and device for robot
CN110095116A (en) * 2019-04-29 2019-08-06 桂林电子科技大学 A kind of localization method of vision positioning and inertial navigation combination based on LIFT

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9677897B2 (en) * 2013-11-13 2017-06-13 Elwha Llc Dead reckoning system for vehicles

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018127329A1 (en) * 2017-01-03 2018-07-12 Connaught Electronics Ltd. Visual odometry
CN106840148A (en) * 2017-01-24 2017-06-13 东南大学 Wearable positioning and path guide method based on binocular camera under outdoor work environment
WO2019052567A1 (en) * 2017-09-18 2019-03-21 中车株洲电力机车研究所有限公司 Virtual turnout system and method for virtual rail vehicle
CN107941217A (en) * 2017-09-30 2018-04-20 杭州迦智科技有限公司 A kind of robot localization method, electronic equipment, storage medium, device
CN108253963A (en) * 2017-12-20 2018-07-06 广西师范大学 A kind of robot active disturbance rejection localization method and alignment system based on Multi-sensor Fusion
WO2019136613A1 (en) * 2018-01-09 2019-07-18 深圳市沃特沃德股份有限公司 Indoor locating method and device for robot
CN108489482A (en) * 2018-02-13 2018-09-04 视辰信息科技(上海)有限公司 The realization method and system of vision inertia odometer
CN108961337A (en) * 2018-06-15 2018-12-07 深圳地平线机器人科技有限公司 In-vehicle camera course angle scaling method and device, electronic equipment and vehicle
CN110095116A (en) * 2019-04-29 2019-08-06 桂林电子科技大学 A kind of localization method of vision positioning and inertial navigation combination based on LIFT

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Stereo visual odometry algorithm based on rotation-translation decoupling in dynamic environments; Kang Yifei, Song Yongduan, Song Yu, Yan Deli; Robot (Issue 06); full text *
Real-time 3D scene reconstruction based on monocular SLAM; Wang Xiaorong, Bai Guozhen, Lang Jun; Agricultural Equipment & Vehicle Engineering; 2018-10-10 (Issue 10); full text *
UAV pose estimation based on the fusion of monocular vision and inertial navigation; Xiong Minjun, Lu Huimin, Xiong Dan, Xiao Junhao, Lyu Ming; Journal of Computer Applications; 2017-12-20 (Issue S2); full text *
Indoor mobile robot visual localization method based on line detection; Zhou Shaolei, Wu Xiuzhen, Qi Yahui, Gong Weisi; Journal of Huazhong University of Science and Technology (Natural Science Edition) (Issue 10); full text *

Also Published As

Publication number Publication date
CN110411457A (en) 2019-11-05

Similar Documents

Publication Publication Date Title
CN110411457B (en) Positioning method, system, terminal and storage medium based on stroke perception and vision fusion
US10788830B2 (en) Systems and methods for determining a vehicle position
CN110458885B (en) Positioning system and mobile terminal based on stroke perception and vision fusion
CN109991636B (en) Map construction method and system based on GPS, IMU and binocular vision
CN110859044B (en) Integrated sensor calibration in natural scenes
CN111862673B (en) Parking lot vehicle self-positioning and map construction method based on top view
CN110345937A (en) Appearance localization method and system are determined in a kind of navigation based on two dimensional code
CN112669354B (en) Multi-camera motion state estimation method based on incomplete constraint of vehicle
CN107180215A (en) Figure and high-precision locating method are built in parking lot based on warehouse compartment and Quick Response Code automatically
CN112136021B (en) System and method for constructing landmark-based high definition map
WO2018142900A1 (en) Information processing device, data management device, data management system, method, and program
CN110207714A (en) A kind of method, onboard system and the vehicle of determining vehicle pose
CN108845335A (en) Unmanned aerial vehicle ground target positioning method based on image and navigation information
WO2020133172A1 (en) Image processing method, apparatus, and computer readable storage medium
CN116184430B (en) A pose estimation algorithm integrating lidar, visible light camera, and inertial measurement unit
CN108613675B (en) Low-cost unmanned aerial vehicle mobile measurement method and system
CN114638897A (en) Multi-camera system initialization method, system and device based on non-overlapping views
CN112424568A (en) System and method for constructing high-definition map
CN113580134B (en) Visual positioning method, device, robot, storage medium and program product
CN117330052A (en) Positioning and mapping methods and systems based on the fusion of infrared vision, millimeter wave radar and IMU
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
Javed et al. PanoVILD: A challenging panoramic vision, inertial and LiDAR dataset for simultaneous localization and mapping
CN112862818B (en) Underground parking lot vehicle positioning method combining inertial sensor and multi-fisheye camera
CN117760419A (en) A visual-inertial SLAM method and system integrating QR code positioning
CN108322698B (en) System and method based on fusion of multiple cameras and inertial measurement unit

Legal Events

Date Code Title Description
PB01 Publication
CB03 Change of inventor or designer information

Inventor after: Xie Yi

Inventor after: Zhang Baichao

Inventor after: Yu Xuan

Inventor before: Song Yu

Inventor before: Xie Yi

Inventor before: Zhang Baichao

Inventor before: Yu Xuan

SE01 Entry into force of request for substantive examination
GR01 Patent grant