
CN110207714B - Method for determining vehicle pose, vehicle-mounted system and vehicle - Google Patents


Info

Publication number
CN110207714B
CN110207714B (application CN201910576519.7A)
Authority
CN
China
Prior art keywords
vehicle
pose
poses
information
monocular
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910576519.7A
Other languages
Chinese (zh)
Other versions
CN110207714A (en)
Inventor
柴文楠
刘中元
蒋少峰
李良
周建
潘力澜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xiaopeng Motors Technology Co Ltd
Original Assignee
Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority to CN201910576519.7A
Publication of CN110207714A
Application granted
Publication of CN110207714B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a method for determining a vehicle pose, a vehicle-mounted system, and a vehicle. The method comprises the following steps: acquiring first inertial navigation information and first automobile odometer information of a vehicle; determining a first current estimated pose of the vehicle according to the first inertial navigation information and the first automobile odometer information; acquiring external parameters of N monocular cameras of the vehicle, where N is an integer greater than 1; fusing inertial navigation odometry with the visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras; respectively performing coordinate transformation on the current poses of the N monocular cameras to obtain N transformed poses of the vehicle; and obtaining the fusion pose of the vehicle according to the N transformed poses.

Description

Method for determining vehicle pose, vehicle-mounted system and vehicle
Technical Field
The invention relates to the technical field of intelligent automobiles, in particular to a method for determining a vehicle pose, a vehicle-mounted system and a vehicle.
Background
In the field of autonomous driving, monocular visual simultaneous localization and mapping (SLAM) refers to an autonomous vehicle using a single vision sensor (e.g., a monocular camera) to build a map consistent with the real environment while simultaneously determining its own position within that map.
In existing schemes, monocular visual SLAM is fused with an inertial navigation system to position the vehicle. However, a monocular camera usually has a limited field of view, and the accuracy and continuity of monocular SLAM positioning can suffer when too few image features are detected or when parallax is insufficient.
Disclosure of Invention
The embodiment of the invention provides a method for determining a vehicle pose, a vehicle-mounted system, and a vehicle, in which transformed poses are obtained according to the external parameters of a plurality of monocular cameras of the vehicle and fused into the fusion pose of the vehicle, thereby improving the accuracy of the vehicle's pose estimation.
In view of this, a first aspect of the present invention provides a method of determining a vehicle pose, which may include:
acquiring first inertial navigation information and first automobile odometer information of a vehicle;
determining a first current estimated pose of the vehicle according to the first inertial navigation information and the first automobile odometer information;
acquiring external parameters of N monocular cameras of the vehicle, wherein N is an integer greater than 1;
fusing inertial navigation odometry with the visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras;
respectively carrying out coordinate transformation on the current poses of the N monocular cameras to obtain N transformation poses of the vehicle;
and obtaining the fusion pose of the vehicle according to the N conversion poses.
Alternatively, in some embodiments of the present invention,
the obtaining of the fusion pose of the vehicle according to the N transformed poses comprises:
determining a weight for each transformed pose according to the N transformed poses and a first current estimated pose of the vehicle;
and obtaining the fusion pose of the vehicle by adopting a weighted average method according to the weight of each conversion pose and the N conversion poses.
Alternatively, in some embodiments of the present invention,
the obtaining of the fusion pose of the vehicle according to the N transformed poses comprises:
determining the weight of each conversion pose according to a preset algorithm and the N conversion poses;
and obtaining the fusion pose of the vehicle by adopting a weighted average method according to the weight of each conversion pose and the N conversion poses.
Optionally, in some embodiments of the present invention, the method further includes:
and according to the external parameters of the N monocular cameras, fusing the output information of the inertial navigation odometer with the SLAM algorithm of each monocular camera to construct a visual feature map of the N monocular cameras.
Optionally, in some embodiments of the present invention, the method further includes:
and fusing the N monocular camera visual feature maps into a reference visual feature map according to the common vehicle track in the N monocular camera visual feature maps.
Optionally, in some embodiments of the present invention, the acquiring external parameters of N monocular cameras of the vehicle includes:
acquiring visual odometer information;
and determining the external parameters of the N monocular cameras of the vehicle according to the visual odometry information and the first current estimated pose of the vehicle.
Optionally, in some embodiments of the present invention, the method further includes:
acquiring second inertial navigation information and second automobile odometer information of the vehicle;
and determining a second current estimated pose of the vehicle according to the fused pose of the vehicle, second inertial navigation information of the vehicle and second automobile odometer information.
A second aspect of the present invention provides an in-vehicle system, which may include:
the acquisition module is used for acquiring first inertial navigation information and first automobile odometer information of the automobile; acquiring external parameters of N monocular cameras of the vehicle, wherein N is an integer greater than 1;
the processing module is used for determining a first current estimated pose of the vehicle according to the first inertial navigation information and the first automobile odometer information; fusing inertial navigation odometry with the visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras and the first current estimated pose so as to estimate the current poses of the N monocular cameras; respectively carrying out coordinate transformation on the current poses of the N monocular cameras to obtain N transformed poses of the vehicle; and obtaining the fusion pose of the vehicle according to the N transformed poses.
Alternatively, in some embodiments of the present invention,
the processing module is specifically configured to determine a weight of each of the conversion poses according to the N conversion poses and the first current estimated pose of the vehicle; and obtaining the fusion pose of the vehicle by adopting a weighted average method according to the weight of each conversion pose and the N conversion poses.
Alternatively, in some embodiments of the present invention,
the processing module is specifically configured to determine a weight of each conversion pose according to a preset algorithm and the N conversion poses; and obtaining the fusion pose of the vehicle by adopting a weighted average method according to the weight of each conversion pose and the N conversion poses.
Alternatively, in some embodiments of the present invention,
and the processing module is also used for fusing the output information of the inertial navigation odometer with the SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras to construct a visual feature map of the N monocular cameras.
Alternatively, in some embodiments of the present invention,
and the processing module is also used for fusing the N monocular camera visual characteristic maps into a reference visual characteristic map according to a common vehicle track in the N monocular camera visual characteristic maps.
Alternatively, in some embodiments of the present invention,
the acquisition module is specifically used for acquiring visual odometer information; and determining the external parameters of the N monocular cameras of the vehicle according to the visual odometry information and the first current estimated pose of the vehicle.
Alternatively, in some embodiments of the present invention,
the acquisition module is further used for acquiring second inertial navigation information and second automobile odometer information of the vehicle;
the processing module is further configured to determine a second current estimated pose of the vehicle according to the fused pose of the vehicle, second inertial navigation information of the vehicle, and second vehicle odometer information.
A third aspect of the invention provides a vehicle that may include an on-board system as described in the second aspect of the invention.
A fourth aspect of the present invention provides a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute a method of determining a vehicle pose disclosed in the first aspect of the embodiments of the present invention.
According to the technical scheme, the embodiment of the invention has the following advantages:
in the embodiment of the invention, first inertial navigation information and first automobile odometer information of a vehicle are obtained; a first current estimated pose of the vehicle is determined according to the first inertial navigation information and the first automobile odometer information; external parameters of N monocular cameras of the vehicle are acquired, where N is an integer greater than 1; inertial navigation odometry is fused with the visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras and the first current estimated pose so as to estimate the current poses of the N monocular cameras; coordinate transformation is respectively carried out on the current poses of the N monocular cameras to obtain N transformed poses of the vehicle; and the fusion pose of the vehicle is obtained according to the N transformed poses. Transformed poses can thus be obtained according to the external parameters of the plurality of monocular cameras of the vehicle and fused into the fusion pose of the vehicle, improving the accuracy of the vehicle's pose estimation.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and a person skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an embodiment of a method for determining the pose of a vehicle according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a work flow applied by the embodiment of the present invention;
FIG. 3 is a schematic diagram of an embodiment of an on-board system in accordance with an embodiment of the present invention;
FIG. 4 is a schematic view of an embodiment of a vehicle according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an embodiment of an on-board system in an embodiment of the invention.
Detailed Description
The embodiment of the invention provides a method for determining a vehicle pose, a vehicle-mounted system and a vehicle, which are used for obtaining a conversion pose according to external parameters of a plurality of monocular cameras of the vehicle, fusing to obtain a fused pose of the vehicle, and further improving the pose estimation precision of the vehicle.
In order to make the technical solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by a person skilled in the art based on the embodiments of the present invention shall fall within the protection scope of the present invention.
The following is a brief description of the terms involved in the embodiments of the present invention:
an inertial navigation system (INS, hereinafter referred to as inertial navigation) is an autonomous navigation system that neither depends on external information nor radiates energy to the outside, and includes an inertial measurement unit. It can operate in the air, on the ground, and even underwater. The basic working principle of inertial navigation is based on Newton's laws of mechanics: by measuring the acceleration of a carrier in an inertial reference frame, integrating it over time, and transforming the result into the navigation coordinate system, information such as velocity, yaw angle, and position in the navigation coordinate system can be obtained.
Automobile odometer: a device mounted on a vehicle for measuring distance traveled, such as a wheel pulse counter.
It should be noted that, in the embodiments of the present invention, the inertial navigation system and the automobile odometer are collectively referred to as the inertial navigation odometer.
Aiming at the defects of the prior art, the invention provides a scheme that improves vehicle pose estimation by fusing multiple monocular SLAM instances with inertial navigation, based on a multi-monocular-camera setup. The technical solution of the present invention is further described below by way of an embodiment. As shown in fig. 1, which is a schematic diagram of an embodiment of a method for determining a vehicle pose in an embodiment of the present invention, the method may include:
101. first inertial navigation information and first automobile odometer information of the vehicle are obtained.
The vehicle-mounted system acquires first inertial navigation information and first automobile odometer information of the vehicle.
For example, the vehicle may be equipped with sensors such as an Inertial Measurement Unit (IMU), a wheel pulse counter, etc., and these sensors may be used as a positioning module (e.g., a body odometer) of the vehicle to calculate a driving distance of the vehicle, and further, may position the vehicle. Fig. 2 is a schematic diagram of a work flow applied to the embodiment of fig. 1 of the present invention.
102. And determining a first current estimation pose of the vehicle according to the first inertial navigation information and the first automobile odometer information.
And the vehicle-mounted system pre-estimates a first current estimated pose of the vehicle under a global coordinate system according to the first inertial navigation information and the first vehicle odometer information.
The first inertial navigation information can include attitude, speed and position change information provided by integration, and can be fused with displacement information provided by an automobile odometer by using algorithms such as Kalman filtering and the like, so that a first current estimation pose of the vehicle under a global coordinate system can be determined.
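As a toy illustration of this fusion step (not the patent's actual filter), the scalar Kalman-style update below combines a displacement obtained by integrating inertial data with a displacement reported by the wheel odometer. The variance values are hypothetical and chosen only to make the example concrete.

```python
def fuse_displacement(ins_disp, ins_var, odo_disp, odo_var):
    """Fuse two displacement estimates by inverse-variance weighting,
    the scalar core of a Kalman measurement update."""
    gain = ins_var / (ins_var + odo_var)            # Kalman gain
    fused = ins_disp + gain * (odo_disp - ins_disp)  # corrected estimate
    fused_var = (1.0 - gain) * ins_var               # reduced uncertainty
    return fused, fused_var

# Hypothetical numbers: INS integration says the car moved 1.05 m
# (noisier), the wheel odometer says 1.00 m (more reliable here).
d, v = fuse_displacement(1.05, 0.04, 1.00, 0.01)
```

A full implementation would run this update per axis inside a state-space Kalman filter, as the text describes.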
103. And acquiring external parameters of N monocular cameras of the vehicle, wherein N is an integer greater than 1.
The obtaining of the external parameters of the N monocular cameras of the vehicle may include: acquiring visual odometer information; and determining the external parameters of the N monocular cameras of the vehicle according to the visual odometry information and the first current estimated pose of the vehicle. Furthermore, according to the visual odometer information and the first current estimated pose of the vehicle, determining the external parameters of the N monocular cameras of the vehicle by using an online external parameter calibration algorithm (such as a hand-eye calibration method).
It can be understood that the external parameters of a monocular camera represent the rotation and translation of that camera's coordinate system relative to the inertial coordinate system of the vehicle.
It should be noted that a monocular camera may be a single camera mounted on the front windshield or the rear-view mirror of the vehicle. As long as more than one monocular camera is installed on the vehicle (for example, front-view, rear-view, and side-view cameras), the technical scheme provided by the invention is applicable.
Optionally, the vehicle-mounted electronic device determines at least two initial frames from a real environment image sequence captured by a monocular camera of the vehicle, and obtains an actual moving distance of the vehicle between capturing moments of the at least two initial frames through a positioning module of the vehicle.
In the embodiment of the invention, the monocular camera can be a monocular camera arranged on a front windshield of the vehicle or a rearview mirror of the vehicle. The initial frame is an image frame for initializing monocular vision SLAM, and relative pose change of the monocular camera between the initial frames is determined by comparing difference of image positions of the same characteristic point in at least two initial frames, so that initialization of the monocular vision SLAM is completed. It will be appreciated that in order to increase the success rate of monocular vision SLAM initialization, the initial frame should meet certain requirements. For example, the initial frame may be set as two consecutive image frames with the number of feature points respectively greater than a preset threshold (e.g., 100).
Monocular visual SLAM initialization includes the following steps: 1. performing feature point matching between at least two initial frames; 2. based on the matched feature points, determining the inter-frame pose of the monocular camera from the initial frames using the epipolar geometry principle, i.e., the relative pose change (including translation and rotation) of the monocular camera between the initial frames; 3. determining the depth of the feature points in the images using the triangulation principle, based on the inter-frame pose of the monocular camera between the at least two initial frames; 4. constructing an initial SLAM map from the inter-frame pose of the monocular camera between the at least two initial frames and the depths of the feature points in the images.
Further, in the subsequent mapping process, the pose of the monocular camera corresponding to each image frame can be represented as its relative pose change with respect to the preceding image frame. In general, this relative pose change can be expressed as M times the relative pose change of the monocular camera between the initial frames, where M is a real number. Therefore, if the translation t of the monocular camera is directly normalized during initialization, so that the inter-frame pose and the recovered feature-point depths contain no scale information, the resulting SLAM map can only describe the relative pose changes of the monocular camera accurately and cannot reflect the true geographic scale of the environment.
Therefore, the finally constructed SLAM map comprises the corresponding relation between the pose of the monocular camera and the three-dimensional space position of the feature point, the corresponding relation takes the image in the real environment image sequence as the constraint, and when the monocular camera is positioned at a certain pose and shoots a certain feature point, the obtained image is a certain frame image in the real environment image sequence. That is, the SLAM map includes a pose sequence of the monocular camera and three-dimensional spatial positions of feature points included in each image in the real environment image sequence.
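The triangulation step above (step 3 of the initialization) can be sketched with the standard linear (DLT) method. This is a generic illustration, not the patent's implementation; the projection matrices and the 3-D point below are made-up values for a camera pair resembling two initial frames.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3-D point X with
    x1 ~ P1 @ X and x2 ~ P2 @ X from observations in two frames."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# First camera at the origin, second translated 1 unit along x
# (hypothetical initial-frame pair with known inter-frame pose).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

Note that with a normalized baseline the recovered depths are only defined up to scale, which is exactly the scale ambiguity discussed above.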
104. And fusing an inertial navigation mileage calculation method with the visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras and the first current estimation pose so as to estimate the current poses of the N monocular cameras.
The vehicle-mounted system can determine the current pose of each monocular camera according to the external parameters of the N monocular cameras and the first current estimated pose of the vehicle.
For example, for a monocular camera system such as camera 1, if the camera external parameter (the pose of the camera with respect to the vehicle body/inertial navigation system) is not calibrated, the vehicle-mounted system may calibrate the camera external parameter using an optimization method based on the pre-estimated poses of the visual odometer and the inertial odometer, i.e., the first current estimated pose.
If the camera external parameters are calibrated, the current pose of the camera can be estimated in real time using a currently mature visual SLAM algorithm for each monocular camera fused with the inertial navigation odometer (such as the monocular version of VI-ORB-SLAM2 or VINS), and a visual feature map is established. It can be understood that the other monocular cameras follow the same principle to determine their corresponding current poses.
105. And respectively carrying out coordinate transformation on the current poses of the N monocular cameras to obtain N transformation poses of the vehicle.
Then the current camera poses obtained by each monocular SLAM system are transformed according to the respective camera external parameters, yielding the N transformed poses of the vehicle in the global coordinate system.
It can be understood that each SLAM system computes the current pose of its camera in a visual global coordinate system. Using the camera external parameters, i.e., the relative pose of the camera with respect to the vehicle inertial coordinate system, a transformation matrix from the visual coordinate system to the vehicle coordinate system can be obtained; the transformed pose of the vehicle in the vehicle global coordinate system (i.e., the navigation coordinate system in this scheme) is then calculated from this transformation matrix and the current camera pose in the visual coordinate system.
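The coordinate transformation can be sketched with 4x4 homogeneous transforms. The mounting offset and camera pose below are hypothetical numbers, and the frame conventions (extrinsics expressed as the camera pose in the vehicle frame) are one common choice rather than the patent's stated convention.

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def vehicle_pose_from_camera(T_world_cam, T_vehicle_cam):
    """Given the camera pose in the (visual) world frame and the extrinsics
    (camera pose in the vehicle frame), return the vehicle pose in the
    world frame: T_world_vehicle = T_world_cam @ inv(T_vehicle_cam)."""
    return T_world_cam @ np.linalg.inv(T_vehicle_cam)

# Hypothetical setup: camera mounted 1.5 m ahead of the vehicle origin
# with no rotation; SLAM reports the camera 10 m along the world x axis.
T_vc = make_T(np.eye(3), [1.5, 0.0, 0.0])
T_wc = make_T(np.eye(3), [10.0, 0.0, 0.0])
T_wv = vehicle_pose_from_camera(T_wc, T_vc)   # vehicle sits at x = 8.5 m
```

Each of the N monocular SLAM systems would apply this with its own extrinsics, producing the N transformed vehicle poses.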
106. And obtaining the fusion pose of the vehicle according to the N conversion poses.
The obtaining of the fusion pose of the vehicle according to the N transformed poses comprises:
determining a weight for each transformed pose according to the N transformed poses and a first current estimated pose of the vehicle; obtaining a fusion pose of the vehicle by adopting a weighted average method according to the weight of each conversion pose and the N conversion poses;
or,
determining the weight of each transformed pose according to a preset algorithm and the N transformed poses; and obtaining the fusion pose of the vehicle by a weighted average method according to the weight of each transformed pose and the N transformed poses. If there is a time synchronization problem among the N poses, pose synchronization processing is performed before fusion, for example, pose extrapolation or interpolation using inertial navigation to obtain time-synchronized poses.
It can be understood that the vehicle-mounted system can calculate a pose estimation weight for each monocular camera system (reflecting its pose estimation accuracy) from its process parameters, such as the external parameter reliability, the residual between the visual odometry and inertial navigation pose estimates, the back-end optimization scale, and whether a closed loop is detected, and then obtain the fused current vehicle pose by the weighted average method. If a monocular SLAM fails at the current time, it is ignored, and if its process parameters are too poor, its weight may also be set to zero. The external parameter reliability is acquired when the external parameters of each monocular camera are acquired; the pose residual, back-end optimization scale, and closed-loop detection information are acquired during the fusion of each monocular SLAM with inertial navigation.
Exemplarily, suppose the transformed poses corresponding to the N cameras are A1, A2, A3, …, AN, with corresponding weights M1, M2, M3, …, MN whose sum is 1. The fusion pose of the vehicle is then:
A1*M1 + A2*M2 + A3*M3 + … + AN*MN.
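The weighted average above can be sketched for planar poses (x, y, yaw). This is an illustrative simplification with made-up poses and weights; averaging the yaw via unit vectors is one reasonable way to handle angle wrap-around, not a detail the patent specifies.

```python
import math

def fuse_poses(poses, weights):
    """Weighted average of planar poses (x, y, yaw in radians).
    A weight of 0 drops a failed monocular SLAM from the fusion;
    yaw is averaged via unit vectors to stay wrap-around safe."""
    total = sum(weights)
    if total == 0:
        raise ValueError("all pose sources failed")
    ws = [w / total for w in weights]            # renormalize to sum to 1
    x = sum(w * p[0] for w, p in zip(ws, poses))
    y = sum(w * p[1] for w, p in zip(ws, poses))
    s = sum(w * math.sin(p[2]) for w, p in zip(ws, poses))
    c = sum(w * math.cos(p[2]) for w, p in zip(ws, poses))
    return (x, y, math.atan2(s, c))

# Three transformed poses; the third SLAM failed, so its weight is 0.
poses = [(1.0, 2.0, 0.10), (1.2, 2.1, 0.12), (9.9, 9.9, 3.0)]
fused = fuse_poses(poses, [0.6, 0.4, 0.0])
```

A full 6-DoF version would average rotations on the manifold (e.g., quaternion averaging) rather than componentwise.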
optionally, in some embodiments of the present invention, the inertial navigation odometer output information is fused with the visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras, so as to construct N monocular camera visual feature maps. And further, fusing the N monocular camera visual feature maps into a reference visual feature map according to a common vehicle track in the N monocular camera visual feature maps.
Illustratively, to facilitate direct reuse of the visual feature maps created by each monocular camera system (e.g., for map matching relocation), the sets of visual feature maps may be kept separate. If the scene mapping requirement exists, the multiple sets of visual feature maps can be fused into the same visual feature map according to the common vehicle track.
Among them, it can be understood that the visual feature map in the embodiment of the present invention is a map composed of visual features (e.g., ORB (Oriented FAST and Rotated BRIEF) feature points) constructed by the SLAM algorithm, where FAST is a common visual corner extraction method and BRIEF is a common feature point descriptor.
Further, after the vehicle-mounted system obtains the fusion pose of the vehicle, the vehicle-mounted system can also obtain second inertial navigation information and second automobile odometer information of the vehicle; and determining a second current estimated pose of the vehicle according to the fused pose of the vehicle, second inertial navigation information of the vehicle and second automobile odometer information.
Illustratively, given the current pose, the next pose is estimated as x(k+1) = x(k) + dx(k), where dx(k) is derived from the inertial navigation and odometer outputs over the interval dt.
Here x comprises position and attitude vectors; the attitude change within dx(k) between x(k+1) and x(k) is obtained by integrating the gyroscope output (angular velocity) of the IMU over time, and the position change is obtained by multiplying the odometer output (distance traveled) by trigonometric functions of the attitude.
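The propagation step x(k+1) = x(k) + dx(k) can be sketched for the planar case: yaw advances by the integrated gyro rate, and position by the odometer distance resolved through the heading trigonometry. The midpoint-heading choice and the sample inputs are illustrative assumptions, not the patent's exact integration scheme.

```python
import math

def propagate(pose, yaw_rate, distance, dt):
    """One dead-reckoning step for a planar pose (x, y, yaw):
    integrate the gyro angular velocity for the attitude change and
    project the odometer distance through the heading for position."""
    x, y, yaw = pose
    yaw_new = yaw + yaw_rate * dt           # integrate angular velocity
    yaw_mid = yaw + 0.5 * yaw_rate * dt     # midpoint heading over the step
    return (x + distance * math.cos(yaw_mid),
            y + distance * math.sin(yaw_mid),
            yaw_new)

# Drive 1 m per step while turning at 0.1 rad/s, starting at the origin.
pose = (0.0, 0.0, 0.0)
for _ in range(3):
    pose = propagate(pose, yaw_rate=0.1, distance=1.0, dt=1.0)
```

In the described system this predicted pose is then corrected by the fusion pose at each iteration, bounding the drift.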
The vehicle-mounted system outputs the fusion pose of the vehicle and the multiple sets of independent visual feature maps built by the multi-camera system. The fusion pose is used for the next inertial navigation pose estimate, which in turn feeds the next fusion pose calculation; this is equivalent to fusing the outputs of the multi-monocular-camera SLAM systems to iteratively optimize each subsystem and reduce the positioning drift error.
In the embodiment of the invention, first inertial navigation information and first automobile odometer information of a vehicle are obtained; a first current estimated pose of the vehicle is determined according to the first inertial navigation information and the first automobile odometer information; external parameters of N monocular cameras of the vehicle are acquired, where N is an integer greater than 1; an inertial navigation mileage calculation method is fused with the visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras; coordinate transformation is respectively carried out on the current poses of the N monocular cameras to obtain N conversion poses of the vehicle; and the fusion pose of the vehicle is obtained according to the N conversion poses. Because the conversion poses are obtained from the external parameters of the vehicle's monocular cameras and then fused into the fusion pose of the vehicle, the pose estimation precision of the vehicle is improved. The limited field of view of a single monocular camera is overcome, and algorithm integration and porting from a single camera to an arbitrary multi-camera system are easy to realize on the basis of relatively mature monocular visual SLAM algorithms (e.g., the monocular version of ORB-SLAM2, VINS, etc.). The multiple cameras are not required to have a common viewing area, and no external-parameter calibration among the cameras is required.
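The coordinate transformation from a monocular camera pose to a conversion pose of the vehicle can be illustrated with homogeneous 4x4 transforms. A minimal sketch, assuming the extrinsic parameter is given as the camera pose in the vehicle frame (the function name is hypothetical):

```python
import numpy as np

def camera_to_vehicle_pose(T_world_cam, T_vehicle_cam):
    """Turn one monocular SLAM pose into a conversion pose of the vehicle.
    T_world_cam: 4x4 camera pose in the world frame (visual SLAM output).
    T_vehicle_cam: 4x4 extrinsic parameter, the camera pose in the vehicle frame.
    Since T_world_cam = T_world_vehicle @ T_vehicle_cam, the vehicle pose is
    T_world_vehicle = T_world_cam @ inv(T_vehicle_cam)."""
    return T_world_cam @ np.linalg.inv(T_vehicle_cam)
```

Applying this per camera yields the N conversion poses; because each camera carries its own extrinsic, no common viewing area or inter-camera calibration is needed.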
In other words, the embodiment provides a system design that fuses parallel multi-monocular-camera visual SLAM algorithms with an inertial navigation mileage calculation method on a vehicle-mounted system; weights are calculated from the process parameters of each monocular camera SLAM subsystem, and a fusion pose is calculated from the weights; the current fusion pose is used for the next pose estimation, so each monocular SLAM subsystem can be iteratively optimized to reduce its drift error.
Fig. 3 is a schematic diagram of an embodiment of an in-vehicle system according to an embodiment of the present invention. The system may include:
the acquiring module 301 is configured to acquire first inertial navigation information and first automobile odometer information of a vehicle; acquiring external parameters of N monocular cameras of the vehicle, wherein N is an integer greater than 1;
a processing module 302, configured to determine a first current estimated pose of the vehicle according to the first inertial navigation information and the first vehicle odometer information; fuse an inertial navigation mileage calculation method with a visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras; respectively carry out coordinate transformation on the current poses of the N monocular cameras to obtain N conversion poses of the vehicle; and obtain the fusion pose of the vehicle according to the N conversion poses.
Optionally, in some embodiments of the present invention,
a processing module 302, configured to determine a weight of each conversion pose according to the N conversion poses and the first current estimated pose of the vehicle; and obtain the fusion pose of the vehicle by adopting a weighted average method according to the weight of each conversion pose and the N conversion poses.
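A weighted-average fusion of this kind can be sketched as follows. The inverse-distance weighting (conversion poses closer to the first current estimated pose receive larger weights) is one plausible choice, not necessarily the patent's; planar (x, y, yaw) poses are assumed, and the yaw average uses sin/cos to survive the +/-pi wrap-around:

```python
import numpy as np

def fuse_poses(conversion_poses, estimated_pose):
    """Weighted average of N conversion poses (x, y, yaw) of the vehicle.
    Weighting (an assumption): the inverse of each pose's distance to the
    current estimated pose, so poses agreeing with the inertial estimate
    count more."""
    poses = np.asarray(conversion_poses, dtype=float)
    d = np.linalg.norm(poses[:, :2] - estimated_pose[:2], axis=1)
    w = 1.0 / (d + 1e-6)                 # small epsilon avoids division by zero
    w /= w.sum()                         # normalize so the weights sum to 1
    xy = w @ poses[:, :2]
    yaw = np.arctan2(w @ np.sin(poses[:, 2]), w @ np.cos(poses[:, 2]))
    return np.array([xy[0], xy[1], yaw])
```

When all conversion poses agree, the fusion pose reproduces them exactly; when they disagree, outliers far from the inertial estimate are down-weighted.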
Optionally, in some embodiments of the present invention,
a processing module 302, configured to determine a weight of each conversion pose according to a preset algorithm and the N conversion poses; and obtaining the fusion pose of the vehicle by adopting a weighted average method according to the weight of each conversion pose and the N conversion poses.
Optionally, in some embodiments of the present invention,
the processing module 302 is further configured to fuse the output information of the inertial navigation odometer with the SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras, and construct the visual feature maps of the N monocular cameras.
Optionally, in some embodiments of the present invention,
the processing module 302 is further configured to fuse the N monocular camera visual feature maps into a reference visual feature map according to a common vehicle trajectory in the N monocular camera visual feature maps.
Optionally, in some embodiments of the present invention,
an obtaining module 301, specifically configured to obtain visual odometer information; and determining the external parameters of the N monocular cameras of the vehicle according to the visual odometry information and the first current estimated pose of the vehicle.
Optionally, in some embodiments of the present invention,
the obtaining module 301 is further configured to obtain second inertial navigation information and second vehicle odometer information of the vehicle;
the processing module 302 is further configured to determine a second current estimated pose of the vehicle according to the fusion pose of the vehicle, the second inertial navigation information, and the second vehicle odometer information.
Fig. 4 is a schematic diagram of an embodiment of a vehicle according to an embodiment of the present invention, where the vehicle includes an on-board system shown in fig. 3.
Fig. 5 is a schematic diagram of an embodiment of an in-vehicle system according to an embodiment of the present invention. The system may include:
a memory 501 in which executable program code is stored;
a processor 502 coupled to a memory 501;
a vehicle positioning module 503;
the vehicle positioning module 503 acquires first inertial navigation information and first vehicle odometer information of the vehicle and transmits the first inertial navigation information and the first vehicle odometer information to the processor 502, and the processor 502 calls the executable program code stored in the memory 501 to execute any one of the methods for determining the vehicle pose shown in fig. 1.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be realized in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the invention are produced, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (15)

1. A method of determining a pose of a vehicle, comprising:
acquiring first inertial navigation information and first automobile odometer information of a vehicle, wherein the first inertial navigation information comprises the attitude of the vehicle, the speed of the vehicle and the position change information of the vehicle;
determining a first current estimated pose of the vehicle according to the first inertial navigation information and the first automobile odometer information;
acquiring external parameters of N monocular cameras of the vehicle, wherein N is an integer greater than 1;
fusing an inertial navigation mileage calculation method with a visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras and the first current estimated pose so as to estimate the current poses of the N monocular cameras;
respectively carrying out coordinate transformation on the current poses of the N monocular cameras to obtain N conversion poses of the vehicle;
and obtaining the fusion pose of the vehicle according to the N conversion poses.
2. The method according to claim 1, wherein the obtaining the fusion pose of the vehicle according to the N conversion poses comprises:
determining a weight of each conversion pose according to the N conversion poses and the first current estimated pose of the vehicle;
and obtaining the fusion pose of the vehicle by adopting a weighted average method according to the weight of each conversion pose and the N conversion poses.
3. The method according to claim 1, wherein the obtaining the fusion pose of the vehicle according to the N conversion poses comprises:
determining the weight of each conversion pose according to a preset algorithm and the N conversion poses, wherein the preset algorithm comprises the weights corresponding to the N conversion poses respectively;
and obtaining the fusion pose of the vehicle by adopting a weighted average method according to the weight of each conversion pose and the N conversion poses.
4. The method according to any one of claims 1-3, further comprising:
and according to the external parameters of the N monocular cameras, fusing the output information of the inertial navigation odometer with the SLAM algorithm of each monocular camera to construct the visual feature maps of the N monocular cameras.
5. The method of claim 4, further comprising:
and fusing the N monocular camera visual feature maps into a reference visual feature map according to the common vehicle track in the N monocular camera visual feature maps.
6. The method according to any one of claims 1-3, wherein said obtaining N monocular camera external parameters of the vehicle comprises:
acquiring visual odometer information;
and determining the external parameters of the N monocular cameras of the vehicle according to the visual odometry information and the first current estimated pose of the vehicle.
7. The method according to any one of claims 1-3, further comprising:
acquiring second inertial navigation information and second automobile odometer information of the vehicle;
and determining a second current estimated pose of the vehicle according to the fusion pose of the vehicle, the second inertial navigation information and the second automobile odometer information.
8. An in-vehicle system, comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring first inertial navigation information and first automobile odometer information of a vehicle, and the first inertial navigation information comprises the attitude of the vehicle, the speed of the vehicle and the position change information of the vehicle; acquiring external parameters of N monocular cameras of the vehicle, wherein N is an integer greater than 1;
the processing module is used for determining a first current estimated pose of the vehicle according to the first inertial navigation information and the first automobile odometer information; fusing an inertial navigation mileage calculation method with a visual SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras and the first current estimated pose so as to estimate the current poses of the N monocular cameras; respectively carrying out coordinate transformation on the current poses of the N monocular cameras to obtain N conversion poses of the vehicle; and obtaining the fusion pose of the vehicle according to the N conversion poses.
9. The in-vehicle system according to claim 8,
the processing module is specifically configured to determine a weight of each of the conversion poses according to the N conversion poses and the first current estimated pose of the vehicle; and obtaining the fusion pose of the vehicle by adopting a weighted average method according to the weight of each conversion pose and the N conversion poses.
10. The in-vehicle system according to claim 8,
the processing module is specifically configured to determine a weight of each conversion pose according to a preset algorithm and the N conversion poses, where the preset algorithm includes weights corresponding to the N conversion poses respectively; and obtaining the fusion pose of the vehicle by adopting a weighted average method according to the weight of each conversion pose and the N conversion poses.
11. The on-board system according to any one of claims 8-10,
and the processing module is also used for fusing the output information of the inertial navigation odometer with the SLAM algorithm of each monocular camera according to the external parameters of the N monocular cameras to construct a visual feature map of the N monocular cameras.
12. The in-vehicle system according to claim 11,
and the processing module is also used for fusing the N monocular camera visual characteristic maps into a reference visual characteristic map according to a common vehicle track in the N monocular camera visual characteristic maps.
13. The on-board system according to any one of claims 8-10,
the acquisition module is specifically used for acquiring visual odometer information; and determining the external parameters of the N monocular cameras of the vehicle according to the visual odometry information and the first current estimated pose of the vehicle.
14. The on-board system according to any one of claims 8-10,
the acquisition module is further used for acquiring second inertial navigation information and second automobile odometer information of the vehicle;
the processing module is further configured to determine a second current estimated pose of the vehicle according to the fusion pose of the vehicle, the second inertial navigation information and the second automobile odometer information.
15. A vehicle, characterized by comprising an on-board system according to any of claims 8-14.
CN201910576519.7A 2019-06-28 2019-06-28 Method for determining vehicle pose, vehicle-mounted system and vehicle Active CN110207714B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910576519.7A CN110207714B (en) 2019-06-28 2019-06-28 Method for determining vehicle pose, vehicle-mounted system and vehicle


Publications (2)

Publication Number Publication Date
CN110207714A CN110207714A (en) 2019-09-06
CN110207714B true CN110207714B (en) 2021-01-19

Family

ID=67795335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910576519.7A Active CN110207714B (en) 2019-06-28 2019-06-28 Method for determining vehicle pose, vehicle-mounted system and vehicle

Country Status (1)

Country Link
CN (1) CN110207714B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12165354B2 (en) 2021-03-10 2024-12-10 Beijing Tusen Zhitu Technology Co., Ltd. Pose estimation method and device, related equipment and storage medium

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112577479B (en) * 2019-09-27 2024-04-12 北京魔门塔科技有限公司 Multi-sensor fusion vehicle positioning method and device based on map element data
CN111127584A (en) * 2019-11-19 2020-05-08 奇点汽车研发中心有限公司 Method and device for establishing visual map, electronic equipment and storage medium
CN113340313B (en) * 2020-02-18 2024-04-16 北京四维图新科技股份有限公司 Method and device for determining navigation map parameters
CN114199275B (en) * 2020-09-18 2024-06-21 阿里巴巴集团控股有限公司 Method and device for determining parameters of sensor
CN114693781B (en) * 2020-12-25 2025-09-02 华为终端有限公司 Positioning method and electronic equipment
CN112965076B (en) * 2021-01-28 2024-05-24 上海思岚科技有限公司 Multi-radar positioning system and method for robot
CN113074726A (en) * 2021-03-16 2021-07-06 深圳市慧鲤科技有限公司 Pose determination method and device, electronic equipment and storage medium
CN113203428A (en) * 2021-05-28 2021-08-03 拉扎斯网络科技(上海)有限公司 Mileage counting device, data counting method based on same and interface
CN113390411B (en) * 2021-06-10 2022-08-09 中国北方车辆研究所 Foot type robot navigation and positioning method based on variable configuration sensing device
CN113899363B (en) 2021-09-29 2022-10-21 北京百度网讯科技有限公司 Vehicle positioning method and device and automatic driving vehicle
CN113884098B (en) * 2021-10-15 2024-01-23 上海师范大学 An iterative Kalman filter positioning method based on embodied models
CN114001742B (en) * 2021-10-21 2024-06-04 广州小鹏自动驾驶科技有限公司 Vehicle positioning method, device, vehicle and readable storage medium
CN114170320B (en) * 2021-10-29 2022-10-28 广西大学 Automatic positioning and working condition self-adaption method of pile driver based on multi-sensor fusion
CN114627152B (en) * 2022-02-18 2025-07-08 上海欧菲智能车联科技有限公司 Reversing auxiliary method and device, electronic equipment and storage medium
CN114485654A (en) * 2022-02-24 2022-05-13 中汽创智科技有限公司 Multi-sensor fusion positioning method and device based on high-precision map
CN118505756A (en) * 2024-07-18 2024-08-16 比亚迪股份有限公司 Pose generation method and device, electronic equipment, storage medium, product and vehicle

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080167814A1 (en) * 2006-12-01 2008-07-10 Supun Samarasekera Unified framework for precise vision-aided navigation
CN104748727A (en) * 2013-12-31 2015-07-01 中国科学院沈阳自动化研究所 Array type high-speed visual odometer and realization method thereof
CN106406338A (en) * 2016-04-14 2017-02-15 中山大学 Omnidirectional mobile robot autonomous navigation apparatus and method based on laser range finder
CN106708037A (en) * 2016-12-05 2017-05-24 北京贝虎机器人技术有限公司 Autonomous mobile equipment positioning method and device, and autonomous mobile equipment
CN107229063A (en) * 2017-06-26 2017-10-03 奇瑞汽车股份有限公司 A kind of pilotless automobile navigation and positioning accuracy antidote merged based on GNSS and visual odometry
CN107504969A (en) * 2017-07-24 2017-12-22 哈尔滨理工大学 Four rotor-wing indoor air navigation aids of view-based access control model and inertia combination
CN109887032A (en) * 2019-02-22 2019-06-14 广州小鹏汽车科技有限公司 A kind of vehicle positioning method and system based on monocular vision SLAM

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102768042B (en) * 2012-07-11 2015-06-24 清华大学 Visual-inertial combined navigation method
CN103940434B (en) * 2014-04-01 2017-12-15 西安交通大学 Real-time lane detection system based on monocular vision and inertial navigation unit
CN109387198B (en) * 2017-08-03 2022-07-15 北京自动化控制设备研究所 Inertia/vision milemeter combined navigation method based on sequential detection
CN108759815B (en) * 2018-04-28 2022-11-15 温州大学激光与光电智能制造研究院 Information fusion integrated navigation method used in global visual positioning method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Visual-Aided INS Using Converted Measurements;Turner J. Montgomery, Meir Pachter;《IFAC-PapersOnLine》;20161231;全文 *
单目视觉惯性融合方法在无人机位姿估计中的应用;茹祥宇等;《控制与信息技术》;20181231;全文 *
基于嵌入式并行处理的视觉惯导SLAM算法研究;张建越;《中国优秀硕士学位论文全文数据库 信息科技辑》;20190115(第01期);全文 *


Also Published As

Publication number Publication date
CN110207714A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN110207714B (en) Method for determining vehicle pose, vehicle-mounted system and vehicle
CN109887032B (en) Monocular vision SLAM-based vehicle positioning method and system
US10436884B2 (en) Calibration of laser and vision sensors
EP3378033B1 (en) Systems and methods for correcting erroneous depth information
US11205283B2 (en) Camera auto-calibration with gyroscope
CN110411457B (en) Positioning method, system, terminal and storage medium based on stroke perception and vision fusion
KR102219843B1 (en) Estimating location method and apparatus for autonomous driving
US20190033867A1 (en) Systems and methods for determining a vehicle position
CN103591955B (en) Integrated navigation system
WO2020253260A1 (en) Time synchronization processing method, electronic apparatus, and storage medium
JP2019074505A (en) Position estimation method, device, and computer program
CN112074875A (en) Method and system for constructing group optimization depth information of 3D characteristic graph
EP2175237B1 (en) System and methods for image-based navigation using line features matching
US20170017839A1 (en) Object detection apparatus, object detection method, and mobile robot
KR101985344B1 (en) Sliding windows based structure-less localization method using inertial and single optical sensor, recording medium and device for performing the method
CN108603933B (en) System and method for fusing sensor outputs with different resolutions
CN109443348A (en) It is a kind of based on the underground garage warehouse compartment tracking for looking around vision and inertial navigation fusion
CN114638897B (en) Multi-camera system initialization method, system and device based on non-overlapping views
CN110458885B (en) Positioning system and mobile terminal based on stroke perception and vision fusion
CN113587934A (en) Robot, indoor positioning method and device and readable storage medium
CN111308415A (en) Online pose estimation method and device based on time delay
CN112204344A (en) Pose acquisition method and system and movable platform
CN108322698B (en) System and method based on fusion of multiple cameras and inertial measurement unit
CN112528719A (en) Estimation device, estimation method, and storage medium
CN113034538B (en) Pose Tracking Method and Device for Visual Inertial Navigation Equipment, and Visual Inertial Navigation Equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210106

Address after: Room 46, room 406, No.1, Yichuang street, Zhongxin knowledge city, Huangpu District, Guangzhou City, Guangdong Province 510000

Applicant after: Guangzhou Xiaopeng Automatic Driving Technology Co.,Ltd.

Address before: Room 245, No. 333, jiufo Jianshe Road, Zhongxin Guangzhou Knowledge City, Guangzhou City, Guangdong Province

Applicant before: GUANGZHOU XIAOPENG MOTORS TECHNOLOGY Co.,Ltd.

GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190906

Assignee: GUANGZHOU XIAOPENG MOTORS TECHNOLOGY Co.,Ltd.

Assignor: Guangzhou Xiaopeng Automatic Driving Technology Co.,Ltd.

Contract record no.: X2021440000219

Denomination of invention: A method for determining vehicle posture, on-board system and vehicle

Granted publication date: 20210119

License type: Common License

Record date: 20211220

TR01 Transfer of patent right

Effective date of registration: 20240228

Address after: 510000 No.8 Songgang street, Cencun, Tianhe District, Guangzhou City, Guangdong Province

Patentee after: GUANGZHOU XIAOPENG MOTORS TECHNOLOGY Co.,Ltd.

Country or region after: China

Address before: Room 46, room 406, No.1, Yichuang street, Zhongxin knowledge city, Huangpu District, Guangzhou City, Guangdong Province 510000

Patentee before: Guangzhou Xiaopeng Automatic Driving Technology Co.,Ltd.

Country or region before: China
