
CN114199275B - Method and device for determining parameters of sensor - Google Patents

Method and device for determining parameters of sensor

Info

Publication number: CN114199275B
Authority: CN (China)
Prior art keywords: frame, value, image, determining, parameter
Legal status: Active (granted)
Application number: CN202010985174.3A
Other languages: Chinese (zh)
Other versions: CN114199275A
Inventors: 韩冰 (Han Bing), 张涛 (Zhang Tao), 边威 (Bian Wei), 黄帅 (Huang Shuai)
Assignee (current and original): Alibaba Group Holding Ltd
Application filed by Alibaba Group Holding Ltd
Priority to CN202010985174.3A
Publication of CN114199275A
Application granted
Publication of CN114199275B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying
    • G01C11/04 Interpretation of pictures
    • G01C11/06 Interpretation of pictures by comparison of two or more pictures of the same area
    • G01C21/00 Navigation; navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/10 Navigation by using measurements of speed or acceleration
    • G01C21/12 Dead reckoning, i.e. measurements executed aboard the object being navigated
    • G01C21/16 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Inertial navigation combined with non-inertial navigation instruments
    • G01C25/00 Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01C25/005 Initial alignment, calibration or starting-up of inertial devices
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/01 Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/13 Receivers
    • G01S19/23 Testing, monitoring, correcting or calibrating of receiver elements
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 The satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS, GLONASS or GALILEO
    • G01S19/42 Determining position
    • G01S19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47 The supplementary measurement being an inertial measurement, e.g. tightly coupled inertial

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Manufacturing & Machinery (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Navigation (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a method and apparatus for determining parameters of a sensor. The method comprises the following steps: acquiring images, inertial navigation measurement data and GNSS positioning data within a set time period; determining the normalized pose of each frame of image; obtaining an initial value of the scale of the vision sensor and an initial value of a second external parameter of the vision sensor relative to a world coordinate system based on the normalized poses, the GNSS positioning data and the inertial navigation measurement data; obtaining optimized values of the second external parameter and the scale through optimization based on the normalized poses, the GNSS positioning data, the inertial navigation measurement data and a first external parameter between the vision sensor and the inertial measurement unit; and obtaining a target value of the scale of the vision sensor, a target value of the second external parameter and a target value of the speed of each frame of image through global optimization based on the normalized position of each frame of image, the inertial navigation measurement data and the GNSS positioning data. The working parameters of the sensors in a multi-sensor fusion positioning system can thereby be initialized quickly and accurately.

Description

Method and device for determining parameters of sensor
Technical Field
The disclosure relates to the field of positioning technology, and in particular, to a method and a device for determining parameters of a sensor.
Background
A multi-sensor fusion positioning system offers high positioning precision at low cost and is one of the solutions for achieving high-precision positioning. During the system initialization stage, the precision of the obtained working parameters of the sensors plays a vital role in the precision of the positioning results output by the system.
Disclosure of Invention
In view of the above, the present disclosure provides a method and apparatus for determining parameters of a sensor that overcome, or at least partially solve, the above-mentioned problems.
In a first aspect, an embodiment of the present disclosure provides a method for determining a parameter of a sensor, including:
acquiring two or more frames of images captured by a vision sensor within a set time period, together with the inertial navigation measurement data output by an inertial measurement unit and the GNSS positioning data output by a GNSS receiver corresponding to the capture time of each frame of image;
determining the normalized position and posture of each frame of image;
Based on the normalized position and posture of each frame of image, GNSS positioning data and inertial navigation measurement data, obtaining an initial value of the scale of the vision sensor and an initial value of a second external parameter of the vision sensor relative to a world coordinate system;
Optimizing the initial value of the scale and the initial value of the second external parameter based on the normalized position and posture of each frame of image, GNSS positioning data, inertial navigation measurement data and the first external parameter between the visual sensor and the inertial measurement unit to obtain a second external parameter optimized value and a scale optimized value;
and performing global optimization on the optimized values of the scale and the second external parameter based on the normalized position of each frame of image, the inertial navigation measurement data and the GNSS positioning data, to obtain a target value of the scale of the vision sensor, a target value of the second external parameter and a target value of the speed of each frame of image.
In some optional embodiments, the obtaining the initial value of the scale of the vision sensor and the initial value of the second external parameter of the vision sensor relative to the world coordinate system based on the normalized position and posture of each frame of image, the GNSS positioning data and the inertial navigation measurement data specifically includes:
based on the normalized position of each frame of image and GNSS positioning data, obtaining an initial value of the scale of the vision sensor;
Based on the normalized position and posture of each frame of image, GNSS positioning data and inertial navigation measurement data, an initial value of the vision sensor relative to a second external parameter of the world coordinate system is obtained.
In some optional embodiments, the obtaining the initial value of the scale of the vision sensor based on the normalized position of each frame of image and the GNSS positioning data specifically includes:
and optimizing the set value of the scale of the visual sensor according to the normalized position of each frame of image and the positioning position in the GNSS positioning data to obtain the initial value of the scale of the visual sensor.
In some optional embodiments, the obtaining the initial value of the second external parameter of the vision sensor relative to the world coordinate system based on the normalized position and posture of each frame of image, the GNSS positioning data and the inertial navigation measurement data specifically includes:
determining a first position parameter mean value according to the normalized position of each frame of image, and determining a second position parameter mean value according to the positioning position in the GNSS positioning data corresponding to each frame of image;
Determining three-axis acceleration under a visual coordinate system according to the gesture of each frame of image, the three-axis acceleration in inertial navigation measurement data and the first external parameter, obtaining a first acceleration parameter mean value according to the three-axis acceleration under the visual coordinate system, and determining a second acceleration parameter mean value under a world coordinate system according to the gravity acceleration and the GNSS speed in GNSS positioning data corresponding to each frame of image;
and determining an initial value of the vision sensor relative to a second external parameter of the world coordinate system according to the first position parameter mean value, the second position parameter mean value, the first acceleration parameter mean value and the second acceleration parameter mean value.
In some optional embodiments, the determining the initial value of the vision sensor relative to the second external parameter of the world coordinate system according to the first position parameter mean value, the second position parameter mean value, the first acceleration parameter mean value and the second acceleration parameter mean value specifically includes:
determining an initial rotation value of a second external parameter according to the first position parameter mean value and the second position parameter mean value;
rotating the first acceleration parameter mean value according to the initial rotation value of the second external parameter to obtain a third acceleration parameter mean value;
Determining a first projection of the second position parameter mean on a plane with the third acceleration parameter mean as a normal line and a second projection of the second position parameter mean on the plane with the second acceleration parameter mean as a normal line;
determining a re-rotation value of the second external parameter according to the first projection and the second projection;
And determining the initial value of the second external parameter according to the initial rotation value and the re-rotation value of the second external parameter.
In some optional embodiments, the optimizing the initial value of the scale and the initial value of the second external parameter based on the normalized position and posture of each frame of image, the GNSS positioning data, the inertial navigation measurement data, and the first external parameter between the vision sensor and the inertial measurement unit, to obtain the optimized values of the second external parameter and the scale, specifically includes:
Determining the position of the visual sensor under the world coordinate system according to the normalized position of each frame of image, the initial value of the scale of the visual sensor and the initial value of the second external parameter of the visual sensor relative to the world coordinate system, and determining a first position residual error according to the position of the visual sensor under the world coordinate system of each frame of image and the positioning position in GNSS positioning data;
Determining a first acceleration according to the gravity acceleration and the GNSS speed in the GNSS positioning data corresponding to each frame of image, determining a second acceleration according to the posture of each frame of image, the triaxial acceleration in the inertial navigation measurement data and the initial values of the first external parameter and the second external parameter, and taking the difference value between the first acceleration and the second acceleration as an acceleration residual;
and iteratively optimizing the sum of the first position residual and the acceleration residual to obtain the optimized value of the second external parameter and the optimized value of the scale.
In some optional embodiments, the performing global optimization on the optimized values of the scale and the second external parameter based on the normalized position of each frame of image, the inertial navigation measurement data and the GNSS positioning data, to obtain a target value of the scale of the vision sensor, a target value of the second external parameter and a target value of the speed of each frame of image, specifically includes:
Determining the position of the visual sensor under the world coordinate system according to the normalized position of each frame of image, the optimized value of the scale of the visual sensor and the optimized value of the second external parameter of the visual sensor relative to the world coordinate system, and determining a second position residual error according to the position of the visual sensor under the world coordinate system of each frame of image and the positioning position in the GNSS positioning data;
determining a position pre-integration residual error according to inertial navigation position pre-integration in inertial navigation measurement data of each frame of image, position parameters of inertial navigation relative to a global positioning system, the first external parameters and gravity acceleration;
determining a speed pre-integration residual error according to the speed initial value of each frame of image, the inertial navigation speed pre-integration in inertial navigation measurement data, the first external parameter and the gravity acceleration;
and iteratively optimizing the sum of the position pre-integration residual, the speed pre-integration residual and the second position residual to obtain the target value of the scale of the vision sensor, the target value of the second external parameter and the target value of the speed of each frame of image.
In some optional embodiments, the method further comprises:
determining the posture of the inertial measurement unit according to the posture of each frame of image and the first external parameter;
and optimizing an initial value of the gyroscope zero bias of the inertial measurement unit based on the attitude of the inertial measurement unit for each frame of image, the angular-velocity pre-integration, and the Jacobian of the angular-velocity pre-integration with respect to the gyroscope zero bias, to obtain a target value of the gyroscope zero bias.
In some optional embodiments, the determining the normalized position and pose of each frame of image specifically includes:
And determining the normalized relative position and relative posture of each frame of image relative to the first frame of image by using a characteristic point matching and epipolar geometry method.
In some optional embodiments, the acquiring of the two or more frames of images captured by the vision sensor within the set time period, the inertial navigation measurement data output by the inertial measurement unit corresponding to the capture time of each frame of image, and the GNSS positioning data output by the GNSS receiver specifically includes:
acquiring two or more frames of images captured by the vision sensor within the set time period, the inertial navigation measurement data output by the inertial measurement unit, and the GNSS positioning data output by the GNSS receiver;
using the capture time of each frame of image as the reference, interpolating the inertial navigation measurement data and the GNSS positioning data to determine the inertial navigation measurement data and the GNSS positioning data corresponding to each frame of image.
In a second aspect, embodiments of the present disclosure provide a parameter determining apparatus of a sensor, including:
the acquisition module is used for acquiring two or more frames of images captured by the vision sensor within a set time period, the inertial navigation measurement data output by the inertial measurement unit corresponding to the capture time of each frame of image, and the GNSS positioning data output by the GNSS receiver;
the determining module is used for determining the normalized position and posture of each frame of image;
The initial optimization module is used for obtaining an initial value of the scale of the vision sensor and an initial value of a second external parameter of the vision sensor relative to the world coordinate system based on the normalized position and the posture of each frame of image, GNSS positioning data and inertial navigation measurement data;
The local optimization module is used for optimizing the initial value of the scale and the initial value of the second external parameter based on the normalized position and posture of each frame of image, GNSS positioning data, inertial navigation measurement data and the first external parameter between the visual sensor and the inertial measurement unit to obtain a second external parameter optimized value and a scale optimized value;
and the global optimization module is used for performing global optimization on the optimized values of the scale and the second external parameter based on the normalized position of each frame of image, the inertial navigation measurement data and the GNSS positioning data, to obtain the target value of the scale of the vision sensor, the target value of the second external parameter and the target value of the speed of each frame of image.
In a third aspect, embodiments of the present disclosure provide a computer-readable storage medium having stored thereon computer instructions that, when executed by a processor, implement the above-described method of determining parameters of a sensor.
In a fourth aspect, embodiments of the present disclosure provide a server, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor implements the above method for determining parameters of a sensor when executing the program.
The method for determining parameters of a sensor provided by the embodiments of the present disclosure acquires two or more frames of images captured by a vision sensor within a set time period, the inertial navigation measurement data output by an inertial measurement unit corresponding to the capture time of each frame of image, and the GNSS positioning data output by a GNSS receiver, and performs the following steps: determining the normalized position and posture of each frame of image; obtaining an initial value of the scale of the vision sensor and an initial value of a second external parameter of the vision sensor relative to a world coordinate system based on the normalized position and posture of each frame of image, the GNSS positioning data and the inertial navigation measurement data; optimizing the initial values of the scale and of the second external parameter based on the normalized position and posture of each frame of image, the GNSS positioning data, the inertial navigation measurement data and the first external parameter between the vision sensor and the inertial measurement unit, to obtain optimized values of the second external parameter and of the scale; and performing global optimization on the optimized values of the scale and the second external parameter based on the normalized position of each frame of image, the inertial navigation measurement data and the GNSS positioning data, to obtain the target value of the scale of the vision sensor, the target value of the second external parameter and the target value of the speed of each frame of image. The beneficial effects of this technical scheme include at least:
(1) The initial values of the scale of the vision sensor and of the second external parameter of the vision sensor relative to the world coordinate system are reasonably determined; these initial values are then refined by local optimization and refined again by global optimization, so the final parameter determination result has high precision.
(2) The determination of the sensors' working parameters is completed in a single pass; that is, sensor initialization is completed directly rather than by first performing VIO fusion and aligning the VIO result with the GNSS only after that result has been verified to be reasonable, which shortens the initialization time.
Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the disclosure. The objectives and other advantages of the disclosure will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the present disclosure is described in further detail below through the accompanying drawings and examples.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure, without limitation to the disclosure. In the drawings:
FIG. 1 is a flow chart of a method for determining parameters of a sensor in accordance with a first embodiment of the present disclosure;
FIG. 2 is a flowchart showing the implementation of step S13 in FIG. 1;
FIG. 3 is a flowchart showing the implementation of step S14 in FIG. 1;
FIG. 4 is an exemplary diagram of a method of determining parameters of a sensor in an embodiment of the present disclosure;
FIG. 5 is a flowchart of a specific implementation of a method for determining a target value of zero bias of a gyroscope;
FIG. 6 is a flowchart of a specific implementation of a method for determining initial values of a second external parameter in a second embodiment of the disclosure;
FIG. 7 is a flowchart showing the implementation of step S64 in FIG. 6;
Fig. 8 is a schematic structural view of a parameter determining apparatus of a sensor in an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In order to solve the problems of low initialization precision and long time consumption of a multi-sensor fusion positioning system in the prior art, the embodiment of the disclosure provides a method and a device for determining parameters of a sensor, which can quickly and accurately initialize working parameters of the sensor in the multi-sensor fusion positioning system.
The multi-sensor fusion positioning system needs to be initialized after each start, and working parameters of each sensor are determined and used for subsequent positioning.
First, two or more frames of images captured by the vision sensor within a set time period are acquired, together with the inertial navigation measurement data output by the inertial measurement unit and the GNSS positioning data output by the GNSS receiver corresponding to the capture time of each frame of image.
In one embodiment, this may include acquiring two or more frames of images captured by the vision sensor within a set time period, the inertial navigation measurement data output by the inertial measurement unit, and the GNSS positioning data output by the GNSS receiver; then, using the capture time of each frame of image as the reference, interpolating the inertial navigation measurement data and the GNSS positioning data to determine the inertial navigation measurement data and GNSS positioning data corresponding to each frame of image.
Specifically, the set time period may be a window of 2-10 s at the initial driving stage of the vehicle carrying the multi-sensor fusion positioning system; for example, the two or more images may be 30 frames acquired within a 2-second window. Correspondingly, the acquired inertial navigation measurement data output by the inertial measurement unit and the GNSS positioning data output by the GNSS receiver are data acquired within the same time period, and they are time-aligned to the capture times of the images.
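For illustration only, the following is a minimal sketch of this time alignment, assuming per-channel linear interpolation of the IMU and GNSS streams onto the image timestamps; the patent does not prescribe the interpolation scheme, and the function name and array layout below are hypothetical.

```python
import numpy as np

def align_to_image_times(img_t, sensor_t, sensor_vals):
    """Interpolate an (N, k) sensor stream onto image timestamps.

    img_t      : (M,) image capture times, seconds
    sensor_t   : (N,) sensor sample times, seconds (monotonic)
    sensor_vals: (N, k) sensor samples (e.g., IMU accel+gyro, GNSS pos+vel)
    returns    : (M, k) samples linearly interpolated at img_t
    """
    sensor_vals = np.asarray(sensor_vals, dtype=float)
    return np.stack(
        [np.interp(img_t, sensor_t, sensor_vals[:, k])
         for k in range(sensor_vals.shape[1])],
        axis=1,
    )
```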
The working parameters of the sensors are then determined from the acquired images and the inertial navigation measurement data and GNSS positioning data corresponding to the capture time of each frame of image, by the methods in the following embodiments. For convenience of description, the inertial navigation measurement data and the relevant GNSS positioning data corresponding to the capture time of the i-th frame image are referred to as the inertial navigation measurement data and GNSS positioning data at the i-th time.
Example 1
An embodiment of the present disclosure provides a method for determining parameters of a sensor, a flow of which is shown in fig. 1, including the following steps:
Step S11: and determining the normalized position and posture of each frame of image.
In one embodiment, the method can include determining a normalized relative position and relative pose of each frame image with respect to the first frame image by means of feature point matching and epipolar geometry.
Specifically, feature points of the multi-frame images are extracted and associated through SFM (Structure from Motion, a technique for estimating three-dimensional structure from a sequence of two-dimensional images containing visual motion information): corresponding feature points are extracted and matched, and the relative relationships among the frames are established. The epipolar geometry is then solved to obtain the relative pose of two frames at normalized depth; combining the three-dimensional positions of the matched feature points of the two frames with the Perspective-n-Point (PnP) algorithm, the normalized position and posture of each frame of image are recovered, specifically the normalized relative position and relative posture of each frame of image with respect to the first frame image.
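One way to realize the feature-matching and epipolar-geometry step is OpenCV's essential-matrix pipeline; the sketch below assumes matched, undistorted pixel coordinates and is not part of the patent's disclosure. The subsequent PnP step for additional frames would use cv2.solvePnP against triangulated points.

```python
import cv2
import numpy as np

def relative_pose(pts0, pts1, K):
    """Relative pose of frame i w.r.t. frame 0 at normalized depth.

    pts0, pts1 : (N, 2) float arrays of matched feature points
    K          : (3, 3) camera intrinsic matrix
    Returns R (3x3) and unit-norm t: translation only up to scale,
    which is exactly why the scale s must be initialized afterwards.
    """
    E, inliers = cv2.findEssentialMat(pts0, pts1, K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=inliers)
    return R, t
```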
Step S12: based on the normalized position and posture of each frame of image, GNSS positioning data and inertial navigation measurement data, an initial value of the scale of the vision sensor and an initial value of a second external parameter of the vision sensor relative to the world coordinate system are obtained.
In one embodiment, this may include obtaining an initial value of the scale of the vision sensor based on the normalized position of each frame of image and the GNSS positioning data, and obtaining an initial value of the second external parameter of the vision sensor relative to the world coordinate system based on the normalized position and posture of each frame of image, the GNSS positioning data and the inertial navigation measurement data. The initial value of the scale and the initial value of the second external parameter are specifically determined as follows:
1. An initial value of the scale is determined.
When the vision sensor is a monocular camera, the position data it yields is normalized, i.e., known only up to scale, so the metric position must be determined from the initialized scale together with the normalized position.
And optimizing the set value of the scale of the visual sensor according to the normalized position of each frame of image and the positioning position in the GNSS positioning data to obtain the initial value of the scale of the visual sensor.
Specifically, according to the normalized position of each frame of image and the positioning position in the GNSS positioning data, determining a third position residual error:
$$r_i(s) = \left\lVert p^{w}_{g_i} \right\rVert - s \left\lVert \bar{p}^{c_0}_{c_i} \right\rVert$$

(comparing the magnitudes of the two displacements, since the rotation between the world and initial visual coordinate systems is not yet known at this stage), where $p^{w}_{g_i}$ is the positioning position in the GNSS positioning data corresponding to the i-th frame image, specifically the position of the GNSS in the world coordinate system relative to the initial time; the i-th time is the time at which the vision sensor captures the i-th frame image, the initial time is the time at which the vision sensor captures the 0-th frame image, $i = 0, 1, \ldots, n$, and $n+1$ is the total number of frames; $\bar{p}^{c_0}_{c_i}$ is the normalized position of the i-th frame image, specifically the normalized position of the vision sensor relative to its initial point in the initial visual coordinate system, where the initial visual coordinate system is the visual coordinate system at the capture time of the 0-th frame image and the initial point is the position of the visual coordinate system at the initial time; and $s$ is the scale of the vision sensor.
Starting from the set value of the scale, the third position residual is iteratively optimized to obtain the initial value of the scale of the vision sensor. The set value of the scale may be an empirical value, the most recent initialization result, or a value adjusted empirically from the most recent initialization result.
Specifically, the termination condition of the iterative optimization may be that the value of the corresponding residual is lower than a preset threshold.
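Under the norm-based reading of the residual above, the iterative optimization even admits a closed-form least-squares solution; the sketch below rests on that reading, which is an assumption, and the function name is hypothetical.

```python
import numpy as np

def initial_scale(p_gnss, p_vis):
    """Least-squares fit of ||p_gnss_i|| ~= s * ||p_vis_i|| over all frames.

    p_gnss : (N, 3) GNSS positions relative to the initial time
    p_vis  : (N, 3) normalized visual positions relative to frame 0
    Minimizing sum_i (||p_gnss_i|| - s * ||p_vis_i||)^2 gives s directly.
    """
    g = np.linalg.norm(p_gnss, axis=1)
    c = np.linalg.norm(p_vis, axis=1)
    return float(g @ c / (c @ c))
```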
2. An initial value of the second external parameter is determined.
In one embodiment, the method may include determining a first position parameter mean value according to a normalized position of each frame of image, and determining a second position parameter mean value according to a positioning position in GNSS positioning data corresponding to each frame of image; determining three-axis acceleration under a visual coordinate system according to the gesture of each frame of image, the three-axis acceleration in inertial navigation measurement data and a first external parameter between a visual sensor and an inertial measurement unit, obtaining a first acceleration parameter mean value according to the three-axis acceleration under the visual coordinate system, and determining a second acceleration parameter mean value under the world coordinate system according to the gravity acceleration and the GNSS speed in GNSS positioning data corresponding to each frame of image; and determining an initial value of the vision sensor relative to a second external parameter of the world coordinate system according to the first position parameter mean value, the second position parameter mean value, the first acceleration parameter mean value and the second acceleration parameter mean value.
Step S13: and optimizing the initial value of the scale and the initial value of the second external parameter based on the normalized position and posture of each frame of image, GNSS positioning data, inertial navigation measurement data and the first external parameter between the visual sensor and the inertial measurement unit to obtain a second external parameter optimized value and a scale optimized value.
Referring to fig. 2, the method specifically includes the following steps:
Step S131: and determining the position of the visual sensor under the world coordinate system according to the normalized position of each frame of image, the initial value of the scale of the visual sensor and the initial value of the second external parameter of the visual sensor relative to the world coordinate system, and determining the first position residual error according to the position of the visual sensor under the world coordinate system of each frame of image and the positioning position in the GNSS positioning data.
Specifically, according to the normalized position $\bar{p}^{c_0}_{c_i}$ of each frame of image, the initial value of the scale $s$ of the vision sensor, and the initial value of the second external parameter $R^{w}_{c_0}$ of the vision sensor relative to the world coordinate system, the position of the vision sensor in the world coordinate system is determined as $R^{w}_{c_0}\, s\, \bar{p}^{c_0}_{c_i}$. Based on this position and the positioning position $p^{w}_{g_i}$ in the GNSS positioning data, the first position residual $r(z_p, x)$ is determined:

$$r(z_p, x) = p^{w}_{g_i} - R^{w}_{c_0}\, s\, \bar{p}^{c_0}_{c_i}$$
step S132: determining a first acceleration according to the gravity acceleration and the GNSS speed in the GNSS positioning data corresponding to each frame of image, determining a second acceleration according to the posture of each frame of image, the triaxial acceleration in the inertial navigation measurement data and the initial values of the first external parameter and the second external parameter, and taking the difference value between the first acceleration and the second acceleration as an acceleration residual.
Specifically, according to the gravitational acceleration $g$ and the GNSS speed $v^{w}_{g_i}$ in the GNSS positioning data corresponding to each frame of image, the first acceleration is determined as

$$a^{w}_{1,i} = \frac{v^{w}_{g_{i+1}} - v^{w}_{g_i}}{\Delta t} + g$$

According to the posture $R^{c_0}_{c_i}$ of each frame of image, the triaxial acceleration $a_i$ in the inertial navigation measurement data, the first external parameter $R^{c}_{b}$ between the vision sensor and the inertial measurement unit, and the initial value of the second external parameter $R^{w}_{c_0}$ of the vision sensor relative to the world coordinate system, the second acceleration is determined as

$$a^{w}_{2,i} = R^{w}_{c_0}\, R^{c_0}_{c_i}\, R^{c}_{b}\, a_i$$

The difference between the first acceleration and the second acceleration is taken as the acceleration residual:

$$r(z_{acc}, x) = a^{w}_{1,i} - a^{w}_{2,i}$$

where $x$ in the first position residual and the acceleration residual denotes the parameters to be optimized, and $\Delta t$ is the difference between the capture times of two adjacent frames of images.
Step S133: and carrying out iterative optimization on the residual error and the iterative optimization on the first position residual error and the acceleration residual error to obtain a second extrinsic optimization value and a scale optimization value.
Residual sum of first position residual sum acceleration residual sumAnd (5) performing iterative optimization to obtain an optimized value of the scale and an optimized value of the second external parameter.
Specifically, a first external parameter between the vision sensor and the inertial measurement unit is a first external parameter at the shooting time of each frame of image; the second external parameter of the vision sensor relative to the world coordinate system, specifically refers to the second external parameter of the vision sensor relative to the world coordinate system when the first frame of image is shot.
Because the acceleration residual is included in the optimization, the data used for initialization does not need to contain strong motion excitation; the method remains applicable under insufficient excitation, for example straight-line driving or planar motion, which broadens the applicability of the scheme.
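A sketch of the local refinement of steps S131-S133 with scipy is given below, assuming the per-frame quantities have been precomputed: a_world holds the first accelerations (differenced GNSS velocity plus gravity) and a_cam the IMU accelerations already rotated into the initial visual frame. Parameterizing the rotation update as a rotation vector is an implementation choice, not taken from the patent.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as Rot

def local_refine(s0, R_wc0_init, p_vis, p_gnss, a_cam, a_world):
    """Jointly refine the scale s and the second extrinsic R_wc0 by
    stacking the first position residual and the acceleration residual."""
    def residuals(x):
        s, rotvec = x[0], x[1:4]
        R_wc0 = Rot.from_rotvec(rotvec).as_matrix() @ R_wc0_init
        r_pos = (p_gnss - s * (p_vis @ R_wc0.T)).ravel()   # r(z_p, x)
        r_acc = (a_world - a_cam @ R_wc0.T).ravel()        # r(z_acc, x)
        return np.concatenate([r_pos, r_acc])

    x = least_squares(residuals, x0=np.r_[s0, np.zeros(3)]).x
    return x[0], Rot.from_rotvec(x[1:4]).as_matrix() @ R_wc0_init
```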
Step S14: and carrying out global optimization on the scale optimization value and the second extrinsic optimization value based on the normalized position, the inertial navigation measurement data and the GNSS positioning data of each frame of image to obtain a target value of the scale of the vision sensor, a second extrinsic target value and a speed target value of each frame of image.
Referring to fig. 3, the method specifically includes the following steps:
Step S141: and determining the position of the visual sensor under the world coordinate system according to the normalized position of each frame of image, the optimized value of the scale of the visual sensor and the optimized value of the second external parameter of the visual sensor relative to the world coordinate system, and determining a second position residual error according to the position of the visual sensor under the world coordinate system of each frame of image and the positioning position in the GNSS positioning data.
The second position residual $r(z_p, x')$ is determined in the same way as the first position residual, except that the first position residual $r(z_p, x)$ uses the initial values of the scale $s$ and of the second external parameter $R^{w}_{c_0}$ of the vision sensor relative to the world coordinate system, whereas the second position residual $r(z_p, x')$ uses their optimized values.
Step S142: and determining a position pre-integration residual error according to the inertial navigation position pre-integration in the inertial navigation measurement data of each frame of image, the position parameters of the inertial navigation relative to the global positioning system, the first external parameters and the gravity acceleration.
Specifically, according to the inertial navigation position pre-integral $\hat{\alpha}^{b_i}_{b_{i+1}}$ at the (i+1)-th time, the position parameter $p^{w}_{b_i}$ of the inertial navigation unit relative to the global positioning system at the i-th time, the first external parameter, and the gravitational acceleration $g$, the position pre-integration residual is determined as

$$r_{\Delta p} = R^{b_i}_{w} \left( p^{w}_{b_{i+1}} - p^{w}_{b_i} - v_i\, \Delta t + \tfrac{1}{2}\, g\, \Delta t^2 \right) - \hat{\alpha}^{b_i}_{b_{i+1}}$$
Step S143: and determining a speed pre-integration residual error according to the speed initial value of each frame of image, the inertial navigation speed pre-integration in the inertial navigation measurement data, the first external parameter and the gravity acceleration.
Specifically, according to the speed initial value $v_i$ of each frame of image, the inertial navigation speed pre-integral $\hat{\beta}^{b_i}_{b_{i+1}}$ in the inertial navigation measurement data (the inertial navigation speed pre-integral at the (i+1)-th time), the first external parameter, and the gravitational acceleration $g$, the speed pre-integration residual is determined as

$$r_{\Delta v} = R^{b_i}_{w} \left( v_{i+1} - v_i + g\, \Delta t \right) - \hat{\beta}^{b_i}_{b_{i+1}}$$

The position pre-integration residual and the speed pre-integration residual together form the pre-integration residual $r(z_b, x')$:

$$r(z_b, x') = \begin{bmatrix} r_{\Delta p} \\ r_{\Delta v} \end{bmatrix}$$

where $x'$ in the residual denotes the parameters whose final optimized target values are to be determined.
Step S144: and carrying out iterative optimization on the residual errors of the position pre-integral residual error, the speed pre-integral residual error and the second position residual error to obtain a target value of the scale of the vision sensor, a second exogenous target value and a speed target value of each frame of image.
Residual sum of position pre-integral residual, velocity pre-integral residual and second position residualAnd (3) performing iterative optimization to obtain a target value of the scale, a target value of the second external parameter and a target value of the speed of each frame of image.
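For the global stage, the per-frame pre-integration residuals can be assembled as below. This follows the standard visual-inertial pre-integration form (an assumption; the patent's exact sign conventions may differ), with alpha and beta denoting the position and speed pre-integrals delivered by the IMU front end.

```python
import numpy as np

def preint_residuals(R_wb_i, p_i, p_j, v_i, v_j, alpha, beta, dt, g):
    """Position and speed pre-integration residuals between frames i, i+1.

    R_wb_i     : (3, 3) IMU body attitude at frame i (body -> world)
    p_i, p_j   : (3,) body positions in the world frame
    v_i, v_j   : (3,) body speeds in the world frame
    alpha, beta: (3,) position / speed pre-integrals over [i, i+1]
    """
    r_p = R_wb_i.T @ (p_j - p_i - v_i * dt + 0.5 * g * dt**2) - alpha
    r_v = R_wb_i.T @ (v_j - v_i + g * dt) - beta
    return np.concatenate([r_p, r_v])   # r(z_b, x') for this frame pair
```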
A specific implementation of the method for determining the sensor parameters of the multi-sensor fusion positioning system, as shown in fig. 4, may include:
Step 1: initial processing of the data acquired by the vision sensor (Camera), the global positioning system (GPS or RTK) and the inertial measurement unit (IMU): SFM is run on the visual data to obtain the normalized position and posture of each frame of image, and the global positioning system data and the inertial measurement unit data are time-aligned.
Step 2: computation of the initial values for optimization, yielding the initial value of the scale S of the vision sensor and the initial value of the second external parameter w_R_co between the vision sensor and the global positioning system.
Step 3: fine optimization of S and w_R_co: the residual between the SFM relative positions and the GPS relative positions, together with the gravity-direction residual, is iteratively optimized to obtain the locally refined values of S and w_R_co.
Step 4: global fine optimization: the sum of the pre-integration position residual, the pre-integration speed residual, and the SFM-versus-GPS relative position residual is iteratively optimized to obtain the globally refined target values of S, w_R_co and vk (the speed at the k-th image time).
The method for determining parameters of a sensor provided by the embodiments of the present disclosure acquires two or more frames of images captured by a vision sensor within a set time period, the inertial navigation measurement data output by an inertial measurement unit corresponding to the capture time of each frame of image, and the GNSS positioning data output by a GNSS receiver, and performs the following steps: determining the normalized position and posture of each frame of image; obtaining an initial value of the scale of the vision sensor and an initial value of a second external parameter of the vision sensor relative to a world coordinate system based on the normalized position and posture of each frame of image, the GNSS positioning data and the inertial navigation measurement data; optimizing the initial values of the scale and of the second external parameter based on the normalized position and posture of each frame of image, the GNSS positioning data, the inertial navigation measurement data and the first external parameter between the vision sensor and the inertial measurement unit, to obtain optimized values of the second external parameter and of the scale; and performing global optimization on the optimized values of the scale and the second external parameter based on the normalized position of each frame of image, the inertial navigation measurement data and the GNSS positioning data, to obtain the target value of the scale of the vision sensor, the target value of the second external parameter and the target value of the speed of each frame of image. The beneficial effects of this technical scheme include at least:
(1) The initial values of the scale of the vision sensor and of the second external parameter of the vision sensor relative to the world coordinate system are reasonably determined; these initial values are then refined by local optimization and refined again by global optimization, so the final parameter determination result has high precision.
(2) The determination of the sensors' working parameters is completed in a single pass; that is, sensor initialization is completed directly rather than by first performing VIO fusion and aligning the VIO result with the GNSS only after that result has been verified to be reasonable, which shortens the initialization time.
Besides the target value of the scale of the vision sensor, the target value of the second external parameter and the target value of the speed of each frame of image, the optimization of the sensor working parameters also needs to determine the target value of the gyroscope zero bias. Referring to fig. 5, determining the target value of the gyroscope zero bias may include the following steps:
step S51: and determining the posture of the inertial measurement unit according to the posture of each frame of image and the first external parameter.
Step S52: and optimizing the initial value of the zero offset of the gyroscope of the inertial measurement unit based on the attitude, the angular velocity pre-integration and the angular velocity pre-integration of the inertial measurement unit of each frame of image to obtain the target value of the zero offset of the gyroscope.
Determining an attitude rotation residual error according to the attitude of an inertial measurement unit of each frame of image, the angular velocity pre-integration, the offset of the angular velocity pre-integration to the zero offset of a gyroscope of the inertial measurement unit and a pre-determined initial value of the zero offset of the gyroscope; and carrying out iterative optimization on the attitude rotation residual error to obtain an optimized value of zero offset of the gyroscope.
Specifically, the attitude rotation residual may be determined as

$$r(z_q, \delta b_g) = 2\left[ \left( \hat{\gamma}^{b_i}_{b_{i+1}} \otimes \begin{bmatrix} 1 \\ \frac{1}{2} J^{\gamma}_{b_g}\, \delta b_g \end{bmatrix} \right)^{-1} \otimes \left( q^{c_0}_{b_i} \right)^{-1} \otimes q^{c_0}_{b_{i+1}} \right]_{xyz}$$

where $b_g$ is the gyroscope zero bias; $q^{c_0}_{b_i}$ is the attitude of the inertial measurement unit at the i-th time relative to the initial time, expressed as a quaternion; $\hat{\gamma}^{b_i}_{b_{i+1}}$ is the inertial navigation angular-velocity pre-integral at the (i+1)-th time; and $J^{\gamma}_{b_g}$ is the Jacobian of the angular-velocity pre-integral with respect to the gyroscope zero bias.
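After linearization in the bias correction, the iterative optimization of this residual reduces to a small linear least-squares problem. The sketch below is modeled on the standard visual-inertial alignment step (an assumption; the patent gives only the residual form) and expects quaternions in scipy's [x, y, z, w] order.

```python
import numpy as np
from scipy.spatial.transform import Rotation as Rot

def gyro_bias_update(q_ij_vis, gamma_hat, J_gamma_bg):
    """Solve for the gyroscope zero-bias correction delta b_g.

    q_ij_vis   : list of relative IMU attitudes q_{b_i}^{-1} (x) q_{b_{i+1}},
                 obtained from the image poses and the first extrinsic
    gamma_hat  : list of pre-integrated relative attitudes over [i, i+1]
    J_gamma_bg : list of (3, 3) Jacobians of gamma w.r.t. the gyro bias
    """
    A, b = np.zeros((3, 3)), np.zeros(3)
    for q_ij, g_ij, J in zip(q_ij_vis, gamma_hat, J_gamma_bg):
        # residual direction: 2 * vec(gamma_hat^{-1} (x) q_ij)
        dq = (Rot.from_quat(g_ij).inv() * Rot.from_quat(q_ij)).as_quat()
        A += J.T @ J
        b += J.T @ (2.0 * dq[:3])
    return np.linalg.solve(A, b)   # delta b_g, added to the initial bias
```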
Example two
A second embodiment of the present disclosure provides a specific implementation of a method for determining an initial value of a second external parameter of a vision sensor with respect to a world coordinate system, where a flow is shown in fig. 6, and the method includes the following steps:
Step S61: and determining a first position parameter mean value according to the normalized position of each frame of image, and determining a second position parameter mean value according to the positioning position in the GNSS positioning data corresponding to each frame of image.
According to the normalized position $\bar{p}^{c_0}_{c_i}$ of each frame of image, the first position parameter mean is determined as

$$\bar{p}_{c} = \frac{1}{n+1} \sum_{i=0}^{n} \bar{p}^{c_0}_{c_i}$$

According to the positioning position $p^{w}_{g_i}$ in the GNSS positioning data corresponding to each frame of image, the second position parameter mean is determined as

$$\bar{p}_{g} = \frac{1}{n+1} \sum_{i=0}^{n} p^{w}_{g_i}$$
Step S62: and determining the triaxial acceleration under the visual coordinate system according to the posture of each frame of image, the triaxial acceleration in the inertial navigation measurement data and the first external parameters.
Specifically, the triaxial acceleration in the inertial navigation measurement data is expressed in the IMU coordinate system. The triaxial acceleration in the visual coordinate system, specifically the initial visual coordinate system, can be determined using the following formula:

$$a^{c_0}_i = R^{c_0}_{c_i}\, R^{c}_{b}\, a_i$$

where $a_i$ is the triaxial acceleration in the inertial navigation measurement data at the i-th time, i.e., the triaxial acceleration in the IMU coordinate system; $R^{c}_{b}$ is the first external parameter between the vision sensor and the inertial measurement unit at the i-th time; and $R^{c_0}_{c_i}$ is the posture of the i-th frame image, specifically the posture of the vision sensor relative to its initial point in the initial visual coordinate system.
That is, the triaxial acceleration measured by inertial navigation is first rotated by the first external parameter and then by the posture of the vision sensor, yielding the triaxial acceleration in the visual coordinate system.
Step S63: and obtaining a first acceleration parameter average value according to the triaxial acceleration under the visual coordinate system, and determining a second acceleration parameter average value under the world coordinate system according to the gravity acceleration and the GNSS speed in the GNSS positioning data corresponding to each frame of image.
According to the triaxial acceleration in the initial visual coordinate system, the first acceleration parameter mean is obtained:

$$\bar{a}_{c} = \frac{1}{n} \sum_{i} \frac{a^{c_0}_i}{\left\lVert a^{c_0}_i \right\rVert}$$

From the difference of the GNSS speeds in the GNSS positioning data corresponding to each frame of image, a second acceleration parameter in the world coordinate system is determined:

$$a^{w}_i = \frac{v^{w}_{g_{i+1}} - v^{w}_{g_i}}{\Delta t}$$

where $v^{w}_{g_i}$ is the GNSS speed in the GNSS positioning data corresponding to each frame of image, specifically the global positioning system speed in the world coordinate system; $\Delta t$ is the difference between the capture times of two adjacent frames of images; and $a^{w}_i$ is the second acceleration parameter determined from the global positioning system speed difference in the world coordinate system at the i-th time.

According to the gravitational acceleration $g$ and the second acceleration parameter $a^{w}_i$, the second acceleration parameter mean in the world coordinate system is determined:

$$\bar{a}_{w} = \frac{1}{n} \sum_{i} \frac{a^{w}_i + g}{\left\lVert a^{w}_i + g \right\rVert}$$
Step S64: and determining an initial value of the vision sensor relative to a second external parameter of the world coordinate system according to the first position parameter mean value, the second position parameter mean value, the first acceleration parameter mean value and the second acceleration parameter mean value.
Specifically, referring to fig. 7, the following steps may be included:
step S641: and determining an initial rotation value of the second external parameter according to the first position parameter mean value and the second position parameter mean value.
The initial rotation quaternion $q_0$ between the first position parameter mean and the second position parameter mean is determined as the shortest-arc rotation taking the direction of $\bar{p}_{c}$ onto the direction of $\bar{p}_{g}$, i.e., with rotation axis $\bar{p}_{c} \times \bar{p}_{g}$ and rotation angle $\arccos\left( \frac{\bar{p}_{c} \cdot \bar{p}_{g}}{\lVert \bar{p}_{c} \rVert\, \lVert \bar{p}_{g} \rVert} \right)$.
Step S642: and rotating the first acceleration parameter average value according to the initial rotation value of the second external parameter to obtain a third acceleration parameter average value.
The first acceleration parameter mean $\bar{a}_{c}$ in the initial visual coordinate system is rotated by the initial rotation quaternion $q_0$ to obtain the third acceleration parameter mean, i.e., the initially rotated normalized triaxial acceleration direction mean:

$$\bar{a}' = q_0 \otimes \bar{a}_{c} \otimes q_0^{-1}$$
Step S643: and determining a first projection of the second position parameter mean on a plane with the third acceleration parameter mean as a normal line, and a second projection of the second position parameter mean on the plane with the second acceleration parameter mean as a normal line.
The first projection is the projection of the second position parameter mean $\bar{p}_{g}$ onto the plane whose normal is the third acceleration parameter mean $\bar{a}'$, and the second projection is the projection of $\bar{p}_{g}$ onto the plane whose normal is the second acceleration parameter mean $\bar{a}_{w}$ in the world coordinate system:

$$\pi_1 = \bar{p}_{g} - \left( \bar{p}_{g} \cdot \bar{a}' \right) \bar{a}', \qquad \pi_2 = \bar{p}_{g} - \left( \bar{p}_{g} \cdot \bar{a}_{w} \right) \bar{a}_{w}$$
Step S644: a re-rotation value of the second external parameter is determined based on the first projection and the second projection.
The rotation quaternion $q_1$ between the projections $\pi_1$ and $\pi_2$ is determined as the shortest-arc rotation taking the direction of $\pi_1$ onto the direction of $\pi_2$; $q_1$ is taken as the re-rotation value of the second external parameter.
Step S645: and determining the initial value of the second external parameter according to the initial rotation value and the re-rotation value of the second external parameter.
According to the initial rotation value of the second external parameter, i.e., the initial rotation quaternion $q_0$, and the re-rotation value of the second external parameter, i.e., the rotation quaternion $q_1$, the rotation quaternion of the vision sensor relative to the world coordinate system is determined:

$$q^{w}_{c_0} = q_1 \otimes q_0$$

The rotation quaternion $q^{w}_{c_0}$, i.e., the rotation of the vision sensor relative to the world coordinate system, is the representation of the second external parameter.
In general, the rotation quaternion of the vision sensor relative to the world coordinate system is obtained from two matched vector pairs as follows. First, the initial quaternion $q_0$ is computed from the matched pair of position direction vectors; the remaining degree of freedom is a rotation about the rotated position direction. The acceleration direction vector in the visual coordinate system is rotated by $q_0$ into an intermediate coordinate system. Using the rotation-axis vector, the projections of the two acceleration direction vectors onto the plane whose normal is that axis are then computed; $q_1$ is computed from the two projection vectors; and finally the rotation of the vision sensor relative to the world coordinate system is obtained by composing the two rotations.
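The sketch below illustrates this two-stage alignment of Example 2, assuming all inputs are the (direction-normalized) parameter means defined above; quat_between is a hypothetical helper implementing the shortest-arc rotation between two unit vectors.

```python
import numpy as np
from scipy.spatial.transform import Rotation as Rot

def quat_between(u, v):
    """Shortest-arc rotation taking unit vector u onto unit vector v
    (degenerate when u is nearly opposite v; a real implementation
    must special-case that configuration)."""
    q = np.r_[np.cross(u, v), 1.0 + u @ v]
    return Rot.from_quat(q)            # scipy normalizes the quaternion

def second_extrinsic_init(p_c_mean, p_g_mean, a_c_mean, a_w_mean):
    """Initial world-from-visual rotation q_w_c0 = q1 * q0 (Example 2)."""
    unit = lambda x: x / np.linalg.norm(x)
    q0 = quat_between(unit(p_c_mean), unit(p_g_mean))   # align position means
    a_rot = q0.apply(unit(a_c_mean))                    # third accel mean
    proj = lambda p, n: p - (p @ n) * n                 # onto plane, normal n
    pi1 = proj(p_g_mean, a_rot)
    pi2 = proj(p_g_mean, unit(a_w_mean))
    q1 = quat_between(unit(pi1), unit(pi2))             # align projections
    return q1 * q0                                      # rotation q_w_c0
```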
Based on the inventive concept of the present disclosure, an embodiment of the present disclosure further provides a parameter determining apparatus of a sensor, a structure of which is shown in fig. 8, including:
The acquiring module 81 is configured to acquire two or more frames of images captured by the vision sensor within a set time period, the inertial navigation measurement data output by the inertial measurement unit corresponding to the capturing time of each frame of image, and the GNSS positioning data output by the GNSS receiver;
A determining module 82 for determining a normalized position and pose of each frame of image;
The initial optimization module 83 is configured to obtain an initial value of a scale of the vision sensor and an initial value of a second external parameter of the vision sensor relative to the world coordinate system based on the normalized position and posture of each frame of image, GNSS positioning data and inertial navigation measurement data;
The local optimization module 84 is configured to optimize the initial value of the scale and the initial value of the second external parameter based on the normalized position and posture of each frame of image, GNSS positioning data, inertial navigation measurement data, and the first external parameter between the vision sensor and the inertial measurement unit, to obtain a second external parameter optimized value and a scale optimized value;
The global optimization module 85 is configured to globally optimize the scale optimized value and the second external parameter optimized value based on the normalized position of each frame of image, the inertial navigation measurement data and the GNSS positioning data, to obtain a target value of the scale of the vision sensor, a target value of the second external parameter and a target value of the speed of each frame of image.
In one embodiment, the initial optimization module 83 is specifically configured to:
Based on the normalized position of each frame of image and GNSS positioning data, obtaining an initial value of the scale of the vision sensor; based on the normalized position and posture of each frame of image, GNSS positioning data and inertial navigation measurement data, an initial value of the vision sensor relative to a second external parameter of the world coordinate system is obtained.
In one embodiment, the initial optimization module 83 obtains an initial value of the scale of the vision sensor based on the normalized position of each frame of image and the GNSS positioning data, which is specifically configured to:
and optimizing the set value of the scale of the visual sensor according to the normalized position of each frame of image and the positioning position in the GNSS positioning data to obtain the initial value of the scale of the visual sensor.
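The text does not fix the optimizer used for the scale. If the normalized visual positions and the GNSS positioning positions are zero-centered and expressed in (or rotated into) a common frame — assumptions not stated in the original — the least-squares scale even has a closed form; an illustrative sketch:

```python
import numpy as np

def initial_scale(p_norm, p_gnss):
    """Closed-form s minimizing sum_k ||s * p_norm[k] - p_gnss[k]||^2.
    p_norm, p_gnss: (N, 3) arrays, zero-centered, in a common frame."""
    return float(np.sum(p_norm * p_gnss) / np.sum(p_norm * p_norm))
```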
In one embodiment, the initial optimization module 83 obtains an initial value of the visual sensor relative to the second external parameter of the world coordinate system based on the normalized position and posture of each frame of image, the GNSS positioning data and the inertial navigation measurement data, which is specifically configured to:
Determining a first position parameter mean value according to the normalized position of each frame of image, and determining a second position parameter mean value according to the positioning position in the GNSS positioning data corresponding to each frame of image; determining three-axis acceleration under a visual coordinate system according to the gesture of each frame of image, the three-axis acceleration in inertial navigation measurement data and the first external parameter, obtaining a first acceleration parameter mean value according to the three-axis acceleration under the visual coordinate system, and determining a second acceleration parameter mean value under a world coordinate system according to the gravity acceleration and the GNSS speed in GNSS positioning data corresponding to each frame of image; and determining an initial value of the vision sensor relative to a second external parameter of the world coordinate system according to the first position parameter mean value, the second position parameter mean value, the first acceleration parameter mean value and the second acceleration parameter mean value.
In one embodiment, the initial optimization module 83 determines an initial value of the vision sensor relative to the second external parameter of the world coordinate system according to the first position parameter average, the second position parameter average, the first acceleration parameter average, and the second acceleration parameter average, which is specifically configured to:
Determining an initial rotation value of a second external parameter according to the first position parameter mean value and the second position parameter mean value; rotating the first acceleration parameter mean value according to the initial rotation value of the second external parameter to obtain a third acceleration parameter mean value; determining a first projection of the second position parameter mean on a plane with the third acceleration parameter mean as a normal line and a second projection of the second position parameter mean on the plane with the second acceleration parameter mean as a normal line; determining a re-rotation value of the second external parameter according to the first projection and the second projection; and determining the initial value of the second external parameter according to the initial rotation value and the re-rotation value of the second external parameter.
In one embodiment, the local optimization module 84 is specifically configured to:
determining the position of the vision sensor in the world coordinate system according to the normalized position of each frame of image, the initial value of the scale of the vision sensor and the initial value of the second external parameter of the vision sensor relative to the world coordinate system, and determining a first position residual according to the position of the vision sensor in the world coordinate system for each frame of image and the positioning position in the GNSS positioning data; determining a first acceleration according to the gravity acceleration and the GNSS speed in the GNSS positioning data corresponding to each frame of image, determining a second acceleration according to the posture of each frame of image, the triaxial acceleration in the inertial navigation measurement data and the initial values of the first external parameter and the second external parameter, and taking the difference between the first acceleration and the second acceleration as an acceleration residual; and iteratively optimizing the first position residual and the acceleration residual to obtain the second external parameter optimized value and the scale optimized value.
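For illustration, the two residuals can be sketched as below; signs, lever arms and the direction of the gravity vector are conventions the text does not fix, so they are assumptions here, and rotate_by_quat is the helper from the sketch after step S642:

```python
import numpy as np
# rotate_by_quat as in the sketch after step S642; quaternions are (w, x, y, z).

def first_position_residual(s, q_wv, p_norm, p_gnss):
    """Scaled visual position rotated into the world frame minus the GNSS
    positioning position (translation offsets omitted for brevity)."""
    return s * rotate_by_quat(q_wv, p_norm) - p_gnss

def acceleration_residual(v_gnss_prev, v_gnss_next, dt, g_up,
                          q_wv, q_pose, q_first_ext, a_imu):
    """First acceleration from finite-differenced GNSS velocity plus gravity,
    minus the IMU specific force rotated through the first external parameter,
    the image posture and the second external parameter into the world frame."""
    a_first = (v_gnss_next - v_gnss_prev) / dt + g_up  # e.g. g_up = (0, 0, 9.81)
    a_second = rotate_by_quat(
        q_wv, rotate_by_quat(q_pose, rotate_by_quat(q_first_ext, a_imu)))
    return a_first - a_second
```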
In one embodiment, the global optimization module 85 is specifically configured to:
Determining the position of the vision sensor in the world coordinate system according to the normalized position of each frame of image, the optimized value of the scale of the vision sensor and the optimized value of the second external parameter of the vision sensor relative to the world coordinate system, and determining a second position residual according to the position of the vision sensor in the world coordinate system for each frame of image and the positioning position in the GNSS positioning data; determining a position pre-integration residual according to the inertial navigation position pre-integration in the inertial navigation measurement data of each frame of image, the position parameter of the inertial navigation unit relative to the global positioning system, the first external parameter and the gravity acceleration; determining a speed pre-integration residual according to the speed initial value of each frame of image, the inertial navigation speed pre-integration in the inertial navigation measurement data, the first external parameter and the gravity acceleration; and iteratively optimizing the position pre-integration residual, the speed pre-integration residual and the second position residual to obtain a target value of the scale of the vision sensor, a target value of the second external parameter and a target value of the speed of each frame of image.
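The pre-integration residuals take the usual visual-inertial form; a sketch under standard conventions, where alpha and beta denote the pre-integrated position and velocity terms, and where the sign with which gravity enters depends on how g is defined (not stated in the original):

```python
import numpy as np

def position_preintegration_residual(p_i, p_j, v_i, dt, g_w, R_wb_i, alpha_ij):
    """r_p = R_wb_i^T (p_j - p_i - v_i*dt + 0.5*g_w*dt^2) - alpha_ij."""
    return R_wb_i.T @ (p_j - p_i - v_i * dt + 0.5 * g_w * dt ** 2) - alpha_ij

def velocity_preintegration_residual(v_i, v_j, dt, g_w, R_wb_i, beta_ij):
    """r_v = R_wb_i^T (v_j - v_i + g_w*dt) - beta_ij."""
    return R_wb_i.T @ (v_j - v_i + g_w * dt) - beta_ij
```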
In one embodiment, the apparatus further includes a gyroscope zero bias optimization module 86 configured to:
Determining the posture of the inertial measurement unit according to the posture of each frame of image and the first external parameter; and optimizing the initial value of the zero offset of the gyroscope of the inertial measurement unit based on the posture of the inertial measurement unit for each frame of image and the angular velocity pre-integration in the inertial navigation measurement data, to obtain the target value of the zero offset of the gyroscope.
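One common realization of this step — assumed here, not confirmed by the text — is a linear least-squares solve over consecutive frame pairs, linearizing the pre-integrated rotation about zero bias. The 3x3 Jacobian J of the pre-integrated rotation with respect to the bias is assumed to be available from the pre-integration, and quat_mul is the helper from the sketch after step S645:

```python
import numpy as np
# quat_mul as in the sketch after step S645; quaternions are (w, x, y, z).

def quat_conj(q):
    """Conjugate (inverse for unit quaternions)."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def solve_gyro_bias(pairs):
    """Least-squares gyroscope zero-offset increment from consecutive frame pairs.
    Each pair: (q_i, q_j) IMU postures derived from the image postures and the
    first external parameter, gamma the pre-integrated rotation, J its bias Jacobian."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for q_i, q_j, gamma, J in pairs:
        q_ij = quat_mul(quat_conj(q_i), q_j)             # posture change between frames
        r = 2.0 * quat_mul(quat_conj(gamma), q_ij)[1:]   # small-angle residual vector
        A += J.T @ J
        b += J.T @ r
    return np.linalg.solve(A, b)
```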
In one embodiment, the determining module 82 is specifically configured to:
And determining the normalized relative position and relative posture of each frame of image relative to the first frame of image by using a feature point matching and epipolar geometry method.
In one embodiment, the obtaining module 81 is specifically configured to:
Acquiring two or more frames of images captured by the vision sensor within a set time period, the inertial navigation measurement data output by the inertial measurement unit, and the GNSS positioning data output by the GNSS receiver; and interpolating the inertial navigation measurement data and the GNSS positioning data with reference to the capturing time of each frame of image, thereby determining the inertial navigation measurement data and the GNSS positioning data corresponding to each frame of image.
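A sketch of one way to realize this time alignment, assuming per-axis linear interpolation (suitable for positions, velocities and accelerations; attitudes would instead need spherical interpolation); both the reading and the helper name are assumptions:

```python
import numpy as np

def interpolate_to_image_times(t_img, t_meas, values):
    """Per-axis linear interpolation of measurements sampled at t_meas (N,)
    onto image capture timestamps t_img (M,); values is an (N, D) array."""
    values = np.asarray(values, dtype=float)
    return np.stack([np.interp(t_img, t_meas, values[:, d])
                     for d in range(values.shape[1])], axis=1)
```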
The specific manner in which the respective modules perform their operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method, and will not be elaborated here.
Based on the inventive concepts of the present disclosure, embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer instructions that, when executed by a processor, implement the above-described method of determining parameters of a sensor.
Based on the inventive concept of the present disclosure, an embodiment of the present disclosure further provides a server, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above-described method of determining parameters of a sensor when executing the program.
Unless specifically stated otherwise, terms such as processing, computing, calculating, determining, displaying, or the like, may refer to an action and/or process of one or more processing or computing systems, or similar devices, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the processing system's registers or memories into other data similarly represented as physical quantities within the processing system's memories, registers or other such information storage, transmission or display devices. Information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an example of exemplary approaches. Based on design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, the present disclosure is directed to less than all of the features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate preferred embodiment of this disclosure.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. The processor and the storage medium may reside as discrete components in a user terminal.
For a software implementation, the techniques described in this disclosure may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. These software codes may be stored in memory units and executed by processors. The memory unit may be implemented within the processor or external to the processor, in which case it can be communicatively coupled to the processor via various means as is known in the art.
The foregoing description includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "including" is used in the specification or claims, it is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or claims is intended to mean a non-exclusive "or". The terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.

Claims (11)

1. A method of determining parameters of a sensor, comprising:
Acquiring two or more frames of images captured by a vision sensor within a set time period, inertial navigation measurement data output by an inertial measurement unit corresponding to the capturing time of each frame of image, and GNSS positioning data output by a GNSS receiver;
determining the normalized position and posture of each frame of image;
based on the normalized position of each frame of image and GNSS positioning data, obtaining an initial value of the scale of the vision sensor;
Based on the normalized position and posture of each frame of image, GNSS positioning data and inertial navigation measurement data, obtaining an initial value of a second external parameter of the vision sensor relative to a world coordinate system;
Optimizing the initial value of the scale and the initial value of the second external parameter based on the normalized position and posture of each frame of image, GNSS positioning data, inertial navigation measurement data and the first external parameter between the visual sensor and the inertial measurement unit to obtain a second external parameter optimized value and a scale optimized value;
And performing global optimization on the scale optimized value and the second external parameter optimized value based on the normalized position of each frame of image, the inertial navigation measurement data and the GNSS positioning data, to obtain a target value of the scale of the vision sensor, a target value of the second external parameter and a target value of the speed of each frame of image.
2. The method of claim 1, wherein the obtaining the initial value of the scale of the vision sensor based on the normalized position of each frame of image and the GNSS positioning data specifically comprises:
and optimizing the set value of the scale of the visual sensor according to the normalized position of each frame of image and the positioning position in the GNSS positioning data to obtain the initial value of the scale of the visual sensor.
3. The method of claim 1, wherein the obtaining the initial value of the second external parameter of the vision sensor relative to the world coordinate system based on the normalized position and posture of each frame of image, the GNSS positioning data and the inertial navigation measurement data specifically comprises:
determining a first position parameter mean value according to the normalized position of each frame of image, and determining a second position parameter mean value according to the positioning position in the GNSS positioning data corresponding to each frame of image;
Determining three-axis acceleration under a visual coordinate system according to the gesture of each frame of image, the three-axis acceleration in inertial navigation measurement data and the first external parameter, obtaining a first acceleration parameter mean value according to the three-axis acceleration under the visual coordinate system, and determining a second acceleration parameter mean value under a world coordinate system according to the gravity acceleration and the GNSS speed in GNSS positioning data corresponding to each frame of image;
and determining an initial value of the vision sensor relative to a second external parameter of the world coordinate system according to the first position parameter mean value, the second position parameter mean value, the first acceleration parameter mean value and the second acceleration parameter mean value.
4. A method according to claim 3, wherein the determining the initial value of the vision sensor relative to the second external parameter of the world coordinate system according to the first position parameter mean, the second position parameter mean, the first acceleration parameter mean and the second acceleration parameter mean specifically comprises:
determining an initial rotation value of a second external parameter according to the first position parameter mean value and the second position parameter mean value;
rotating the first acceleration parameter mean value according to the initial rotation value of the second external parameter to obtain a third acceleration parameter mean value;
Determining a first projection of the second position parameter mean on a plane with the third acceleration parameter mean as a normal line and a second projection of the second position parameter mean on the plane with the second acceleration parameter mean as a normal line;
determining a re-rotation value of the second external parameter according to the first projection and the second projection;
And determining the initial value of the second external parameter according to the initial rotation value and the re-rotation value of the second external parameter.
5. The method according to claim 1, wherein the optimizing the initial value of the scale and the initial value of the second external parameter based on the normalized position and posture of each frame of image, GNSS positioning data, inertial navigation measurement data, and the first external parameter between the vision sensor and the inertial measurement unit, to obtain a second external parameter optimized value and a scale optimized value, specifically comprises:
Determining the position of the visual sensor under the world coordinate system according to the normalized position of each frame of image, the initial value of the scale of the visual sensor and the initial value of the second external parameter of the visual sensor relative to the world coordinate system, and determining a first position residual error according to the position of the visual sensor under the world coordinate system of each frame of image and the positioning position in GNSS positioning data;
Determining a first acceleration according to the gravity acceleration and the GNSS speed in the GNSS positioning data corresponding to each frame of image, determining a second acceleration according to the posture of each frame of image, the triaxial acceleration in the inertial navigation measurement data and the initial values of the first external parameter and the second external parameter, and taking the difference value between the first acceleration and the second acceleration as an acceleration residual;
And iteratively optimizing the first position residual and the acceleration residual to obtain the second external parameter optimized value and the scale optimized value.
6. The method according to claim 1, wherein the performing global optimization on the scale optimized value and the second external parameter optimized value based on the normalized position of each frame of image, the inertial navigation measurement data and the GNSS positioning data, to obtain a target value of the scale of the vision sensor, a target value of the second external parameter and a target value of the speed of each frame of image, specifically comprises:
Determining the position of the visual sensor under the world coordinate system according to the normalized position of each frame of image, the optimized value of the scale of the visual sensor and the optimized value of the second external parameter of the visual sensor relative to the world coordinate system, and determining a second position residual error according to the position of the visual sensor under the world coordinate system of each frame of image and the positioning position in the GNSS positioning data;
determining a position pre-integration residual according to the inertial navigation position pre-integration in the inertial navigation measurement data of each frame of image, the position parameter of the inertial navigation unit relative to the global positioning system, the first external parameter and the gravity acceleration;
determining a speed pre-integration residual error according to the speed initial value of each frame of image, the inertial navigation speed pre-integration in inertial navigation measurement data, the first external parameter and the gravity acceleration;
and iteratively optimizing the position pre-integration residual, the speed pre-integration residual and the second position residual to obtain a target value of the scale of the vision sensor, a target value of the second external parameter and a target value of the speed of each frame of image.
7. The method of claim 1, further comprising:
determining the posture of the inertial measurement unit according to the posture of each frame of image and the first external parameter;
And optimizing the initial value of the zero offset of the gyroscope of the inertial measurement unit based on the posture of the inertial measurement unit for each frame of image and the angular velocity pre-integration, to obtain the target value of the zero offset of the gyroscope.
8. The method according to any one of claims 1 to 7, wherein determining the normalized position and pose of each frame of image specifically comprises:
And determining the normalized relative position and relative posture of each frame of image relative to the first frame of image by using a feature point matching and epipolar geometry method.
9. The method as claimed in any one of claims 1 to 7, wherein the acquiring two or more frames of images captured by the vision sensor within a set time period, the inertial navigation measurement data output by the inertial measurement unit corresponding to the capturing time of each frame of image, and the GNSS positioning data output by the GNSS receiver specifically comprises:
Acquiring the two or more frames of images captured by the vision sensor within the set time period, the inertial navigation measurement data output by the inertial measurement unit, and the GNSS positioning data output by the GNSS receiver;
And interpolating the inertial navigation measurement data and the GNSS positioning data with reference to the capturing time of each frame of image, thereby determining the inertial navigation measurement data and the GNSS positioning data corresponding to each frame of image.
10. A parameter determining apparatus of a sensor, comprising:
the acquisition module is used for acquiring two or more frames of images captured by the vision sensor within a set time period, the inertial navigation measurement data output by the inertial measurement unit corresponding to the capturing time of each frame of image, and the GNSS positioning data output by the GNSS receiver;
the determining module is used for determining the normalized position and posture of each frame of image;
The initial optimization module is used for obtaining an initial value of the scale of the vision sensor based on the normalized position of each frame of image and the GNSS positioning data, and obtaining an initial value of a second external parameter of the vision sensor relative to the world coordinate system based on the normalized position and posture of each frame of image, the GNSS positioning data and the inertial navigation measurement data;
the local optimization module is used for optimizing the initial value of the scale and the initial value of the second external parameter based on the normalized position and posture of each frame of image, the GNSS positioning data, the inertial navigation measurement data and the first external parameter between the vision sensor and the inertial measurement unit, to obtain a second external parameter optimized value and a scale optimized value;
And the global optimization module is used for performing global optimization on the scale optimized value and the second external parameter optimized value based on the normalized position of each frame of image, the inertial navigation measurement data and the GNSS positioning data, to obtain a target value of the scale of the vision sensor, a target value of the second external parameter and a target value of the speed of each frame of image.
11. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement a method of determining parameters of a sensor as claimed in any one of claims 1 to 9.
CN202010985174.3A 2020-09-18 2020-09-18 Method and device for determining parameters of sensor Active CN114199275B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010985174.3A CN114199275B (en) 2020-09-18 2020-09-18 Method and device for determining parameters of sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010985174.3A CN114199275B (en) 2020-09-18 2020-09-18 Method and device for determining parameters of sensor

Publications (2)

Publication Number Publication Date
CN114199275A CN114199275A (en) 2022-03-18
CN114199275B true CN114199275B (en) 2024-06-21

Family

ID=80645330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010985174.3A Active CN114199275B (en) 2020-09-18 2020-09-18 Method and device for determining parameters of sensor

Country Status (1)

Country Link
CN (1) CN114199275B (en)

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100575873C (en) * 2007-12-29 2009-12-30 武汉理工大学 Double container positioning method based on machine vision
US9031809B1 (en) * 2010-07-14 2015-05-12 Sri International Method and apparatus for generating three-dimensional pose using multi-modal sensor fusion
CN102175261B (en) * 2011-01-10 2013-03-20 深圳大学 Visual measuring system based on self-adapting targets and calibrating method thereof
WO2014048475A1 (en) * 2012-09-27 2014-04-03 Metaio Gmbh Method of determining a position and orientation of a device associated with a capturing device for capturing at least one image
CN105783913A (en) * 2016-03-08 2016-07-20 中山大学 SLAM device integrating multiple vehicle-mounted sensors and control method of device
CN107328406B (en) * 2017-06-28 2020-10-16 中国矿业大学(北京) Method and system for positioning mine moving target based on multi-source sensor
CN107449444B (en) * 2017-07-17 2020-04-10 中国人民解放军国防科学技术大学 Multi-star map attitude associated star sensor internal parameter calibration method
CN107588785B (en) * 2017-09-12 2019-11-05 中国人民解放军国防科技大学 Star sensor internal and external parameter simplified calibration method considering image point error
CN108007437B (en) * 2017-11-27 2020-05-29 北京航空航天大学 A method for measuring farmland boundaries and internal obstacles based on multi-rotor aircraft
CN108253938B (en) * 2017-12-29 2020-01-24 武汉大学 Digital close-up photogrammetry identification and inversion method for TBM rock-breaking slag
CN208751577U (en) * 2018-09-20 2019-04-16 江阴市雷奥机器人技术有限公司 A kind of robot indoor locating system
CN109308722B (en) * 2018-11-26 2024-05-14 陕西远航光电有限责任公司 Space pose measurement system and method based on active vision
CN110009739B (en) * 2019-01-29 2023-03-24 浙江省北大信息技术高等研究院 Method for extracting and coding motion characteristics of digital retina of mobile camera
CN110007324A (en) * 2019-02-21 2019-07-12 南京航空航天大学 A kind of fault satellites Relative Navigation based on SLAM
CN109901207A (en) * 2019-03-15 2019-06-18 武汉大学 A high-precision outdoor positioning method combining Beidou satellite system and image features
CN110132302A (en) * 2019-05-20 2019-08-16 中国科学院自动化研究所 Binocular visual odometer positioning method and system fusing IMU information
CN110207714B (en) * 2019-06-28 2021-01-19 广州小鹏自动驾驶科技有限公司 Method for determining vehicle pose, vehicle-mounted system and vehicle
CN110411476B (en) * 2019-07-29 2021-03-23 视辰信息科技(上海)有限公司 Calibration adaptation and evaluation method and system for visual inertial odometer
CN110617814A (en) * 2019-09-26 2019-12-27 中国科学院电子学研究所 Monocular vision and inertial sensor integrated remote distance measuring system and method
CN110702107A (en) * 2019-10-22 2020-01-17 北京维盛泰科科技有限公司 Monocular vision inertial combination positioning navigation method
CN111156984B (en) * 2019-12-18 2022-12-09 东南大学 Monocular vision inertia SLAM method oriented to dynamic scene
CN111307176B (en) * 2020-03-02 2023-06-16 北京航空航天大学青岛研究院 Online calibration method for visual inertial odometer in VR head-mounted display equipment
CN111429524B (en) * 2020-03-19 2023-04-18 上海交通大学 Online initialization and calibration method and system for camera and inertial measurement unit
CN111539982B (en) * 2020-04-17 2023-09-15 北京维盛泰科科技有限公司 Visual inertial navigation initialization method based on nonlinear optimization in mobile platform

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Key Technologies of Low-cost Satellite/Inertial/Visual Integrated Navigation; Feng Zheyu; China Master's Theses Full-text Database; full text *

Also Published As

Publication number Publication date
CN114199275A (en) 2022-03-18

Similar Documents

Publication Publication Date Title
WO2022007602A1 (en) Method and apparatus for determining location of vehicle
CN110057352B (en) A kind of camera attitude angle determination method and device
US12073630B2 (en) Moving object tracking method and apparatus
EP2434256B1 (en) Camera and inertial measurement unit integration with navigation data feedback for feature tracking
CN108932737B (en) Vehicle-mounted camera pitch angle calibration method and device, electronic equipment and vehicle
CN111065043B (en) System and method for fusion positioning of vehicles in tunnel based on vehicle-road communication
CN110411457B (en) Positioning method, system, terminal and storage medium based on stroke perception and vision fusion
CN106814753B (en) Target position correction method, device and system
CN107909614B (en) A positioning method of inspection robot under GPS failure environment
EP2175237B1 (en) System and methods for image-based navigation using line features matching
WO2020133172A1 (en) Image processing method, apparatus, and computer readable storage medium
CN109141411B (en) Positioning method, positioning device, mobile robot, and storage medium
CN107607111A (en) Acceleration biases method of estimation and device, vision inertia odometer and its application
US11468599B1 (en) Monocular visual simultaneous localization and mapping data processing method apparatus, terminal, and readable storage medium
CN110458885B (en) Positioning system and mobile terminal based on stroke perception and vision fusion
CN117128954A (en) Complex environment-oriented combined positioning method for aircraft
CN113256728A (en) IMU equipment parameter calibration method and device, storage medium and electronic device
CN114638897A (en) Multi-camera system initialization method, system and device based on non-overlapping views
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
EP3227634B1 (en) Method and system for estimating relative angle between headings
Huttunen et al. A monocular camera gyroscope
CN111025330B (en) Target inclination angle detection method and device based on depth map
CN112179373A (en) A kind of measurement method of visual odometer and visual odometer
CN113167577A (en) Surveying method for a movable platform, movable platform and storage medium
CN114199275B (en) Method and device for determining parameters of sensor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant