
CN117419707A - Target positioning method based on unmanned aerial vehicle inertia and vision combined system - Google Patents

Target positioning method based on unmanned aerial vehicle inertia and vision combined system

Info

Publication number
CN117419707A
CN117419707A (application CN202311352034.2A)
Authority
CN
China
Prior art keywords
inertial
coordinate system
aerial vehicle
coordinates
unmanned aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311352034.2A
Other languages
Chinese (zh)
Inventor
张时雨
刘程
许常燕
段志强
徐胡超
肖红剑
王旭东
严杰
张婷
唐廷廷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 24 Research Institute
CETC 26 Research Institute
Original Assignee
CETC 24 Research Institute
CETC 26 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 24 Research Institute, CETC 26 Research Institute filed Critical CETC 24 Research Institute
Priority to CN202311352034.2A priority Critical patent/CN117419707A/en
Publication of CN117419707A publication Critical patent/CN117419707A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/10: Navigation by using measurements of speed or acceleration
    • G01C 21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; dead reckoning
    • G01C 21/16: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165: Inertial navigation combined with non-inertial navigation instruments
    • G01C 21/1652: Inertial navigation combined with ranging devices, e.g. LIDAR or RADAR
    • G01C 21/1656: Inertial navigation combined with passive imaging devices, e.g. cameras
    • G01C 21/20: Instruments for performing navigational calculations
    • G01J: MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J 5/00: Radiation pyrometry, e.g. infrared or optical thermometry
    • G01J 5/48: Thermography; techniques using wholly visual means
    • G01J 2005/0077: Imaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Navigation (AREA)

Abstract

The invention relates to the technical field of unmanned aerial vehicle positioning, and in particular to a target positioning method based on a combined inertial and visual system carried by an unmanned aerial vehicle. The method comprises: constructing the combined inertial-navigation/vision system and acquiring positioning information from the inertial navigation system, the laser radar and the infrared camera; fusing the positioning information acquired by the laser radar and the infrared camera to obtain the coordinates of target points in the laser radar coordinate system; stitching multiple point-cloud frames using the inertial navigation and laser radar data and converting the stitched point cloud into the inertial coordinate system of the initial moment; and converting the coordinates in the initial-moment inertial coordinate system into the geographic coordinate system to obtain the located position coordinates. By adding a laser radar and a thermal infrared imager to the inertial navigation already carried by the unmanned aerial vehicle, the invention forms a combined inertial-navigation/vision system that monitors and searches forest and grassland resources and locates target objects in real time.

Description

Target positioning method based on unmanned aerial vehicle inertia and vision combined system
Technical Field
The invention relates to the technical field of unmanned aerial vehicle positioning, in particular to a target positioning method based on an unmanned aerial vehicle inertia and vision combined system.
Background
Forest and grassland resources account for a significant proportion of the Earth's resources, and unmanned-aerial-vehicle monitoring can effectively address the problem of protecting them. However, when a traditional unmanned aerial vehicle is used for forest and grassland resource monitoring, it can only monitor and track targets and cannot position them accurately.
To address this, the invention adds a thermal infrared imager and a laser radar to the inertial navigation already carried by the unmanned aerial vehicle, forming a combined inertial-navigation/vision system. Using the temperature difference between a monitored object and its surroundings, the system searches for and positions targets in forest and grassland areas, thereby supporting the protection of these resources.
Disclosure of Invention
In order to realize accurate positioning of a monitored target, the invention provides a target positioning method based on an unmanned aerial vehicle inertia and vision combined system, which comprises the following steps:
constructing a combined unmanned aerial vehicle inertial navigation and vision system, and acquiring positioning information through the inertial navigation system, the laser radar and the infrared camera;
fusing the positioning information acquired by the laser radar and the infrared camera to obtain the coordinates of points in the laser radar coordinate system;
stitching multiple point-cloud frames using the acquired inertial navigation and laser radar positioning information, and converting the multi-frame point cloud information into the inertial coordinate system of the initial moment;
and converting the coordinates in the inertial coordinate system at the initial moment into the geographic coordinate system to obtain the positioned position coordinates.
Further, the process of obtaining coordinates of a point in the laser radar coordinate system includes:
if a point (u, v) in the infrared image corresponds to a point (x, y, z) in the laser radar frame, the relationship between the two points is expressed, up to the projective scale factor z_c, as:
z_c · [u, v, 1]^T = M · (R_rot · [x, y, z]^T + t_trans)
wherein M denotes the internal (intrinsic and distortion) parameter matrix of the infrared camera; R_rot is the rotation matrix and t_trans the translation vector describing the relative pose of the laser radar and the thermal infrared imager.
Further, the internal parameter matrix M of the infrared camera is expressed as:
M = [ fx  0  cx ;  0  fy  cy ;  0  0  1 ]
wherein fx and fy are the focal lengths of the infrared camera expressed in pixel units; cx and cy are the coordinates of the imaging center.
Further, the process of converting the multi-frame point cloud information into the inertial coordinate system of the initial moment comprises:
[x_m, y_m, z_m, 1]^T = R · Q · [x', y', z', 1]^T
wherein x_m, y_m, z_m are the coordinates of a feature point in the initial inertial navigation coordinate system; x', y', z' are the coordinates of the feature point in the laser radar coordinate system; R is the 4th-order homogeneous transformation matrix formed from the rotation-translation vectors between the initial inertial frame and the inertial frame at the later moment; Q is the coordinate transformation matrix between the inertial sensor and the lidar sensor.
Further, the coordinate transformation matrix Q between the inertial sensor and the lidar sensor is calculated from the coordinates of the same feature points measured by the inertial sensor and the lidar sensor.
further, the 4-order homogeneous transformation matrix R formed by the rotation translation vectors of the initial inertial system and the inertial system at the later moment is expressed as:
wherein R (q) is a projected three-order rotation transformation matrix of the airborne monitoring system; t is the displacement transformation matrix of the onboard detection system after being projected by the ink card support.
Further, the displacement vector T of the airborne detection system after Mercator projection is expressed as:
T = [Δx, Δy, Δz]^T
wherein Δx, Δy and Δz are the changes in the projected coordinates and in elevation of the unmanned aerial vehicle between the initial moment and the later moment; x_1, y_1, z_1 are the position coordinates of the unmanned aerial vehicle in the inertial coordinate system at the initial moment; x_2, y_2, z_2 are the position coordinates of the unmanned aerial vehicle in the inertial coordinate system at the later moment; L is half of the equatorial circumference.
Further, the process of converting a point in the initial inertial coordinate system into a position in the geographic coordinate system comprises:
[x_w, y_w, z_w]^T = R_z · R_x · R_y · [x_m, y_m, z_m]^T
wherein x_m, y_m, z_m are the coordinates of the target to be measured in the initial inertial coordinate system; x_w, y_w, z_w are the corresponding coordinates in the geographic coordinate system; R_z, R_x and R_y are the rotation matrices about the z, x and y axes between the two coordinate systems.
Further, the rotation matrices R_z, R_x and R_y between the two coordinate systems are expressed as:
R_z = [ cos α  -sin α  0 ;  sin α  cos α  0 ;  0  0  1 ]
R_x = [ 1  0  0 ;  0  cos β  -sin β ;  0  sin β  cos β ]
R_y = [ cos γ  0  sin γ ;  0  1  0 ;  -sin γ  0  cos γ ]
wherein α is the rotation angle about the z-axis between the initial inertial coordinate system and the geographic coordinate system, β is the rotation angle about the x-axis, and γ is the rotation angle about the y-axis.
By adding a laser radar and a thermal infrared imager to the inertial navigation already carried by the unmanned aerial vehicle, the invention forms a combined inertial-navigation/vision system that monitors and searches forest and grassland resources and locates target objects in real time. The thermal infrared imager performs monitoring in the field by exploiting the temperature difference between the monitored target and its surroundings, and the combined inertial-navigation/vision system lets the sensors compensate for each other's weaknesses, yielding high-precision, high-rate positioning results.
Drawings
FIG. 1 is a schematic diagram of the checkerboard calibration target employed in an embodiment of the present invention;
FIG. 2 is a schematic diagram of the camera imaging model employed in an embodiment of the present invention;
FIG. 3 is a schematic diagram of multi-frame point cloud information splicing in the present invention;
FIG. 4 is a flowchart of a target positioning algorithm for the unmanned aerial vehicle inertial and visual combination system of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention provides a target positioning method based on an unmanned aerial vehicle inertia and vision combined system, which comprises the following steps:
constructing a combined unmanned aerial vehicle inertial navigation and vision system, and acquiring positioning information through the inertial navigation system, the laser radar and the infrared camera;
fusing the positioning information acquired by the laser radar and the infrared camera to obtain the coordinates of points in the laser radar coordinate system;
stitching multiple point-cloud frames using the acquired inertial navigation and laser radar positioning information, and converting the multi-frame point cloud information into the inertial coordinate system of the initial moment;
and converting the coordinates in the inertial coordinate system at the initial moment into the geographic coordinate system to obtain the positioned position coordinates.
In order to overcome the defects of the prior art, the aim of this embodiment is to equip an unmanned aerial vehicle with inertial navigation and visual sensors, achieve accurate positioning of a monitored target, and thereby protect forest and grassland resources. When a monitored target appears, the combined inertial-navigation/vision system on the unmanned aerial vehicle must keep the target to be monitored within the visual range at all times; once the monitored target is found, it should be positioned in real time and the data transmitted to the ground receiving station.
The combined inertial-navigation/vision target positioning system of the unmanned aerial vehicle comprises a sensor module, a data processing module and a wireless communication module. The sensor module comprises the inertial navigation system, the laser radar and the infrared camera; the data processing module synchronizes the point cloud information, the infrared image information and the inertial navigation information (into a common coordinate system or a common time base); and the wireless communication module transmits the point cloud information, infrared images and other data between the unmanned aerial vehicle and the ground receiving station.
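As an illustration of the synchronization task handled by the data processing module, the Python sketch below pairs each lidar frame with the nearest infrared frame and inertial sample by timestamp; the data layout (lists of (timestamp, payload) tuples) is an assumed structure, not one specified in the patent.

# Sketch: time-synchronize the three sensor streams by pairing every lidar
# frame with the infrared frame and inertial sample whose timestamps are
# closest. Stream layout is an assumption made for illustration.
import bisect

def nearest(stream, t):
    """stream: list of (timestamp, payload) sorted by timestamp; returns the index closest to t."""
    times = [s[0] for s in stream]
    i = bisect.bisect_left(times, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(stream)]
    return min(candidates, key=lambda j: abs(times[j] - t))

def synchronize(lidar, infrared, inertial):
    synced = []
    for t, cloud in lidar:
        synced.append({
            "t": t,
            "cloud": cloud,
            "ir": infrared[nearest(infrared, t)][1],
            "ins": inertial[nearest(inertial, t)][1],
        })
    return synced

# Hypothetical usage with toy streams
lidar = [(0.10, "cloud0"), (0.20, "cloud1")]
infrared = [(0.09, "ir0"), (0.19, "ir1"), (0.21, "ir2")]
inertial = [(0.095, "ins0"), (0.105, "ins1"), (0.195, "ins2")]
print(synchronize(lidar, infrared, inertial))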
1. Data fusion between coordinate systems
The laser radar acquires point cloud information of the monitored target, the inertial navigation system provides position information such as longitude and latitude, and the infrared camera monitors and tracks the target on the principle that there is a large temperature difference between the monitored target and its surroundings. The laser radar yields the point cloud coordinates of the target object; converting between the inertial navigation and laser radar coordinate systems turns these target coordinates into longitude and latitude in the inertial navigation frame, so the transformation matrix between the two coordinate systems is the key technical means for obtaining the longitude and latitude of the monitored target.
1. Combined calibration and data fusion of the infrared camera and the laser radar
The infrared camera is one of the important tools for target positioning in a field environment; combined with sensors such as the laser radar, it can accomplish accurate positioning of the target object.
(1) Infrared camera calibration
Lens distortion affects the camera's imaging, so the internal (intrinsic and distortion) parameters of the infrared camera are solved first. Let the internal parameter matrix of the infrared camera be M, expressed as:
M = [ fx  0  cx ;  0  fy  cy ;  0  0  1 ]
wherein fx and fy are the focal lengths of the infrared camera expressed in pixel units and are generally equal; cx and cy are the imaging center coordinates, typically taken as the principal point of the image.
The M matrix is solved by Zhang Zhengyou's calibration method: the checkerboard is photographed from different directions and angles (the checkerboard is made by gluing aluminum patches onto a wooden board; heating the board creates a large temperature difference between the aluminum and the wood so that the infrared camera can identify the checkerboard corners, as shown in FIG. 1), and M is obtained from the detected black-and-white corner points of the checkerboard and their coordinates in the world coordinate system. As shown in the camera imaging schematic of FIG. 2, the focal length f can be solved using the triangle-similarity principle in the figure.
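By way of illustration only, the following Python sketch shows how such an intrinsic calibration could be run with OpenCV's implementation of Zhang's method; the board dimensions, square size and image folder are hypothetical placeholders, not values taken from this patent.

# Sketch: intrinsic calibration of the infrared camera from checkerboard images
# (Zhang's method via OpenCV). Board size and paths are illustrative assumptions.
import glob
import cv2
import numpy as np

BOARD = (9, 6)          # inner corners per row/column of the heated checkerboard (assumed)
SQUARE = 0.030          # square edge length in meters (assumed)

# 3-D corner coordinates in the board's own frame (z = 0 plane)
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points = [], []
for path in glob.glob("ir_calib/*.png"):          # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (5, 5), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

assert img_points, "no checkerboard detections found"
# M holds fx, fy, cx, cy; dist holds the lens distortion coefficients
rms, M, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print("intrinsic matrix M:\n", M)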
(2) Combined calibration of infrared camera and laser radar
For the joint calibration, the infrared camera and the laser radar must be rigidly fixed to the unmanned aerial vehicle, and the checkerboard is photographed from different angles and directions. Assuming a point (u, v) in the infrared image corresponds to a point (x, y, z) in the laser radar frame, the relationship between the two points can be expressed, up to the projective scale factor z_c, as:
z_c · [u, v, 1]^T = M · (R_rot · [x, y, z]^T + t_trans)
wherein R_rot is the rotation matrix and t_trans the translation vector describing the relative pose of the laser radar and the thermal infrared imager. The matrices R_rot and t_trans can be solved from the known mounting geometry of the infrared camera and the laser radar together with the positions of several calibration points.
(3) Data fusion of infrared camera and laser radar
The data fusion of the infrared camera and the laser radar takes the coordinates of a target point in the infrared image captured by the infrared camera and, through the joint calibration matrices, obtains the coordinates of the corresponding point in the laser radar coordinate system at the same instant.
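The following Python sketch illustrates one way this fusion could be implemented, assuming the joint calibration results M, R_rot and t_trans are already available as NumPy arrays; all variable names and numeric values are illustrative assumptions, not values from the patent.

# Sketch: fuse an infrared detection with the lidar point cloud.
# Each lidar point is projected into the infrared image with the joint
# calibration (M, R_rot, t_trans); the lidar point whose projection falls
# closest to the detected target pixel (u, v) is taken as the target's
# coordinates in the lidar frame.
import numpy as np

def lidar_point_for_pixel(points_lidar, u, v, M, R_rot, t_trans, max_px=3.0):
    """points_lidar: (N, 3) array of lidar coordinates (x, y, z)."""
    cam = points_lidar @ R_rot.T + t_trans        # lidar frame -> camera frame (assumed convention)
    in_front = cam[:, 2] > 0                      # keep points in front of the camera
    cam = cam[in_front]
    pix = (M @ cam.T).T                           # perspective projection
    pix = pix[:, :2] / pix[:, 2:3]
    d = np.hypot(pix[:, 0] - u, pix[:, 1] - v)
    best = np.argmin(d)
    if d[best] > max_px:
        return None                               # no lidar return behind this pixel
    return points_lidar[in_front][best]

# Hypothetical usage with placeholder calibration values
M = np.array([[400.0, 0, 320.0], [0, 400.0, 256.0], [0, 0, 1.0]])
R_rot = np.eye(3)
t_trans = np.array([0.05, 0.0, 0.0])
cloud = np.random.rand(1000, 3) * [10, 10, 5] + [0, 0, 1]
print(lidar_point_for_pixel(cloud, 320, 256, M, R_rot, t_trans))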
2. Combined calibration and data fusion of inertial navigation and laser radar
Because the resolution of a single frame of point cloud information is low, multiple point-cloud frames must be stitched together to obtain high-resolution point cloud information of the object to be detected. Since the unmanned aerial vehicle moves while searching for the target, the multi-frame point clouds are stitched in the initial inertial coordinate system with the help of the inertial system, yielding high-precision point cloud information containing the target to be detected.
(1) Multi-frame point cloud information stitching
For the joint calibration of the inertial navigation and the laser radar, the unmanned aerial vehicle carrying both sensors flies along a pre-designed trajectory, each sensor collects data over a given area on the ground, and the transformation matrix Q between the two sensors is computed by recording the positions of the same feature points in the same frame. The specific calculation is:
[x', y', z', 1]^T = Q · [x, y, z, 1]^T
wherein x', y', z' are the coordinates of a feature point in the laser radar coordinate system; x, y, z are the coordinates of the same feature point in the inertial navigation coordinate system; Q is the transformation matrix between the two sensors (the inertial sensor and the lidar sensor).
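One common way to recover such a rigid transform from matched feature points is an SVD-based (Kabsch) fit; the Python sketch below is an illustrative implementation under that assumption and is not taken from the patent.

# Sketch: estimate the 4x4 rigid transform Q that maps points expressed in the
# inertial-navigation frame onto the same physical points expressed in the
# lidar frame, from matched feature-point pairs (SVD / Kabsch fit).
import numpy as np

def fit_rigid_transform(src, dst):
    """src, dst: (N, 3) matched points; returns 4x4 Q with dst ~ Q @ src (homogeneous)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    Q = np.eye(4)
    Q[:3, :3], Q[:3, 3] = R, t
    return Q

# Hypothetical check with synthetic correspondences
rng = np.random.default_rng(0)
pts_ins = rng.uniform(-5, 5, size=(20, 3))
true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
pts_lidar = pts_ins @ true_R.T + np.array([0.2, -0.1, 0.05])
print(fit_rigid_transform(pts_ins, pts_lidar).round(3))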
(2) Converting multi-frame point cloud information into inertial coordinates at initial time
In order to put the multi-frame point cloud information into one-to-one correspondence with the inertial coordinates of the initial moment, the multi-frame point clouds are stitched on the basis of the transformation matrix Q and the translation matrix t obtained from the calibration of the two sensors (Q and t can be computed by the SVD method, nonlinear optimization and similar methods). The ICP algorithm is generally used to stitch the multi-frame point clouds together and gives good results; the goal of the stitching is to minimize an error function, which is expressed as:
F(Q, t) = (1/m) · Σ_{j=1..m} || Q · p_j + t - q_j ||^2
wherein F(Q, t) is the error function; m is the number of nearest-neighbor point pairs between point clouds M and N; p_j is a point in point cloud M; q_j is the corresponding point in point cloud N.
The multi-frame point clouds are stitched using equation (4), yielding high-resolution, high-precision point cloud information.
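The Python sketch below shows a compact point-to-point ICP of the kind described, minimizing the nearest-neighbor error function above with a KD-tree for correspondence search; the iteration count and synthetic test data are illustrative assumptions.

# Sketch: point-to-point ICP that aligns point cloud M (source) onto point
# cloud N (target) by iteratively minimizing sum || Q p_j + t - q_j ||^2 over
# nearest-neighbor pairs. Illustrative only; a real stitching pipeline would
# add outlier rejection and a convergence threshold tuned to the lidar.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    tree = cKDTree(target)
    T = np.eye(4)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)                  # nearest neighbor in target for each source point
        matched = target[idx]
        # closed-form rigid fit (Kabsch) between current source and its matches
        cs, cm = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (matched - cm))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = cm - R @ cs
        src = src @ R.T + t                       # apply the incremental transform
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T                              # accumulate the total transform
    return T, src

# Hypothetical usage: align two consecutive lidar frames
frame_a = np.random.rand(500, 3)
rot = np.array([[0.995, -0.0998, 0], [0.0998, 0.995, 0], [0, 0, 1]])
frame_b = frame_a @ rot.T + [0.1, 0, 0]
T, aligned = icp(frame_a, frame_b)
print(T.round(3))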
The acquired point cloud data are then converted into the inertial coordinate system of the initial moment using the data of the combined inertial navigation system, so that more complete multi-frame point cloud data can be stitched, as shown in the multi-frame point cloud information stitching of FIG. 3. The multi-frame point cloud information is spliced into the initial inertial coordinate system through a conversion matrix, specifically:
[x_m, y_m, z_m, 1]^T = R · Q · [x', y', z', 1]^T,   with R = [ R(q)  T ;  0  1 ] and T = [Δx, Δy, Δz]^T
wherein x', y', z' are the coordinates of a feature point in the laser radar coordinate system; x_m, y_m, z_m are its coordinates in the initial inertial navigation coordinate system; R is the 4th-order homogeneous transformation matrix formed from the rotation-translation vectors between the initial inertial frame and the inertial frame at the later moment; R(q) is the 3rd-order rotation matrix converted from the quaternion, obtained from the attitudes of the unmanned aerial vehicle before and after; T is the displacement vector of the airborne detection system after Mercator projection; Δx, Δy and Δz are the changes in the projected coordinates and in elevation of the unmanned aerial vehicle between the initial moment and the later moment; L is half of the equatorial circumference, generally 20037508.34 m; x_1, y_1, z_1 are the position coordinates of the unmanned aerial vehicle in the inertial coordinates at the initial moment; x_2, y_2, z_2 are the position coordinates of the unmanned aerial vehicle in the inertial coordinates at the later moment.
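The following Python sketch illustrates this conversion into the initial-moment inertial frame; the Web Mercator formulas, the quaternion convention and all variable names are assumptions made for illustration and would need to be replaced by the projection and conventions actually used.

# Sketch: splice a stitched cloud (lidar frame, later moment) into the
# initial-moment inertial frame via x_m = R @ Q @ x'. R combines the attitude
# change R(q) with the projected displacement T. The Web Mercator conversion
# below is one common formulation and is an assumption, not from the patent.
import numpy as np
from scipy.spatial.transform import Rotation

L_HALF_EQUATOR = 20037508.34          # half the equatorial circumference, meters

def mercator(lon_deg, lat_deg):
    x = lon_deg / 180.0 * L_HALF_EQUATOR
    y = np.log(np.tan((90.0 + lat_deg) * np.pi / 360.0)) / np.pi * L_HALF_EQUATOR
    return x, y

def build_R(q_init, q_now, pos_init, pos_now):
    """q_*: quaternions (x, y, z, w); pos_*: (lon_deg, lat_deg, alt_m). Conventions assumed."""
    # rotation taking the current attitude back to the initial attitude (assumed direction)
    R_q = (Rotation.from_quat(q_init) * Rotation.from_quat(q_now).inv()).as_matrix()
    x1, y1 = mercator(*pos_init[:2])
    x2, y2 = mercator(*pos_now[:2])
    T = np.array([x2 - x1, y2 - y1, pos_now[2] - pos_init[2]])   # Δx, Δy, Δz
    R = np.eye(4)
    R[:3, :3], R[:3, 3] = R_q, T
    return R

def to_initial_inertial(points_lidar, R, Q):
    pts_h = np.c_[points_lidar, np.ones(len(points_lidar))]      # homogeneous coordinates
    return (pts_h @ (R @ Q).T)[:, :3]

# Hypothetical usage with placeholder poses and an identity lidar-INS extrinsic
Q = np.eye(4)
R = build_R([0, 0, 0, 1], [0, 0, 0.05, 0.999],
            (106.55, 29.56, 300.0), (106.551, 29.561, 305.0))
print(to_initial_inertial(np.random.rand(5, 3) * 20, R, Q).round(2))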
2. Target positioning algorithm
With the conversions between the coordinate systems in place, the target positioning algorithm can be implemented. The target positioning process is as follows: the thermal infrared imager detects an object with an obvious temperature difference from its surroundings, the laser radar acquires the object's point cloud information, and the point cloud is converted into the geographic coordinate system to obtain longitude and latitude, thereby positioning the object. The algorithm flow chart is shown in FIG. 4.
(1) Identification of objects
Forest and grassland targets are recognized from the infrared image; the target point is separated by common image processing such as threshold segmentation.
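A minimal Python sketch of such threshold-based separation is given below; the threshold value and minimum blob area are illustrative assumptions.

# Sketch: separate hot targets from the background of an infrared frame by
# thresholding and keep the centroid of each sufficiently large blob.
# Threshold and minimum-area values are illustrative assumptions.
import cv2
import numpy as np

def detect_hot_targets(ir_gray, thresh=200, min_area=25):
    _, mask = cv2.threshold(ir_gray, thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    targets = []
    for i in range(1, n):                         # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            targets.append(tuple(centroids[i]))   # (u, v) pixel coordinates of the blob centroid
    return targets

# Hypothetical usage with a synthetic frame containing one warm region
frame = np.zeros((512, 640), np.uint8)
frame[200:220, 300:330] = 230
print(detect_hot_targets(frame))                  # roughly [(314.5, 209.5)]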
(2) Positioning of objects
Once the multi-frame point cloud containing the target to be detected has been converted into the inertial coordinate system of the initial moment, the position of the laser radar in that coordinate system is known, so the position of the multi-frame point cloud information in the initial inertial coordinate system can be obtained and the geographic coordinates of the point cloud can be solved; the coordinates of the laser radar at the initial moment are likewise obtained. Assume the position of the object under test is (x_m, y_m, z_m) in the initial inertial coordinates and (x_w, y_w, z_w) in the geographic coordinate system; the formula for converting a point in the initial inertial coordinate system into a position in the geographic coordinate system is:
[x_w, y_w, z_w]^T = R_z(α) · R_x(β) · R_y(γ) · [x_m, y_m, z_m]^T
wherein α, β and γ are the rotation angles about the z-axis, the x-axis and the y-axis, respectively, between the two coordinate systems.
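The following Python sketch applies this final conversion with the standard elementary rotation matrices; the angle values are placeholders and the sign conventions are assumptions made for illustration.

# Sketch: rotate a target's initial-inertial coordinates (x_m, y_m, z_m) into
# the geographic frame with the composed rotation R_z(alpha) R_x(beta) R_y(gamma).
import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Rx(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def inertial_to_geographic(p_m, alpha, beta, gamma):
    """p_m: (x_m, y_m, z_m) in the initial inertial frame -> (x_w, y_w, z_w)."""
    return Rz(alpha) @ Rx(beta) @ Ry(gamma) @ np.asarray(p_m, float)

# Hypothetical usage with placeholder angles (radians)
print(inertial_to_geographic((12.0, -3.5, 140.0), 0.01, 0.02, -0.015).round(3))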
This completes the target positioning method of the combined unmanned-aerial-vehicle inertial/vision system. The system fuses the infrared image captured by the infrared camera with the laser radar point cloud information to obtain, in the laser radar coordinate system, the position corresponding to the target point in the infrared image, and then uses the data fusion of the laser radar and the inertial system to compute the position of the target point in the inertial coordinate system, thereby achieving the positioning of the target point.
In the description of the present invention, it should be understood that the terms "coaxial," "bottom," "one end," "top," "middle," "another end," "upper," "one side," "top," "inner," "outer," "front," "center," "two ends," etc. indicate or are based on the orientation or positional relationship shown in the drawings, merely to facilitate description of the invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the invention.
In the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "configured," "connected," "secured," "rotated," and the like are to be construed broadly, and may mean, for example, fixedly connected, detachably connected, or integrally formed; mechanically or electrically connected; connected directly or indirectly through intermediaries; or in communication or interaction with each other. Unless explicitly defined otherwise, the meaning of the above terms in this application will be understood by those of ordinary skill in the art according to the specific circumstances.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (9)

1. The target positioning method based on the unmanned aerial vehicle inertia and vision combined system is characterized by comprising the following steps of:
constructing a combined unmanned aerial vehicle inertial navigation and vision system, and acquiring positioning information through the inertial navigation system, the laser radar and the infrared camera;
fusing the positioning information acquired by the laser radar and the infrared camera to obtain the coordinates of points in the laser radar coordinate system;
stitching multiple point-cloud frames using the acquired inertial navigation and laser radar positioning information, and converting the multi-frame point cloud information into the inertial coordinate system of the initial moment;
and converting the coordinates in the inertial coordinate system at the initial moment into the geographic coordinate system to obtain the positioned position coordinates.
2. The method for positioning a target based on a combined inertial and visual system of an unmanned aerial vehicle according to claim 1, wherein the step of obtaining coordinates of a point in a laser radar coordinate system comprises:
if a point (u, v) in the infrared image corresponds to a point (x, y, z) in the laser radar frame, the relationship between the two points is expressed, up to the projective scale factor z_c, as:
z_c · [u, v, 1]^T = M · (R_rot · [x, y, z]^T + t_trans)
wherein M denotes the internal (intrinsic and distortion) parameter matrix of the infrared camera; R_rot is the rotation matrix and t_trans the translation vector describing the relative pose of the laser radar and the thermal infrared imager.
3. The method for positioning a target based on a combined inertial and visual system of an unmanned aerial vehicle according to claim 2, wherein the internal parameter matrix M of the infrared camera is expressed as:
M = [ fx  0  cx ;  0  fy  cy ;  0  0  1 ]
wherein fx and fy are the focal lengths of the infrared camera expressed in pixel units; cx and cy are the coordinates of the imaging center.
4. The method for positioning a target based on a combined inertial and visual system of an unmanned aerial vehicle according to claim 1, wherein the process of converting the multi-frame point cloud information into the inertial coordinate system of the initial moment comprises:
[x_m, y_m, z_m, 1]^T = R · Q · [x', y', z', 1]^T
wherein x_m, y_m, z_m are the coordinates of a feature point in the initial inertial navigation coordinate system; x', y', z' are the coordinates of the feature point in the laser radar coordinate system; R is the 4th-order homogeneous transformation matrix formed from the rotation-translation vectors between the initial inertial frame and the inertial frame at the later moment; Q is the coordinate transformation matrix between the inertial sensor and the lidar sensor.
5. The method for positioning an object based on a combined inertial and visual system of an unmanned aerial vehicle according to claim 4, wherein the coordinate transformation matrix Q between the inertial sensor and the lidar sensor is calculated from the coordinates of the same feature points measured by the inertial sensor and the lidar sensor.
6. The method for positioning a target based on a combined inertial and visual system of an unmanned aerial vehicle according to claim 4, wherein the 4th-order homogeneous transformation matrix R formed from the rotation-translation vectors between the initial inertial frame and the inertial frame at the later moment is expressed as:
R = [ R(q)  T ;  0  1 ]
wherein R(q) is the 3rd-order rotation matrix of the airborne monitoring system obtained from the attitude quaternion; T is the displacement vector of the airborne detection system after Mercator projection.
7. The method for positioning a target based on an unmanned aerial vehicle inertia and vision combination system according to claim 6, wherein the displacement vector T of the airborne detection system after Mercator projection is expressed as:
T = [Δx, Δy, Δz]^T
wherein Δx, Δy and Δz are the changes in the projected coordinates and in elevation of the unmanned aerial vehicle between the initial moment and the later moment; x_1, y_1, z_1 are the position coordinates of the unmanned aerial vehicle in the inertial coordinate system at the initial moment; x_2, y_2, z_2 are the position coordinates of the unmanned aerial vehicle in the inertial coordinate system at the later moment; L is half of the equatorial circumference.
8. The method for positioning a target based on a combined inertial and visual system of an unmanned aerial vehicle according to claim 1, wherein the process of converting a point in the initial inertial coordinate system into a position in the geographic coordinate system comprises:
[x_w, y_w, z_w]^T = R_z · R_x · R_y · [x_m, y_m, z_m]^T
wherein x_m, y_m, z_m are the coordinates of the target to be measured in the initial inertial coordinate system; x_w, y_w, z_w are the corresponding coordinates in the geographic coordinate system; R_z, R_x and R_y are the rotation matrices about the z, x and y axes between the two coordinate systems.
9. The method for positioning a target based on a combined inertial and visual system of an unmanned aerial vehicle according to claim 8, wherein the rotation matrices R_z, R_x and R_y between the two coordinate systems are expressed as:
R_z = [ cos α  -sin α  0 ;  sin α  cos α  0 ;  0  0  1 ]
R_x = [ 1  0  0 ;  0  cos β  -sin β ;  0  sin β  cos β ]
R_y = [ cos γ  0  sin γ ;  0  1  0 ;  -sin γ  0  cos γ ]
wherein α is the rotation angle about the z-axis between the initial inertial coordinate system and the geographic coordinate system, β is the rotation angle about the x-axis, and γ is the rotation angle about the y-axis.
CN202311352034.2A 2023-10-18 2023-10-18 Target positioning method based on unmanned aerial vehicle inertia and vision combined system Pending CN117419707A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311352034.2A CN117419707A (en) 2023-10-18 2023-10-18 Target positioning method based on unmanned aerial vehicle inertia and vision combined system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311352034.2A CN117419707A (en) 2023-10-18 2023-10-18 Target positioning method based on unmanned aerial vehicle inertia and vision combined system

Publications (1)

Publication Number Publication Date
CN117419707A true CN117419707A (en) 2024-01-19

Family

ID=89527737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311352034.2A Pending CN117419707A (en) 2023-10-18 2023-10-18 Target positioning method based on unmanned aerial vehicle inertia and vision combined system

Country Status (1)

Country Link
CN (1) CN117419707A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination