
CN109764876B - Multi-mode fusion positioning method of unmanned platform - Google Patents

Multi-mode fusion positioning method of unmanned platform

Info

Publication number
CN109764876B
CN109764876B (application CN201910130418.7A)
Authority
CN
China
Prior art keywords
positioning
information
positioning system
matrix
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910130418.7A
Other languages
Chinese (zh)
Other versions
CN109764876A (en)
Inventor
鞠孝亮
赵卉菁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN201910130418.7A
Publication of CN109764876A
Application granted
Publication of CN109764876B

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract



The invention provides a multi-modal fusion positioning method for an unmanned platform. The method includes: mounting a plurality of positioning systems on the unmanned platform; learning, for each positioning system, the neural network parameters that describe its error model; obtaining, from the neural network parameters, the error information matrix of the positioning algorithm output; and obtaining the positioning result output by each positioning system based on its information matrix. The positioning result and information matrix of each positioning system are input into an information filter, and the information filter outputs the fused positioning result of the unmanned platform. The method of the embodiments of the invention can effectively fuse the outputs of a multi-module positioning system and obtain an accurate and stable fused positioning result, providing a basis for unmanned platforms such as mobile robots to perform other complex tasks. It is highly scalable and can be applied to fusion positioning of multiple modules.


Description

Multi-mode fusion positioning method of unmanned platform
Technical Field
The invention relates to the technical field of positioning, in particular to a multi-mode fusion positioning method for an unmanned platform.
Background
In the development of mobile robots, positioning is a foundational technology for carrying out various complex tasks; effective execution of tasks such as navigation and planning depends on accurate positioning input. In recent years, with the development of unmanned driving and sensor technology, many positioning strategies relying on different sensor modules have emerged. Because different types of sensors differ in measurement information and positioning algorithms, each single-module positioning technology has its own strengths and weaknesses, and an intelligent vehicle platform in a complex scene must integrate multi-module positioning technologies to guarantee reliable and stable output of positioning information.
Most existing multi-module positioning methods switch between modules according to empirically designed rules, which is essentially equivalent to alternating single-module positioning and is no longer reliable once the conditions underlying that experience no longer hold. A probability-based Bayesian fusion method can, in principle, realize true information fusion, but its basic preconditions are that a reliable error model is available for the positioning of each module and that the modules' outputs are mutually compatible in the fusion process. Among existing single-module positioning technologies, only a limited number, such as inertial navigation and dead reckoning, have calibration techniques and an error propagation theory from which an error model can be estimated. For most positioning technologies based on environmental perception, the error model is difficult to derive because of internal nonlinear feature-processing steps, and it is often scene-dependent; for example, a laser positioning method is more accurate in a scene with rich features.
For multi-module positioning, the aim is to achieve fusion and complementarity among the modules, but an effective fusion means is still lacking, so multi-source positioning fusion remains a major open problem in the positioning field.
Disclosure of Invention
The embodiment of the invention provides a multi-mode fusion positioning method of an unmanned platform, which aims to overcome the problems in the prior art.
In order to achieve the purpose, the invention adopts the following technical scheme.
A multi-modal fusion positioning method of an unmanned platform comprises the following steps:
mounting a plurality of positioning systems on an unmanned platform, respectively learning the neural network parameters of each positioning system that describe its error model, obtaining from the neural network parameters the error information matrix of the positioning algorithm output, and obtaining the positioning result output by each positioning system based on the information matrix of each positioning system;
and inputting the positioning result and the information matrix of each positioning system into an information filter, and outputting the fusion positioning result of the unmanned platform by the information filter.
Preferably, the respectively learning the neural network parameters of each positioning system that describe its error model, and the obtaining from the neural network parameters of the error information matrix of the positioning algorithm output, include:
collecting input data of the positioning algorithm of each positioning system, and building a training data set for each positioning system from the input data; designing a corresponding neural network for each positioning system, and representing the mapping relation between the positioning scene data of each positioning system and its error model in neural network form; learning the neural network parameters of each positioning system that describe its error model by using the training data set of each positioning system and the mapping relation between the positioning scene data and the error model; and obtaining from the neural network parameters the error information matrix of the positioning algorithm output.
Preferably, when the positioning system is a laser odometer, the respectively learning the neural network parameters of each positioning system that describe its error model, and the obtaining from the neural network parameters of the error information matrix of the positioning algorithm output, include:
for the two-dimensional positioning and orientation problem, projecting single-frame two-dimensional laser scan data onto a two-dimensional plane to obtain a single-frame occupancy grid map, inputting the grid map into a convolutional neural network (CNN), and outputting 6 independent elements; forming a lower triangular matrix from the 6 independent elements and multiplying the lower triangular matrix by its transpose to obtain a positive semi-definite information matrix, which is taken as the information matrix of the laser odometer.
Preferably, the inputting the positioning result and the information matrix of each positioning system into an information filter, the information filter outputting the fused positioning result, includes:
the information filter acquires the information vector and the information matrix of the previous frame's fused positioning result and converts them into a local coordinate system with the previous frame position as the zero point; the converted information vector and information matrix, together with the positioning results and information matrices of the positioning systems, are input into the information filter, and the information filter outputs the fused relative positioning of the current frame.
Preferably, the inputting the positioning result and the information matrix of each positioning system into an information filter, the information filter outputting the fused positioning result, includes:
selecting a positioning system with a known error model as the fusion reference positioning system, wherein, for the positioning system to be learned, the fusion reference positioning system and the positioning system to be learned satisfy the state relation:
x_t = u_t + ε_t,    z_t = x_t + δ_t
wherein x_t represents the relative positioning value with respect to the previous frame position, u_t is the relative positioning estimate output by the fusion reference positioning system, z_t is the relative positioning estimate output by the positioning system to be learned, ε_t is the Gaussian error of the relative positioning estimate of the fusion reference positioning system, R_t is the covariance matrix of the fusion reference positioning system, δ_t is the Gaussian error of the relative positioning estimate of the positioning system to be learned, and Q_t is the covariance matrix of the error of the positioning system to be learned;
letting the two-dimensional positioning input data of the information filter at time t be z_t, Q_t, u_t, R_t, the information filtering and fusion steps of the information filter comprise:
1. Coordinate system transformation: Ω_{t-1} ← G_t^T Ω_{t-1} G_t,  ξ_{t-1} ← G_t^T ξ_{t-1}
2. Information matrix prediction: Ω̄_t = R_t^{-1}
3. Information vector prediction: ξ̄_t = Ω̄_t u_t
4. Information matrix update based on the observation: Ω_t = Ω̄_t + Q_t^{-1}
5. Information vector update based on the observation: ξ_t = ξ̄_t + Q_t^{-1} z_t
6. Fused localization solution: μ_t = Ω_t^{-1} ξ_t
Output μ_t, Ω_t, ξ_t.
wherein the two-dimensional positioning solution is μ_t = (μ_{x,t}, μ_{y,t}, μ_{θ,t})^T, and the two-dimensional positioning coordinate transformation matrix is
G_t = [ cos μ_{θ,t-1}  -sin μ_{θ,t-1}  0
        sin μ_{θ,t-1}   cos μ_{θ,t-1}  0
        0               0              1 ]
and ξ_t is the information vector of the information filter, with ξ_t = Ω_t μ_t.
With the technical scheme provided by the embodiments of the invention, the method can effectively fuse the outputs of a multi-module positioning system, obtain an accurate and stable fused positioning result, and provide a basis for unmanned platforms such as mobile robots to perform other complex tasks. It is highly scalable and can be applied to fusion positioning of multiple modules.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic diagram illustrating an implementation principle of a multi-modal fusion positioning method for an unmanned platform according to an embodiment of the present invention;
fig. 2 is a processing flow chart of a multi-modal fusion positioning method for an unmanned platform according to an embodiment of the present invention;
fig. 3 is an implementation schematic diagram of a learning method for an information matrix of a positioning system according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a process of performing information fusion processing by an information filter according to an embodiment of the present invention;
fig. 5 is a process flow diagram of a scene-error mapping algorithm for a laser odometer according to an embodiment of the present invention;
fig. 6 is a processing flow chart of an error model learning algorithm of a scene-error mapping algorithm of a laser odometer according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
For the convenience of understanding the embodiments of the present invention, the following description will be further explained by taking several specific embodiments as examples in conjunction with the drawings, and the embodiments are not to be construed as limiting the embodiments of the present invention.
Example one
The embodiment of the invention provides an end-to-end error model learning method based on machine learning, aimed at the multi-module positioning fusion problem of mobile robots. The method is based on a Bayes filter fusion framework; it can learn the error models of the outputs of various positioning systems and use them for positioning fusion, and its fused positioning solution is accurate and reliable with a continuous, jitter-free positioning trajectory. Within the fusion framework, no complex error model needs to be designed or assumed for the different modules, and each module can work independently. In the learning process, one round of learning and updating requires only coarse-precision information about the start and end points of a trajectory, without relying on continuous, high-precision ground-truth positioning, so the method can be used for online learning on an unmanned platform.
The implementation principle schematic diagram of the multi-modal fusion positioning method of the mobile robot provided by the embodiment of the invention is shown in fig. 1, and the specific processing flow is shown in fig. 2, and the method comprises the following steps:
firstly, a mobile robot is provided with N positioning systems; input data of the positioning algorithm of each positioning system are collected in the usage scene of the mobile robot, and the training data set of each positioning system is built from the collected data;
and secondly, a positioning fusion reference is selected for each positioning system, and the information matrix of each positioning system relative to the previous frame (used to describe the error distribution of that positioning system) is learned frame by frame. The implementation principle of the learning method for the information matrix of a positioning system provided by the embodiment of the invention is shown in fig. 3 and comprises the following steps: collecting the input data of the positioning algorithm of each positioning system and building the training data set of each positioning system from the input data. Since positioning error is generally related to the scene information input to the algorithm, for any positioning system X whose error model needs to be learned, a corresponding neural network is designed, and the mapping relation between the positioning scene data of each positioning system and its error model is represented in neural network form. Then the information matrix of each positioning system relative to the previous frame is learned using the training data set of each positioning system and the mapping relation between the positioning scene data and the error model.
The process of learning the information matrix of each positioning system using the training data set and the mapping relation comprises the following steps:
1. Select a positioning system A with a known error model (such as an inertial navigation system) as the fusion reference system, with data input D_1; system A outputs a relative positioning result and error covariance matrix <u, R> (the covariance matrix is the inverse of the information matrix);
2. For any positioning system X whose error model is to be learned (with data input D_2, outputting a positioning result and error information matrix <z, Q>), initialize the error-model neural network. Select scene-related data D_{2-scene} that may affect its positioning error as the input of the neural network; for example, for a visual odometer the real-time image can be selected as the input. The output is the error information matrix (see Example two);
3. Let the ground-truth global positions of the start point and end point of a training trajectory Traj of arbitrary length T be x_0^G and x_T^G. From the data, the positioning input data D_{1,t}, D_{2,t} of the t-th frame and the input data D_{2-scene,t} required by the neural network can be obtained; under the neural network parameters θ, the relative positioning result and error covariance matrix of each consecutive frame, <u_t, R_t> and <z_t, Q_t>, can then be obtained. Feeding the relative positioning result of each frame and its error covariance matrix into the fusion framework shown in fig. 1 yields the fused relative positioning result and information matrix <μ_t, Ω_t> of each frame. Accumulating the relative positionings μ_1, μ_2, ..., μ_T by coordinate-system transformation, starting from the position x_0^G, gives the global position estimate x̂_T^G under the current network parameters.
4. Let the loss function be J(x̂_T^G, x_T^G), from which the J value of the training trajectory Traj is obtained. The neural network parameters can then be updated as θ ← θ − η ∂J/∂θ, where the loss function can be designed according to experimental requirements, e.g. J = ||x̂_T^G − x_T^G||^2 (a code sketch of steps 3-4 follows this list).
5. Load different training trajectories multiple times and repeat steps 3-4 to learn the parameters. Learning ends when the set maximum number of training iterations is reached. At this point, the neural network parameters can be used to map any D_{2-scene,t} to a usable information matrix.
And then, obtaining the positioning result output by each positioning system based on the information matrix of each positioning system.
And thirdly, the positioning result and information matrix of each positioning system are input into an information filter to obtain the fused positioning result. The information filter realizes fusion positioning using the error-model learning result of each positioning system and outputs the fused positioning result. The information filter can be updated online during application, continuously refining the error model.
A schematic diagram of a process of information fusion processing performed by an information filter according to an embodiment of the present invention is shown in fig. 4, and includes the following processing procedures:
the information filter obtains the information vector and the information matrix of the previous frame's fused positioning result and converts them into a local coordinate system with the previous frame position as the zero point; the converted information vector and information matrix, together with the positioning results and information matrices of the positioning systems, are input into the information filter, and the information filter outputs the fused relative positioning of the current frame.
The information filter is a standard filter in the prior literature. Its specific working process is as follows: the two positioning systems of the second step, namely the fusion reference positioning system and the positioning system to be learned, satisfy the state relation:
x_t = u_t + ε_t,    z_t = x_t + δ_t
wherein x_t represents the relative positioning value with respect to the previous frame position, u_t is the relative positioning estimate output by the fusion reference positioning system, z_t is the relative positioning estimate output by the positioning system to be learned, and ε_t, δ_t respectively represent the Gaussian errors of the relative positioning estimates of the two positioning systems, whose error covariance matrices are R_t and Q_t respectively.
After obtaining the two-dimensional positioning input data z_t, Q_t, u_t, R_t at time t, the information filtering and fusion steps at that time are as follows (a code sketch follows this list):
1. Coordinate system transformation: Ω_{t-1} ← G_t^T Ω_{t-1} G_t,  ξ_{t-1} ← G_t^T ξ_{t-1}
2. Information matrix prediction: Ω̄_t = R_t^{-1}
3. Information vector prediction: ξ̄_t = Ω̄_t u_t
4. Information matrix update based on the observation: Ω_t = Ω̄_t + Q_t^{-1}
5. Information vector update based on the observation: ξ_t = ξ̄_t + Q_t^{-1} z_t
6. Fused localization solution: μ_t = Ω_t^{-1} ξ_t
Output μ_t, Ω_t, ξ_t.
wherein the two-dimensional positioning solution is μ_t = (μ_{x,t}, μ_{y,t}, μ_{θ,t})^T, and the two-dimensional (3-degree-of-freedom) positioning coordinate transformation matrix is
G_t = [ cos μ_{θ,t-1}  -sin μ_{θ,t-1}  0
        sin μ_{θ,t-1}   cos μ_{θ,t-1}  0
        0               0              1 ]
and ξ_t is the information vector of the information filter, with ξ_t = Ω_t μ_t.
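The following minimal NumPy sketch implements one fusion step under the reading of steps 2-6 given above for the model x_t = u_t + ε_t, z_t = x_t + δ_t. Since the original equation images are not reproduced in this text, the reconstructed formulas, and all function and variable names, are assumptions rather than the patent's exact specification.

```python
import numpy as np

def transform_matrix(theta: float) -> np.ndarray:
    """3-degree-of-freedom (x, y, theta) coordinate transformation matrix G_t."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def information_filter_step(u, R, z, Q):
    """Fuse the reference estimate (u, R) with the learned system's
    estimate (z, Q) in information form, returning mu_t, Omega_t, xi_t."""
    omega_bar = np.linalg.inv(R)         # 2. information matrix prediction
    xi_bar = omega_bar @ u               # 3. information vector prediction
    Q_inv = np.linalg.inv(Q)
    omega = omega_bar + Q_inv            # 4. matrix update from the observation
    xi = xi_bar + Q_inv @ z              # 5. vector update from the observation
    mu = np.linalg.solve(omega, xi)      # 6. fused localization solution mu_t
    return mu, omega, xi
```

In this reading, step 1 merely re-expresses the previous frame's fused result in the new local frame via G_t before the per-frame fusion is applied.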
Example two
This embodiment describes the technique in detail, taking error-model learning for a laser odometer as an example. Because the error model of dead reckoning is well behaved (initial sensor calibration can yield a reliable error model according to error propagation theory), dead reckoning is selected as the fusion reference to be fused with the laser odometer.
The processing flow of the scene-error mapping algorithm of the laser odometer provided by the embodiment of the invention is shown in fig. 5. The specific process comprises: projecting single-frame two-dimensional laser scan data onto a two-dimensional plane to obtain a single-frame occupancy grid map, inputting the grid map into a convolutional neural network (CNN), and outputting 6 independent elements; then forming a lower triangular matrix from the 6 independent elements and multiplying it by its transpose to obtain a positive semi-definite information matrix, which is used as the information matrix of the laser odometer.
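A minimal sketch of this scene-error mapping is given below: a single 2D laser scan is projected into an occupancy grid, and a small CNN maps the grid to the 6 independent lower-triangular elements. The grid size, resolution, and network architecture are illustrative assumptions, not the patent's specification.

```python
import numpy as np
import torch
import torch.nn as nn

def scan_to_grid(ranges: np.ndarray, angles: np.ndarray,
                 size: int = 64, res: float = 0.25) -> np.ndarray:
    """Project one 2D laser scan into a sensor-centered occupancy grid."""
    grid = np.zeros((size, size), dtype=np.float32)
    xs, ys = ranges * np.cos(angles), ranges * np.sin(angles)
    cols = (xs / res + size / 2).astype(int)
    rows = (ys / res + size / 2).astype(int)
    ok = (rows >= 0) & (rows < size) & (cols >= 0) & (cols < size)
    grid[rows[ok], cols[ok]] = 1.0         # mark cells hit by scan endpoints
    return grid

class ScanErrorCNN(nn.Module):
    """Small CNN mapping the occupancy grid to the 6 independent elements
    of the lower-triangular factor of the information matrix."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4))
        self.head = nn.Linear(32 * 4 * 4, 6)

    def forward(self, grid: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(grid).flatten(1))
```

The 6 outputs feed the lower-triangular construction described above, so the resulting information matrix is positive semi-definite by construction.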
Fig. 6 is a processing flow chart of an error model learning algorithm of a scene-error mapping algorithm of a laser odometer according to an embodiment of the present invention, where the specific processing procedure includes:
step 1: initialize the error-model neural network parameters, set the maximum number of iterations M, the iteration count i = 0, and the number of accumulation steps T;
step 2: load the i-th training trajectory to obtain the data needed for T frames of positioning and the ground-truth global positions of the start and end of the trajectory;
step 3: initialize the start-point global position estimate, initialize the relative positioning fusion algorithm, and set the accumulated step count t = 0;
step 4: compute the dead-reckoning relative positioning result and information matrix;
step 5: compute the laser odometry relative positioning result;
step 6: compute the laser odometer information matrix according to the laser odometer scene-error mapping algorithm;
step 7: compute the fused positioning solution of laser odometry and dead reckoning using the relative positioning fusion algorithm;
step 8: obtain the accumulated global position of the current frame from the global position ground truth at time t = 0;
step 9: judge whether the current accumulated step count t is smaller than the total number of accumulation steps T; if so, return to step 4; otherwise, go to step 10;
step 10: obtain the global position ground truth at time T and compute the target loss function;
step 11: unroll the network graph model over time and backpropagate the loss value through T steps to obtain the accumulated gradient;
step 12: update the network parameters using the accumulated gradient;
step 13: judge whether the iteration count i is smaller than the maximum number of iterations M; if so, return to step 2; otherwise, end the flow.
In conclusion, the method provided by the embodiment of the invention can effectively fuse the outputs of a multi-module positioning system, obtain an accurate and stable fused positioning result, fill a gap in multi-module positioning fusion, and provide a basis for unmanned platforms such as mobile robots to perform other complex tasks.
The error-model learning method provided by the invention operates entirely within a probabilistic fusion framework and resolves significant problems in the prior art. Its advantages are that: 1) complicated error-model design and derivation is avoided, and end-to-end learning can be performed with positioning accuracy as the objective; 2) scalability is strong, and the method can be applied to fusion positioning of multiple modules; 3) for a multi-module positioning unmanned platform carrying a pre-trained network, the method has prospects for online learning; 4) the fusion effect is good: higher positioning accuracy can be obtained, and the positioning trajectory is smooth.
Those of ordinary skill in the art will understand that: the figures are merely schematic representations of one embodiment, and the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
The embodiments in the present specification are described in a progressive manner; for the same or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, for apparatus or system embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and for relevant parts reference may be made to the description of the method embodiments. The above-described embodiments of the apparatus and system are merely illustrative: the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement this without inventive effort.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (4)

1. A multi-modal fusion positioning method of an unmanned platform is characterized by comprising the following steps:
mounting a plurality of positioning systems on an unmanned platform, respectively learning the neural network parameters of each positioning system that describe its error model, obtaining from the neural network parameters the error information matrix of the positioning algorithm output, and obtaining the positioning result output by each positioning system based on the information matrix of each positioning system;
inputting the positioning result and the information matrix of each positioning system into an information filter, and outputting the fusion positioning result of the unmanned platform by the information filter;
the respectively learning the neural network parameters of each positioning system that describe its error model, and the obtaining from the neural network parameters of the error information matrix of the positioning algorithm output, comprise:
collecting input data of the positioning algorithm of each positioning system, and building a training data set for each positioning system from the input data; designing a corresponding neural network for each positioning system, and representing the mapping relation between the positioning scene data of each positioning system and its error model in neural network form; learning the neural network parameters of each positioning system that describe its error model by using the training data set of each positioning system and the mapping relation between the positioning scene data and the error model; and obtaining from the neural network parameters the error information matrix of the positioning algorithm output.
2. The method of claim 1, wherein when the positioning system is a laser odometer, the respectively learning the neural network parameters describing the error model of each positioning system, and the deriving from the neural network parameters of the error information matrix of the positioning algorithm output, comprise:
for the two-dimensional positioning and orientation problem, projecting single-frame two-dimensional laser scan data onto a two-dimensional plane to obtain a single-frame occupancy grid map, inputting the grid map into a convolutional neural network (CNN), and outputting 6 independent elements; forming a lower triangular matrix from the 6 independent elements and multiplying the lower triangular matrix by its transpose to obtain a positive semi-definite information matrix, which is taken as the information matrix of the laser odometer.
3. The method according to claim 1 or 2, wherein the inputting the positioning result and the information matrix of each positioning system into an information filter, the information filter outputting a fused positioning result, comprises:
the information filter acquires the information vector and the information matrix of the previous frame's fused positioning result and converts them into a local coordinate system with the previous frame position as the zero point; the converted information vector and information matrix, together with the positioning results and information matrices of the positioning systems, are input into the information filter, and the information filter outputs the fused relative positioning of the current frame.
4. The method of claim 3, wherein the inputting the positioning result and the information matrix of each positioning system into an information filter, the information filter outputting a fused positioning result, comprises:
selecting a positioning system with a known error model as the fusion reference positioning system, wherein, for the positioning system to be learned, the fusion reference positioning system and the positioning system to be learned satisfy the state relation:
x_t = u_t + ε_t,    z_t = x_t + δ_t
wherein x_t represents the relative positioning value with respect to the previous frame position, u_t is the relative positioning estimate output by the fusion reference positioning system, z_t is the relative positioning estimate output by the positioning system to be learned, ε_t is the Gaussian error of the relative positioning estimate of the fusion reference positioning system, R_t is the covariance matrix of the fusion reference positioning system, δ_t is the Gaussian error of the relative positioning estimate of the positioning system to be learned, and Q_t is the covariance matrix of the error of the positioning system to be learned;
letting the two-dimensional positioning input data of the information filter at time t be z_t, Q_t, u_t, R_t, the information filtering and fusion steps of the information filter comprise:
1. Coordinate system transformation: Ω_{t-1} ← G_t^T Ω_{t-1} G_t,  ξ_{t-1} ← G_t^T ξ_{t-1}
2. Information matrix prediction: Ω̄_t = R_t^{-1}
3. Information vector prediction: ξ̄_t = Ω̄_t u_t
4. Information matrix update based on the observation: Ω_t = Ω̄_t + Q_t^{-1}
5. Information vector update based on the observation: ξ_t = ξ̄_t + Q_t^{-1} z_t
6. Fused localization solution: μ_t = Ω_t^{-1} ξ_t
Output μ_t, Ω_t, ξ_t.
wherein the two-dimensional positioning solution is μ_t = (μ_{x,t}, μ_{y,t}, μ_{θ,t})^T, and the two-dimensional positioning coordinate transformation matrix is
G_t = [ cos μ_{θ,t-1}  -sin μ_{θ,t-1}  0
        sin μ_{θ,t-1}   cos μ_{θ,t-1}  0
        0               0              1 ]
and ξ_t is the information vector of the information filter, with ξ_t = Ω_t μ_t.
CN201910130418.7A 2019-02-21 2019-02-21 Multi-mode fusion positioning method of unmanned platform Expired - Fee Related CN109764876B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910130418.7A CN109764876B (en) 2019-02-21 2019-02-21 Multi-mode fusion positioning method of unmanned platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910130418.7A CN109764876B (en) 2019-02-21 2019-02-21 Multi-mode fusion positioning method of unmanned platform

Publications (2)

Publication Number Publication Date
CN109764876A CN109764876A (en) 2019-05-17
CN109764876B true CN109764876B (en) 2021-01-08

Family

ID=66456303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910130418.7A Expired - Fee Related CN109764876B (en) 2019-02-21 2019-02-21 Multi-mode fusion positioning method of unmanned platform

Country Status (1)

Country Link
CN (1) CN109764876B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111367282B (en) * 2020-03-09 2022-06-07 山东大学 Robot navigation method and system based on multimode perception and reinforcement learning
CN113543305A (en) * 2020-04-22 2021-10-22 维沃移动通信有限公司 Positioning method, communication device and network device
CN111680596B (en) * 2020-05-29 2023-10-13 北京百度网讯科技有限公司 Positioning true value verification method, device, equipment and medium based on deep learning

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103983263A (en) * 2014-05-30 2014-08-13 东南大学 Inertia/visual integrated navigation method adopting iterated extended Kalman filter and neural network
CN105352504B (en) * 2015-12-01 2018-03-06 中国矿业大学 The coal mining machine positioning device and method that a kind of inertial navigation merges with laser scanning
CN106980133A (en) * 2017-01-18 2017-07-25 中国南方电网有限责任公司超高压输电公司广州局 GPS INS integrated navigation method and system using neural network algorithm compensation and correction
CN108716917A (en) * 2018-04-16 2018-10-30 天津大学 A kind of indoor orientation method merging inertia and visual information based on ELM
CN108692701B (en) * 2018-05-28 2020-08-07 佛山市南海区广工大数控装备协同创新研究院 Mobile robot multi-sensor fusion positioning method based on particle filter
CN108871336B (en) * 2018-06-20 2019-05-07 湘潭大学 A system and method for estimating vehicle position
CN109059912A (en) * 2018-07-31 2018-12-21 太原理工大学 A kind of GPS/INS integrated positioning method based on wavelet neural network

Also Published As

Publication number Publication date
CN109764876A (en) 2019-05-17

Similar Documents

Publication Publication Date Title
CN106803271B (en) Camera calibration method and device for visual navigation unmanned aerial vehicle
Carlone et al. A linear approximation for graph-based simultaneous localization and mapping
CN107084714B (en) A multi-robot collaborative target positioning method based on RoboCup3D
CN109764876B (en) Multi-mode fusion positioning method of unmanned platform
CN105225269A (en) Based on the object modelling system of motion
CN107481292A (en) The attitude error method of estimation and device of vehicle-mounted camera
CN113313176B (en) A point cloud analysis method based on dynamic graph convolutional neural network
CN110414526A (en) Training method, training device, server and the storage medium of semantic segmentation network
CN113910218B (en) Robot calibration method and device based on kinematic and deep neural network fusion
CN112965372B (en) Reinforcement learning-based precision assembly method, device and system for micro-parts
CN114943182B (en) Robot cable shape control method and equipment based on graph neural network
CN118274849B (en) A method and system for positioning an intelligent handling robot based on multi-feature fusion
CN113139696A (en) Trajectory prediction model construction method and trajectory prediction method and device
CN118608435B (en) De-distortion method and device for point cloud, electronic equipment and readable storage medium
CN116300909A (en) Robot obstacle avoidance navigation method based on information preprocessing and reinforcement learning
CN111076724A (en) Three-dimensional laser positioning method and system
CN113570662A (en) System and method for 3D localization of landmarks from real world images
CN118915782A (en) Unmanned aerial vehicle tracking control system and method based on three-dimensional model
CN117372536A (en) Laser radar and camera calibration method, system, equipment and storage medium
CN113763447A (en) Depth map completion method, electronic device and storage medium
CN116429116A (en) Robot positioning method and equipment
CN112571420A (en) Dual-function model prediction control method under unknown parameters
Bahrpeyma et al. Application of reinforcement learning to ur10 positioning for prioritized multi-step inspection in nvidia omniverse
CN111257853A (en) An online calibration method of lidar for autonomous driving system based on IMU pre-integration
CN115131429B (en) Track alignment method, device, electronic device and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee (granted publication date: 20210108)