Background
In the development of mobile robots, positioning is a fundamental technology underlying complex tasks: navigation, planning and similar tasks all depend on accurate positioning input. In recent years, with advances in autonomous driving and sensor technology, many positioning strategies relying on different sensor modules have emerged. Because different sensor types differ in their measurement information and positioning algorithms, each single-module positioning technology has its own strengths and weaknesses, and an intelligent vehicle platform operating in complex scenes must integrate multiple positioning modules to guarantee reliable and stable output of positioning information.
Most existing multi-module positioning methods switch between modules according to empirically designed rules, which is essentially equivalent to the single-module systems taking turns; once the empirical conditions no longer apply, such methods become unreliable. Probability-based Bayesian fusion can achieve genuine information fusion, but its prerequisites are that each module provides a reliable error model and that the modules' outputs are mutually compatible during fusion. Among existing single-module positioning technologies, only a limited number, such as inertial navigation and dead reckoning, allow their error models to be estimated via calibration and error-propagation theory. For most positioning technologies based on environmental perception, the error model is difficult to derive because of internal nonlinear feature-processing stages, and the error is often scene-dependent; for example, laser-based positioning is more accurate in feature-rich scenes.
The goal of multi-module positioning is fusion and complementarity among the modules, but an effective fusion mechanism is still lacking, so multi-source positioning fusion remains a major open problem in the positioning field.
Disclosure of Invention
An embodiment of the invention provides a multi-modal fusion positioning method for an unmanned platform, aiming to overcome the problems in the prior art.
To achieve this purpose, the invention adopts the following technical scheme.
A multi-modal fusion positioning method for an unmanned platform comprises the following steps:
equipping an unmanned platform with a plurality of positioning systems; for each positioning system, learning the neural network parameters that describe its error model; obtaining from those parameters the error information matrix output by the positioning algorithm; and obtaining the positioning result output by each positioning system based on its information matrix;
and inputting the positioning result and information matrix of each positioning system into an information filter, the information filter outputting the fused positioning result of the unmanned platform.
Preferably, learning the neural network parameters of each positioning system that describe its error model, and obtaining from those parameters the error information matrix output by the positioning algorithm, comprise:
collecting the input data of each positioning system's positioning algorithm; building a training data set for each positioning system from that data; designing a corresponding neural network for each positioning system, so that the mapping from each system's positioning scene data to its error model is represented in neural network form; learning the network parameters that describe each system's error model from its training data set and this scene-to-error mapping; and obtaining from the parameters the error information matrix output by the positioning algorithm.
Preferably, when the positioning system is a laser odometer, learning the neural network parameters of each positioning system that describe its error model, and obtaining from those parameters the error information matrix output by the positioning algorithm, comprise:
for the two-dimensional positioning and orientation problem, projecting single-frame two-dimensional laser scan data onto a two-dimensional plane to obtain a single-frame occupancy grid map; feeding the grid map into a CNN, which outputs 6 independent elements; arranging the 6 elements into a lower-triangular matrix; multiplying the lower-triangular matrix by its transpose to obtain a positive semi-definite information matrix; and taking this matrix as the information matrix of the laser odometer.
Preferably, inputting the positioning result and information matrix of each positioning system into an information filter, the information filter outputting the fused positioning result, comprises:
the information filter obtaining the information vector and information matrix of the previous frame's fused positioning result; transforming them into a local coordinate system whose zero point is the previous frame's position; inputting the transformed quantities, together with the positioning results and information matrices of the individual positioning systems, into the information filter; and the information filter outputting the fused relative positioning of the current frame.
Preferably, inputting the positioning result and information matrix of each positioning system into an information filter, the information filter outputting the fused positioning result, comprises:
selecting a positioning system with a known error model as the fusion-reference positioning system; for the positioning system to be learned, the fusion-reference system and the system to be learned satisfy the state relations

x_t = u_t + ε_t, z_t = x_t + δ_t

where x_t denotes the relative positioning value with respect to the previous frame's position, u_t is the relative positioning estimate output by the fusion-reference positioning system, z_t is the relative positioning estimate output by the positioning system to be learned, ε_t is the Gaussian error of the fusion-reference system's relative positioning estimate with covariance matrix R_t, and δ_t is the Gaussian error of the to-be-learned system's relative positioning estimate with covariance matrix Q_t;
Given the two-dimensional positioning input data z_t, u_t, R_t of the information filter at time t, the information filtering and fusion steps of the information filter comprise:
1. performing the coordinate-system transformation;
2. performing the information matrix prediction;
3. performing the information vector prediction;
4. updating the information matrix based on the observation;
5. updating the information vector based on the observation;
6. obtaining the fused positioning solution;
and outputting μ_t, Ω_t, ξ_t,
where the two-dimensional positioning solution is μ_t = (μ_{x,t}, μ_{y,t}, μ_{θ,t})^T with the corresponding two-dimensional positioning coordinate transformation matrix, and ξ_t is the information vector in the information filter, satisfying ξ_t = Ω_t μ_t.
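As one illustration of steps 1-6 above, the following minimal sketch implements a single fusion step of the information filter in Python. It assumes, as the text does not state explicitly, that both systems observe the relative pose directly (identity observation model) and that the inputs are already expressed in the local coordinate system of the previous frame; the function and variable names are illustrative only.

```python
import numpy as np

def fuse_relative_pose(u_t, R_t, z_t, Q_t):
    """One information-filter fusion step in the local frame of the
    previous pose (previous pose taken as the zero point).

    u_t, R_t : relative pose estimate and covariance from the
               fusion-reference system (e.g. dead reckoning)
    z_t, Q_t : relative pose estimate and covariance from the
               positioning system whose error model was learned
    Returns the fused relative pose mu_t, the information matrix
    Omega_t and the information vector xi_t (xi_t = Omega_t @ mu_t).
    """
    # Prediction: the reference system supplies the prior in information form
    Omega_bar = np.linalg.inv(R_t)            # predicted information matrix
    xi_bar = Omega_bar @ u_t                  # predicted information vector

    # Update with the observation z_t (identity observation model assumed)
    Q_inv = np.linalg.inv(Q_t)
    Omega_t = Omega_bar + Q_inv               # information matrix update
    xi_t = xi_bar + Q_inv @ z_t               # information vector update

    # Fused relative positioning solution mu_t = Omega_t^{-1} xi_t
    mu_t = np.linalg.solve(Omega_t, xi_t)
    return mu_t, Omega_t, xi_t
```

Because the update is additive in information form, the fused pose is the covariance-weighted average of the two inputs, which is exactly the behavior the fusion framework relies on.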
According to the technical scheme provided by the embodiments of the invention, the method can effectively fuse the outputs of a multi-module positioning system, obtain a precise and stable fused positioning result, and provide a foundation for unmanned platforms such as mobile robots to execute other complex tasks. It is highly extensible and can be applied to the fused positioning of multiple modules.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
For the convenience of understanding the embodiments of the present invention, the following description will be further explained by taking several specific embodiments as examples in conjunction with the drawings, and the embodiments are not to be construed as limiting the embodiments of the present invention.
Example one
Aiming at the multi-module positioning fusion problem of mobile robots, the embodiment of the invention provides an end-to-end error-model learning method based on machine learning. The method operates within a Bayes-filter fusion framework: it learns the error models of the outputs of the various positioning systems and uses them for positioning fusion, yielding an accurate and reliable fused positioning solution and a continuous, jitter-free positioning track. Within the fusion framework, no complex error-model design or assumptions are needed for the individual modules, and the modules remain able to work independently. During learning, a learning-and-update pass requires only coarse-precision information about the start and end points of a trajectory, rather than continuous high-precision ground-truth positioning, so the method is suitable for online learning on an unmanned platform.
The implementation principle of the multi-modal fusion positioning method for a mobile robot provided by the embodiment of the invention is shown schematically in fig. 1, and the specific processing flow in fig. 2. The method comprises the following steps:
Firstly, a mobile robot is equipped with N positioning systems; the input data of each system's positioning algorithm is collected in the robot's operating scene, and the collected data is used to build a training data set for each positioning system.
Secondly, a positioning fusion reference is selected for each positioning system, and the information matrix of each positioning system relative to the previous frame is learned frame by frame; this information matrix describes the error distribution of the positioning system. The implementation principle of the information matrix learning method provided by the embodiment of the invention is shown in fig. 3 and comprises the following processing: collect the input data of each positioning system's positioning algorithm and build a training data set from it. Since the positioning error is generally related to the scene information input to the algorithm, for any positioning system X whose error model is to be learned, a corresponding neural network is designed, and the mapping from the system's positioning scene data to its error model is represented in neural network form. The information matrix of each positioning system relative to the previous frame is then learned from its training data set and this scene-to-error mapping.
The processing of learning each positioning system's information matrix from the training data set and the mapping relation comprises the following steps:
1. Select a positioning system A with a known error model (such as an inertial navigation system) as the fusion-reference system. Let its data input be D_1; system A outputs a relative positioning result and error covariance matrix <u, R> (the covariance matrix is the inverse of the information matrix).
2. For any positioning system X whose error model is to be learned (data input D_2, output positioning result and error information matrix <z, Q>), initialize the error-model neural network. Select the scene-related data D_{2-scene} that may affect its positioning error as the input of the neural network; for a visual odometer, for example, the real-time image can be selected as the input. The network output is the error information matrix (see Example two).
3. Let the ground-truth global positions of the start and end points of a training trajectory Traj of arbitrary length T be given. From the data, the positioning input data D_{1,t}, D_{2,t} of the t-th frame and the input data D_{2-scene,t} required by the neural network can be obtained. Under the neural network parameters θ, the relative positioning result and error covariance of each consecutive frame, <u_t, R_t> and <z_t, Q_t>, can be obtained; feeding each frame's relative positioning result and covariance into the fusion framework of fig. 1 yields the fused relative positioning result and information matrix <μ_t, Ω_t> of each frame. Using a coordinate-system transformation, accumulating the relative positionings μ_1, μ_2, ..., μ_T from the start position yields the global position estimate under the current network parameters.
4. Evaluate the loss function to obtain its value J on the training trajectory Traj, and update the neural network parameters accordingly. The loss function can be designed according to experimental requirements.
5. Load different training trajectories repeatedly and repeat steps 3-4 to learn the parameters. Learning finishes once the set maximum number of training iterations is reached. At this point the neural network parameters can map any D_{2-scene,t} to a usable information matrix.
Then the positioning result output by each positioning system is obtained based on its information matrix.
Thirdly, the positioning result and information matrix of each positioning system are input into the information filter to obtain the fused positioning result. The information filter performs fusion positioning using the learned error model of each positioning system and outputs the fused result. The filter can be updated online during application, continuously refining the error model.
A schematic diagram of a process of information fusion processing performed by an information filter according to an embodiment of the present invention is shown in fig. 4, and includes the following processing procedures:
The information filter obtains the information vector and information matrix of the previous frame's fused positioning result and transforms them into a local coordinate system whose zero point is the previous frame's position; the transformed quantities, together with the positioning results and information matrices of the individual positioning systems, are input into the information filter, which outputs the fused relative positioning of the current frame.
The information filter is a standard filter in the prior literature; its specific working process is as follows. The two positioning systems of the second step, namely the fusion-reference positioning system and the positioning system to be learned, satisfy the state relations

x_t = u_t + ε_t, z_t = x_t + δ_t

where x_t denotes the relative positioning value with respect to the previous frame's position, u_t is the relative positioning estimate output by the fusion-reference positioning system, z_t is the relative positioning estimate output by the positioning system to be learned, and ε_t, δ_t are the Gaussian errors of the two systems' relative positioning estimates, with covariance matrices R_t and Q_t respectively.
Given the two-dimensional positioning input data z_t, u_t, R_t at time t, the information filtering and fusion steps at that time are as follows:
1. performing the coordinate-system transformation;
2. performing the information matrix prediction;
3. performing the information vector prediction;
4. updating the information matrix based on the observation;
5. updating the information vector based on the observation;
6. obtaining the fused positioning solution;
and outputting μ_t, Ω_t, ξ_t,
where the two-dimensional positioning solution is μ_t = (μ_{x,t}, μ_{y,t}, μ_{θ,t})^T with the corresponding two-dimensional (3-degree-of-freedom) coordinate transformation matrix, and ξ_t is the information vector in the information filter, satisfying ξ_t = Ω_t μ_t.
Example two
This embodiment takes the error-model learning of a laser odometer as an example to describe the technique in detail. Because the dead-reckoning error model is Gaussian, a reliable error model can be obtained from the initial sensor calibration according to error-propagation theory; dead reckoning is therefore selected as the fusion reference to be fused with the laser odometer.
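The error-propagation argument for dead reckoning mentioned above can be sketched as follows. The first-order Jacobian propagation shown is a standard construction given only as an illustration; the text does not specify the exact model, so the function name and the form of the motion covariance are assumptions.

```python
import numpy as np

def propagate_dead_reckoning_cov(pose, Sigma, d, M):
    """First-order error propagation for one dead-reckoning increment.

    pose  : current global pose (x, y, theta)
    Sigma : 3x3 covariance of the current pose
    d     : relative motion (dx, dy, dtheta) in the body frame
    M     : 3x3 covariance of the relative motion (from calibration)
    Returns the covariance of the composed pose.
    """
    x, y, th = pose
    dx, dy, dth = d
    c, s = np.cos(th), np.sin(th)
    # Jacobian of the composed pose w.r.t. the current pose
    G = np.array([[1.0, 0.0, -dx * s - dy * c],
                  [0.0, 1.0,  dx * c - dy * s],
                  [0.0, 0.0,  1.0]])
    # Jacobian of the composed pose w.r.t. the relative motion
    V = np.array([[c,  -s,  0.0],
                  [s,   c,  0.0],
                  [0.0, 0.0, 1.0]])
    return G @ Sigma @ G.T + V @ M @ V.T
```

Repeated application of this update is what makes the dead-reckoning covariance analytically available, in contrast to perception-based systems whose error models must be learned.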
The processing flow of the scene-error mapping algorithm of the laser odometer provided by the embodiment of the invention is shown in fig. 5. The specific processing comprises: projecting single-frame two-dimensional laser scan data onto a two-dimensional plane to obtain a single-frame occupancy grid map; feeding the grid map into a CNN, which outputs 6 independent elements; arranging the 6 elements into a lower-triangular matrix; and multiplying the lower-triangular matrix by its transpose to obtain a positive semi-definite information matrix, which is used as the information matrix of the laser odometer.
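The lower-triangular construction described above can be sketched as follows. This is a minimal illustration: the CNN itself is omitted, and the 6 network outputs are taken as given.

```python
import numpy as np

def info_matrix_from_network_output(v):
    """Map the 6 independent network outputs to a positive semi-definite
    3x3 information matrix for the (x, y, theta) relative pose.

    v : sequence of the 6 independent elements produced by the CNN head.
    """
    v = np.asarray(v, dtype=float)
    # Fill a lower-triangular matrix row by row with the 6 elements
    L = np.zeros((3, 3))
    L[np.tril_indices(3)] = v
    # L @ L.T is positive semi-definite by construction, so the network
    # can never produce an invalid information matrix
    return L @ L.T
```

Parameterizing the output through L L^T rather than predicting the matrix entries directly is what guarantees a valid information matrix for any network output.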
Fig. 6 is a processing flow chart of the error-model learning algorithm for the laser odometer scene-error mapping according to an embodiment of the present invention. The specific processing comprises:
Step 1: initialize the error-model neural network parameters; set the maximum number of iterations to M, the iteration count i to 0, and the accumulated step count to T.
Step 2: load the i-th training trajectory to obtain the data required for T frames of positioning and the ground-truth global positions of the trajectory's start and end.
Step 3: initialize the start-point global position estimate and the relative positioning fusion algorithm; set the accumulated step count t to 0.
Step 4: compute the dead-reckoning relative positioning result and information matrix.
Step 5: compute the laser odometer relative positioning result.
Step 6: compute the laser odometer information matrix according to the laser odometer scene-error mapping algorithm.
Step 7: compute the fused positioning solution of the laser odometer and dead reckoning using the relative positioning fusion algorithm.
Step 8: obtain the accumulated global position of the current frame from the global position ground truth at time t = 0.
Step 9: judge whether the current accumulated step count t is smaller than T; if so, return to step 4; otherwise, go to step 10.
Step 10: obtain the global position ground truth at time T and compute the target loss function.
Step 11: unroll the network graph model in time and back-propagate through the T steps using the loss value to obtain the accumulated gradient.
Step 12: update the network parameters using the accumulated gradient.
Step 13: judge whether the iteration count i is smaller than the maximum number of iterations M; if so, return to step 2; otherwise, the flow ends.
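The forward pass of steps 3-10 (accumulating the fused relative poses and scoring the endpoint error) can be sketched as follows. The SE(2) composition and the squared-endpoint-error loss are illustrative assumptions, and the back-propagation of steps 11-12 is left to an autodiff framework in practice.

```python
import numpy as np

def se2_compose(g, d):
    """Accumulate a relative 2-D pose d = (dx, dy, dtheta), expressed in
    the body frame, onto a global pose g = (x, y, theta)."""
    x, y, th = g
    dx, dy, dth = d
    return np.array([x + dx * np.cos(th) - dy * np.sin(th),
                     y + dx * np.sin(th) + dy * np.cos(th),
                     th + dth])

def trajectory_loss(start_truth, end_truth, fused_relatives):
    """Forward pass of one training iteration: accumulate the fused
    per-frame relative poses mu_1..mu_T from the start-point truth
    (steps 3 and 8) and score the endpoint error against the
    end-point truth (step 10).  Only the forward computation is
    sketched here; gradients flow through the unrolled graph.
    """
    g = np.array(start_truth, dtype=float)
    for mu_t in fused_relatives:      # mu_t from the fusion framework
        g = se2_compose(g, mu_t)      # accumulated global position
    err = g - np.array(end_truth, dtype=float)
    return float(err @ err)           # example loss: squared endpoint error
```

Because the loss depends only on the trajectory's start and end truths, this matches the coarse-precision supervision requirement stated earlier: no per-frame ground truth is needed.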
In conclusion, the method provided by the embodiment of the invention can effectively fuse the outputs of a multi-module positioning system, obtain a precise and stable fused positioning result, fill a gap in multi-module positioning fusion at home and abroad, and provide a foundation for unmanned platforms such as mobile robots to execute other complex tasks.
The error-model learning method provided by the invention operates entirely within a probabilistic fusion framework and solves significant problems in the prior art. Its advantages are: 1) it avoids complicated error-model design and derivation, allowing end-to-end learning with positioning accuracy as the objective; 2) it is highly extensible and can be applied to the fused positioning of multiple modules; 3) it offers the prospect of online learning for a multi-module positioning unmanned platform carrying a pre-trained network; 4) the fusion effect is good, yielding higher positioning accuracy and a smooth positioning track.
Those of ordinary skill in the art will understand that: the figures are merely schematic representations of one embodiment, and the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
The embodiments in the present specification are described in a progressive manner; the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. In particular, the apparatus and system embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant points, refer to the partial descriptions of the method embodiments. The above-described apparatus and system embodiments are merely illustrative: units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment's solution. One of ordinary skill in the art can understand and implement this without inventive effort.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.