
CN112241010A - Positioning method, apparatus, computer equipment and storage medium - Google Patents


Info

Publication number
CN112241010A
CN112241010A
Authority
CN
China
Prior art keywords
point cloud
real
time point
global
target body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910873938.7A
Other languages
Chinese (zh)
Inventor
张婷
原诚寅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing New Energy Vehicle Technology Innovation Center Co Ltd
Original Assignee
Beijing New Energy Vehicle Technology Innovation Center Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing New Energy Vehicle Technology Innovation Center Co Ltd filed Critical Beijing New Energy Vehicle Technology Innovation Center Co Ltd
Priority to CN201910873938.7A priority Critical patent/CN112241010A/en
Publication of CN112241010A publication Critical patent/CN112241010A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 Systems determining position data of a target
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C 21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 Systems determining position data of a target
    • G01S 17/08 Systems determining position data of a target for measuring distance only
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42 Determining position
    • G01S 19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S 19/47 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)

Abstract

The present application relates to a positioning method, apparatus, computer device, and storage medium. The method includes: acquiring a global point cloud map and real-time point cloud data of a target body; determining, from the map and the real-time data, global point cloud feature information corresponding to the global point cloud map and target body real-time point cloud feature information corresponding to the target body real-time point cloud data; performing feature matching between the global and real-time feature information to determine feature matching point pairs; and finally determining the pose information of the target body from the feature matching point pairs. With this method, the position and attitude of the vehicle are obtained by matching the features observed by the lidar against stored features, achieving high-precision and highly reliable positioning.

Description

Positioning method, positioning device, computer equipment and storage medium
Technical Field
The present application relates to the field of automatic driving technologies, and in particular, to a positioning method, an apparatus, a computer device, and a storage medium.
Background
While driving, an autonomous vehicle needs to be accurately positioned in real time to ensure the safety of the people in the vehicle and of other vehicles on the road. Positioning is one of the key technologies in the field of autonomous driving and a basic technology by which an unmanned vehicle perceives its surrounding environment.
In the prior art, Global Navigation Satellite System (GNSS) positioning is generally adopted.
However, when obstacles block the signal or the signal is weak, GNSS positioning suffers from problems such as inaccurate positioning and poor reliability.
Disclosure of Invention
In view of the above, it is necessary to provide a positioning method, an apparatus, a computer device, and a storage medium that can achieve high-precision positioning and high reliability in view of the above technical problems.
A method of positioning, the method comprising:
acquiring a global point cloud map and real-time point cloud data of a target body;
determining global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data according to the global point cloud map and the target body real-time point cloud data;
performing feature matching on the global point cloud feature information and the target body real-time point cloud feature information, and determining feature matching point pairs;
and determining the pose information of the target body according to the feature matching point pairs.
In one embodiment, the determining, according to the global point cloud map and the target real-time point cloud data, global point cloud feature information corresponding to the global point cloud map and target real-time point cloud feature information corresponding to the target real-time point cloud data includes:
performing down-sampling processing on the target body real-time point cloud data by adopting a voxel grid filtering method to obtain filtered target body real-time point cloud data;
and respectively extracting the characteristics of the global point cloud map and the filtered target body real-time point cloud data by utilizing a PointNet network to obtain global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data.
In one embodiment, the down-sampling the real-time point cloud data of the target volume by using a voxel grid filtering method to obtain the filtered real-time point cloud data of the target volume includes:
eliminating noise points and outliers in the target body real-time point cloud data to obtain the eliminated target body real-time point cloud data;
according to the removed target body real-time point cloud data, creating a three-dimensional voxel grid corresponding to the removed target body real-time point cloud data;
and calculating and summarizing the gravity center point of each voxel in the three-dimensional voxel grid, and determining the filtered real-time point cloud data of the target body.
In one embodiment, the performing, by using a PointNet network, feature extraction on the global point cloud map and the filtered target body real-time point cloud data respectively to obtain global point cloud feature information corresponding to the global point cloud map and target body real-time point cloud feature information corresponding to the target body real-time point cloud data includes:
and respectively carrying out point cloud data angle conversion, multilayer convolution processing, characteristic data conversion, multilayer convolution processing, pooling operation and full connection layer processing on the global point cloud map and the filtered target body real-time point cloud data in sequence to obtain global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data.
In one embodiment, the performing feature matching on the global point cloud feature information and the target real-time point cloud feature information, and determining a pair of feature matching points includes:
acquiring an initial pose transformation relation;
calculating initial position information corresponding to the target body real-time point cloud characteristic information in the global point cloud characteristic information according to the initial pose transformation relation;
performing feature matching on the target body real-time point cloud feature information and initial position information within a preset range by adopting the FLANN algorithm, and determining homonymous (same-name) point pairs;
and eliminating wrong point pairs from the homonymous point pairs by using the RANSAC method to obtain feature matching point pairs.
In one embodiment, the acquiring the initial pose transformation relationship includes:
acquiring positioning data of a target body through a global navigation satellite system and acquiring attitude data of the target body through an inertial measurement unit;
and establishing an initial pose transformation relation according to the positioning data of the target body, the posture data of the target body and the pose information of the target body at the last moment.
In one embodiment, the determining pose information of the target body according to the feature matching point pairs includes:
calculating a target pose transformation relation between the global point cloud characteristic information and the target body real-time point cloud characteristic information according to the characteristic matching point pairs;
and determining the pose information of the target body according to the target pose transformation relation between the global point cloud characteristic information and the real-time point cloud characteristic information of the target body.
In one embodiment, the obtaining the global point cloud map comprises:
acquiring global data returned by a sensor;
and sequentially carrying out adjacent frame matching and data optimization on the global data returned by the sensor by adopting a laser SLAM algorithm to obtain a global point cloud map.
A positioning device, the device comprising:
the data acquisition module is used for acquiring a global point cloud map and real-time point cloud data of a target body;
the characteristic information determining module is used for determining global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data according to the global point cloud map and the target body real-time point cloud data;
the matching point pair determining module is used for performing feature matching on the global point cloud feature information and the target body real-time point cloud feature information to determine a feature matching point pair;
and the pose information determining module is used for determining the pose information of the target body according to the feature matching point pairs.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the method as claimed in any one of the above when the computer program is executed.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of the preceding claims.
According to the positioning method, apparatus, computer device, and storage medium, a global point cloud map and target body real-time point cloud data are obtained; global point cloud feature information corresponding to the map and target body real-time point cloud feature information corresponding to the real-time data are determined from them; feature matching is then performed between the two sets of feature information to determine feature matching point pairs; and finally the pose information of the target body is determined from the feature matching point pairs. In this method, the position and attitude of the vehicle are obtained by matching the features observed by the laser radar against the stored features, enabling high-precision and highly reliable positioning.
Drawings
FIG. 1 is a diagram of an application environment of a positioning method in one embodiment;
FIG. 2 is a flow chart illustrating a positioning method according to an embodiment;
FIG. 3 is a flowchart illustrating step S22 according to an embodiment;
FIG. 4 is a block diagram of a positioning device in accordance with one embodiment;
FIG. 5 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The positioning method provided by the application can be applied to the application environment shown in fig. 1. Wherein the terminal 102 and the server 104 communicate via a network. The terminal 102 acquires the global point cloud map and the target real-time point cloud data, and transmits the global point cloud map and the target real-time point cloud data to the server 104 through the network. The server 104 determines global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data according to the global point cloud map and the target body real-time point cloud data, then performs characteristic matching on the global point cloud characteristic information and the target body real-time point cloud characteristic information, determines a characteristic matching point pair, and finally determines the pose information of the target body according to the characteristic matching point pair. The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices, and the server 104 may be implemented by an independent server or a server cluster formed by a plurality of servers.
In one embodiment, as shown in fig. 2, a positioning method is provided, which is described by taking the method as an example applied to the server 104 in fig. 1, and includes the following steps:
step S1: acquiring a global point cloud map and real-time point cloud data of a target body;
step S2: determining global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data according to the global point cloud map and the target body real-time point cloud data;
step S3: performing feature matching on the global point cloud feature information and the target body real-time point cloud feature information, and determining feature matching point pairs;
step S4: and determining the pose information of the target body according to the feature matching point pairs.
In steps S1-S4, the global point cloud map is the point cloud map of the surrounding environment in which the target body is located, and the target body real-time point cloud data are the point cloud data collected for the target body at the current moment. The real-time data are acquired by a lidar. Lidar provides accurate ranging and positioning, and offers long detection range, high accuracy, simple model calculation, and stable operation under different illumination intensities; it is an indispensable sensor in an autonomous-driving environment perception system and, in overall performance, the best of the three common environment perception sensors.
Further, a feature matching point pair is a pair of points that satisfies the matching requirement. The pose information of the target body is its position and attitude; the target body may be a vehicle or another object.
Conventional laser positioning generally relies on hand-designed features, whose descriptors have limited discriminative power. This application therefore introduces deep learning into the laser positioning module for autonomous driving: the strong feature-description capability and high robustness of deep learning provide a new approach to real-time positioning. Deep learning is mainly used to extract features from the global point cloud map and from the real-time point cloud data of the target body.
According to this positioning method, a global point cloud map and target body real-time point cloud data are obtained; global point cloud feature information corresponding to the map and target body real-time point cloud feature information corresponding to the real-time data are determined from them; feature matching is then performed between the two sets of feature information to determine feature matching point pairs; and finally the pose information of the target body is determined from the feature matching point pairs. In this method, the position and attitude of the vehicle are obtained by matching the features observed by the laser radar against the stored features, enabling high-precision and highly reliable positioning.
In one embodiment, the step S2 includes:
step S21: performing down-sampling processing on the target body real-time point cloud data by adopting a voxel grid filtering method to obtain filtered target body real-time point cloud data;
step S22: and respectively extracting the characteristics of the global point cloud map and the filtered target body real-time point cloud data by utilizing a PointNet network to obtain global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data.
In steps S21-S22, the geometric structure of a point cloud includes not only its macroscopic shape but also its microscopic arrangement, for example similar sizes in the transverse direction and equal spacing in the longitudinal direction. Point clouds sampled with devices such as high-resolution cameras are often dense, and an excessive number of points makes subsequent segmentation difficult. For these reasons, the point cloud is down-sampled with a voxel grid filter, which achieves down-sampling without destroying the geometric structure of the point cloud.
Further, PointNet is a type of neural network that processes point clouds directly: it takes the point cloud itself as input and outputs either a classification label for the entire input or a segment/part label for each input point. Its basic architecture is very simple: in the initial stages each point is processed identically and independently, and in the basic setup each point is represented only by its three coordinates (x, y, z); further dimensions can be added by computing normals or other local or global features. PointNet respects the order invariance of the input point set and provides a complete architecture for tasks ranging from object classification and part segmentation to scene semantic parsing. Although simple, PointNet is efficient and effective.
In one embodiment, the step S21 includes:
step S211: eliminating noise points and outliers in the target body real-time point cloud data to obtain the eliminated target body real-time point cloud data;
step S212: according to the removed target body real-time point cloud data, creating a three-dimensional voxel grid corresponding to the removed target body real-time point cloud data;
step S213: and calculating and summarizing the gravity center point of each voxel in the three-dimensional voxel grid, and determining the filtered real-time point cloud data of the target body.
In steps S211-S213, voxel grid parameters such as the cell size must be set when the three-dimensional voxel grid is created and can be chosen according to specific needs. The center of gravity of each voxel then represents all of the points within that voxel.
The voxelized grid method thus achieves down-sampling: it reduces the number of points, and hence the volume of point cloud data, while preserving the shape characteristics of the cloud, which is very practical for speeding up algorithms such as registration, surface reconstruction, and shape recognition.
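The voxel-grid steps above (create the grid, then replace each voxel's points by their center of gravity) can be sketched in a few lines. This is an illustrative numpy implementation under the assumption that noise and outlier removal has already been done; it is not the code used in the patent:

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Replace all points falling inside each voxel by their center of
    gravity (centroid), as in steps S212-S213 above."""
    # Assign every point to an integer voxel index.
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Group the points by voxel and average each group.
    _, inverse, counts = np.unique(
        voxel_idx, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)
    centroids = np.zeros((counts.size, points.shape[1]))
    np.add.at(centroids, inverse, points)  # scatter-add rows per voxel
    return centroids / counts[:, None]

# 1000 dense points in the unit cube collapse to at most 4^3 = 64 centroids.
cloud = np.random.default_rng(0).uniform(0.0, 1.0, size=(1000, 3))
reduced = voxel_downsample(cloud, voxel_size=0.25)
```

The shape of the cloud is preserved because each retained point is the centroid of a small, fixed-size neighbourhood rather than an arbitrary survivor of decimation.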
In one embodiment, referring to fig. 3, the step S22 includes:
step S221: and respectively carrying out point cloud data angle conversion, multilayer convolution processing, characteristic data conversion, multilayer convolution processing, pooling operation and full connection layer processing on the global point cloud map and the filtered target body real-time point cloud data in sequence to obtain global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data.
Specifically, the angle conversion of the point cloud data rotates the point cloud to an orientation more favorable for classification. The pooling operation mainly uses max pooling to form a 1 × 1024 feature vector, and the fully connected layers then map this feature vector to output values trained with a softmax loss function.
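The core of the pipeline just described (a shared per-point MLP followed by max pooling into a single 1024-dimensional descriptor) can be illustrated with an untrained numpy sketch. The weights here are random placeholders, and the input/feature transform (T-Net) stages and the classification head of a real PointNet are omitted:

```python
import numpy as np

# Randomly initialised (untrained) weights for the shared per-point MLP,
# 3 -> 64 -> 1024; a trained PointNet would supply learned weights.
rng = np.random.default_rng(1)
W1, B1 = rng.normal(size=(3, 64)) * 0.1, np.zeros(64)
W2, B2 = rng.normal(size=(64, 1024)) * 0.1, np.zeros(1024)

def pointnet_global_feature(points: np.ndarray) -> np.ndarray:
    """Shared MLP applied identically to every point, then max pooling
    over the points into one 1024-dimensional global descriptor."""
    h = np.maximum(points @ W1 + B1, 0.0)   # (N, 64) per-point features
    h = np.maximum(h @ W2 + B2, 0.0)        # (N, 1024)
    return h.max(axis=0)                    # 1024-dim global feature

cloud = np.random.default_rng(2).normal(size=(500, 3))
feat = pointnet_global_feature(cloud)

# Max pooling makes the descriptor invariant to the order of the points,
# which is the order-invariance property mentioned above.
shuffled = cloud[np.random.default_rng(3).permutation(500)]
assert np.allclose(feat, pointnet_global_feature(shuffled))
```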
In one embodiment, the step S3 includes:
step S31: acquiring an initial pose transformation relation;
step S32: calculating initial position information corresponding to the target body real-time point cloud characteristic information in the global point cloud characteristic information according to the initial pose transformation relation;
step S33: performing feature matching on the target body real-time point cloud feature information and initial position information within a preset range by adopting the FLANN algorithm, and determining homonymous (same-name) point pairs;
step S34: and eliminating wrong point pairs from the homonymous point pairs by using the RANSAC method to obtain feature matching point pairs.
In steps S31-S34, the initial pose transformation relation is used as auxiliary positioning in the present application to improve positioning accuracy. The Fast Library for Approximate Nearest Neighbors (FLANN) is essentially a procedure for finding consistency between frames: the nearest neighbour of each descriptor in one set (the query set) is searched for in another set (equivalent to a training set), and performing feature matching with FLANN improves matching efficiency. In addition, because the initial position information is a point annotated with specific geographic information and the target body moves over time, points within a certain range must be selected for gradual matching; the preset range may therefore be circular, rectangular, or another shape, adjusted as needed.
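The nearest-neighbour search that FLANN accelerates with tree structures can be illustrated with a brute-force stand-in. The mutual cross-check used here to suppress bad matches is an illustrative choice, not a step specified in the patent:

```python
import numpy as np

def match_features(query: np.ndarray, train: np.ndarray):
    """Brute-force nearest-neighbour matching between two descriptor
    sets (a stand-in for FLANN's indexed search). Keeps only pairs
    that are mutual nearest neighbours."""
    # Pairwise squared distances, shape (Nq, Nt).
    d2 = ((query[:, None, :] - train[None, :, :]) ** 2).sum(-1)
    q2t = d2.argmin(axis=1)   # best train index for each query descriptor
    t2q = d2.argmin(axis=0)   # best query index for each train descriptor
    return [(q, int(q2t[q])) for q in range(len(query)) if t2q[q2t[q]] == q]

# Query descriptors are slightly perturbed copies of the first 20 train
# descriptors, so the correct match for query i is train i.
rng = np.random.default_rng(0)
train = rng.normal(size=(50, 8))
query = train[:20] + rng.normal(scale=1e-3, size=(20, 8))
pairs = match_features(query, train)
```

The brute-force distance matrix costs O(Nq·Nt); FLANN's k-d tree or hierarchical indices exist precisely to avoid this cost on large descriptor sets.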
Random sample consensus (RANSAC) is a fine-matching method: random samples of point pairs are used to fit the parameters of a model (the entries of the transformation matrix) and to identify the same object in different views.
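The RANSAC-style rejection of wrong same-name point pairs can be sketched as follows, assuming (the patent does not specify this) that the fitted model is a 3-D rigid transform estimated in closed form by the SVD-based Kabsch method:

```python
import numpy as np

def rigid_from_pairs(src, dst):
    # Least-squares rigid transform (Kabsch/SVD) taking src onto dst.
    cs, cd = src.mean(0), dst.mean(0)
    u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cd - r @ cs

def ransac_filter(src, dst, iters=200, tol=0.05, seed=0):
    """Keep the largest set of point pairs consistent with one rigid
    transform: repeatedly fit 3 random pairs and count inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.choice(len(src), 3, replace=False)
        r, t = rigid_from_pairs(src[i], dst[i])
        inliers = np.linalg.norm(src @ r.T + t - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

# Synthetic pairs: 30 correct correspondences plus 10 corrupted ones.
rng = np.random.default_rng(1)
src = rng.uniform(-1, 1, (40, 3))
g = 0.5
R_true = np.array([[np.cos(g), -np.sin(g), 0.0],
                   [np.sin(g),  np.cos(g), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, 2.0, 0.5])
dst[:10] += rng.uniform(0.5, 1.0, (10, 3))  # 10 wrong same-name pairs
keep = ransac_filter(src, dst)
```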
In one embodiment, the step S31 includes:
step S311: acquiring positioning data of a target body through a global navigation satellite system and acquiring attitude data of the target body through an inertial measurement unit;
step S312: and establishing an initial pose transformation relation according to the positioning data of the target body, the posture data of the target body and the pose information of the target body at the last moment.
In one embodiment, the step S4 includes:
step S41: calculating a target pose transformation relation between the global point cloud characteristic information and the target body real-time point cloud characteristic information according to the characteristic matching point pairs;
step S42: and determining the pose information of the target body according to the target pose transformation relation between the global point cloud characteristic information and the real-time point cloud characteristic information of the target body.
In steps S41-S42, a target pose transformation relation R, T between the global point cloud feature information and the target body real-time point cloud feature information is calculated from the matched feature point pairs, giving the position and attitude information of the target body, where R denotes the rotation matrix and T the translation vector.
Specifically, R and T take the form (the rotation matrix, shown only as a figure in the original, is reconstructed here in the standard z-y-x Euler-angle convention):
R = R_z(\gamma) R_y(\beta) R_x(\alpha) =
\begin{bmatrix}
\cos\gamma\cos\beta & \cos\gamma\sin\beta\sin\alpha - \sin\gamma\cos\alpha & \cos\gamma\sin\beta\cos\alpha + \sin\gamma\sin\alpha \\
\sin\gamma\cos\beta & \sin\gamma\sin\beta\sin\alpha + \cos\gamma\cos\alpha & \sin\gamma\sin\beta\cos\alpha - \cos\gamma\sin\alpha \\
-\sin\beta & \cos\beta\sin\alpha & \cos\beta\cos\alpha
\end{bmatrix}
T = [t_x \quad t_y \quad t_z]^T
where \alpha, \beta, \gamma are the rotation angles about the x-, y-, and z-axes, and t_x, t_y, t_z are the components of the translation.
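Given clean matched point pairs, R and T have a closed-form least-squares solution via SVD (the Kabsch method). The numpy sketch below is one standard way to compute them and is not necessarily the procedure used in the patent:

```python
import numpy as np

def estimate_pose(map_pts: np.ndarray, live_pts: np.ndarray):
    """SVD-based least-squares estimate of R and T mapping the target
    body's live points onto the matched map points (Kabsch method)."""
    cm, cl = map_pts.mean(0), live_pts.mean(0)
    h = (live_pts - cl).T @ (map_pts - cm)      # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflections
    R = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    T = cm - R @ cl
    return R, T

# Synthetic matched pairs: rotate about the z-axis by gamma and translate.
g = 0.3
R_true = np.array([[np.cos(g), -np.sin(g), 0.0],
                   [np.sin(g),  np.cos(g), 0.0],
                   [0.0, 0.0, 1.0]])
T_true = np.array([5.0, -2.0, 0.1])
live = np.random.default_rng(0).uniform(-1, 1, (100, 3))
map_pts = live @ R_true.T + T_true
R, T = estimate_pose(map_pts, live)
```

With noise-free correspondences the estimate is exact up to floating-point error; with noisy pairs it is the least-squares optimum.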
In one embodiment, the step S1 includes:
step S11: acquiring global data returned by a sensor;
step S12: and sequentially carrying out adjacent frame matching and data optimization on the global data returned by the sensor by adopting a laser SLAM algorithm to obtain a global point cloud map.
In steps S11-S12, simultaneous localization and mapping (SLAM) operates as follows: the robot starts to move from an unknown position in an unknown environment, localizes itself during movement from its pose estimates and the map, and simultaneously builds an incremental map on the basis of this self-localization, thereby achieving autonomous localization and navigation of the robot.
Building the global point cloud map requires front-end processing and back-end optimization: the front end matches and compares adjacent point cloud frames with the Iterative Closest Point (ICP) algorithm, and the back end optimizes the front-end output with an extended Kalman filter.
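The adjacent-frame ICP matching mentioned above can be illustrated with a minimal point-to-point variant (brute-force correspondences, closed-form rigid update). A production laser SLAM front end would add subsampling, outlier rejection, and convergence checks:

```python
import numpy as np

def best_rigid(a: np.ndarray, b: np.ndarray):
    # Closed-form least-squares rigid transform taking a onto b (Kabsch/SVD).
    ca, cb = a.mean(0), b.mean(0)
    u, _, vt = np.linalg.svd((a - ca).T @ (b - cb))
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cb - r @ ca

def icp(source: np.ndarray, target: np.ndarray, iters: int = 20):
    """Minimal point-to-point ICP: alternate brute-force closest-point
    correspondence with a rigid update, accumulating the total transform."""
    src = source.copy()
    R_total, T_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        d2 = ((src[:, None] - target[None]) ** 2).sum(-1)
        corr = target[d2.argmin(axis=1)]    # current closest-point pairing
        r, t = best_rigid(src, corr)
        src = src @ r.T + t
        R_total, T_total = r @ R_total, r @ T_total + t
    return R_total, T_total

# Two frames related by a small rigid motion; ICP should recover it.
a = 0.03
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
T_true = np.array([0.01, -0.02, 0.005])
source = np.random.default_rng(4).uniform(-1, 1, (60, 3))
target = source @ R_true.T + T_true
R_est, T_est = icp(source, target)
# Residual alignment error against the known ground-truth pairing.
err = np.abs(source @ R_est.T + T_est - target).max()
```

ICP of this kind only converges locally, which is why the back-end extended Kalman filter (and, in the positioning method itself, the GNSS/IMU initial pose) is used to keep the matching near the true solution.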
It should be understood that, although the steps in the flowchart of fig. 2 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict restriction on the order of execution, and the steps may be performed in other orders. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages, which need not be performed at the same moment but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 4, there is provided a positioning device comprising: the data acquisition module 110, the feature information determination module 210, the matching point pair determination module 310, and the pose information determination module 410, wherein:
a data acquisition module 110, configured to acquire a global point cloud map and target real-time point cloud data;
a feature information determining module 210, configured to determine, according to the global point cloud map and the target real-time point cloud data, global point cloud feature information corresponding to the global point cloud map and target real-time point cloud feature information corresponding to the target real-time point cloud data;
a matching point pair determining module 310, configured to perform feature matching on the global point cloud feature information and the target real-time point cloud feature information, and determine a feature matching point pair;
and a pose information determining module 410, configured to determine pose information of the target body according to the feature matching point pairs.
In one embodiment, the feature information determining module 210 includes:
a down-sampling module 2101, configured to perform down-sampling processing on the target real-time point cloud data by using a voxel grid filtering method to obtain filtered target real-time point cloud data;
a feature information extraction module 2102, configured to perform feature extraction on the global point cloud map and the filtered target real-time point cloud data by using a PointNet network, respectively, to obtain global point cloud feature information corresponding to the global point cloud map and target real-time point cloud feature information corresponding to the target real-time point cloud data.
In one embodiment, the down-sampling module 2101 comprises:
the eliminating module 2101a, configured to remove noise points and outliers from the real-time point cloud data of the target body to obtain culled real-time point cloud data of the target body;
a grid establishing module 2101b, configured to create, according to the culled real-time point cloud data of the target body, a three-dimensional voxel grid corresponding to that data;
and a center of gravity point calculation module 2101c, configured to calculate and aggregate the center of gravity of each voxel in the three-dimensional voxel grid to determine the filtered real-time point cloud data of the target body.
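Apart from the outlier-removal step, modules 2101b-2101c amount to the standard voxel grid filter: bucket the points into an axis-aligned 3D grid and keep one centroid per occupied voxel. A minimal NumPy sketch (illustrative only; a library such as PCL or Open3D would normally provide this):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Replace all points falling in each voxel by their centroid
    (the 'center of gravity' of steps 2101b-2101c)."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index and average each group.
    _, inv = np.unique(idx, axis=0, return_inverse=True)
    inv = inv.reshape(-1)
    n = inv.max() + 1
    sums = np.zeros((n, points.shape[1]))
    counts = np.zeros(n)
    np.add.at(sums, inv, points)
    np.add.at(counts, inv, 1)
    return sums / counts[:, None]
```

The voxel size trades accuracy for speed: larger voxels leave fewer points for the later feature extraction and matching stages.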
In one embodiment, the feature information extraction module 2102 comprises:
a data processing module 2102a, configured to sequentially perform point cloud data angle conversion, multi-layer convolution processing, feature data conversion, multi-layer convolution processing, a pooling operation, and fully connected layer processing on the global point cloud map and the filtered target real-time point cloud data respectively, to obtain global point cloud feature information corresponding to the global point cloud map and target real-time point cloud feature information corresponding to the target real-time point cloud data.
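The pipeline of module 2102a mirrors the PointNet idea: a shared per-point MLP followed by a symmetric pooling operation, which makes the extracted global feature invariant to the ordering of the points. A toy NumPy illustration of just that core property (random weights, no T-Net alignment stages, not the trained network the method would use):

```python
import numpy as np

def pointnet_feature(points, w1, w2):
    """Toy PointNet core: a shared per-point MLP followed by a
    symmetric max-pool, so the global feature is order-invariant."""
    h = np.maximum(points @ w1, 0.0)   # shared MLP layer 1 (ReLU)
    h = np.maximum(h @ w2, 0.0)        # shared MLP layer 2 (ReLU)
    return h.max(axis=0)               # max-pool over the point axis
```

Because the max-pool is symmetric in its inputs, shuffling the point cloud leaves the feature vector unchanged, which is what allows raw, unordered point clouds to be consumed directly.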
In one embodiment, the matching point pair determining module 310 includes:
an initial transformation relation obtaining module 3101, configured to obtain an initial pose transformation relation;
a position information determining module 3102, configured to calculate, according to the initial pose transformation relationship, initial position information corresponding to the target real-time point cloud feature information in the global point cloud feature information;
a homonymous point pair determining module 3103, configured to perform feature matching on the target real-time point cloud feature information and initial position information within a preset range by using a Flann algorithm, and determine a homonymous point pair;
and an erroneous point pair removing module 3104, configured to remove erroneous point pairs from the homonymous point pairs by using the Ransac method to obtain feature matching point pairs.
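Modules 3102-3104 can be sketched together as nearest-neighbour feature matching followed by RANSAC outlier rejection. The sketch below is illustrative only: a brute-force search stands in for FLANN, and a pure-translation hypothesis stands in for the full pose model a real implementation would fit:

```python
import numpy as np

def match_and_filter(feat_a, feat_b, pts_a, pts_b, iters=100, tol=0.1, seed=0):
    """Nearest-neighbour feature matching, then a simplified RANSAC
    that keeps the pairs consistent with the best translation."""
    # Candidate homonymous pairs: nearest feature neighbour of each point.
    d2 = ((feat_a[:, None, :] - feat_b[None, :, :]) ** 2).sum(-1)
    nn = d2.argmin(axis=1)
    offsets = pts_b[nn] - pts_a
    rng = np.random.default_rng(seed)
    best = np.zeros(len(pts_a), dtype=bool)
    for _ in range(iters):                       # RANSAC over translations
        t = offsets[rng.integers(len(offsets))]  # hypothesis from one pair
        inliers = np.linalg.norm(offsets - t, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return np.flatnonzero(best), nn[best]
```

Pairs whose spatial offset disagrees with the consensus hypothesis are discarded as erroneous, leaving the feature matching point pairs passed to the pose solver.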
In one embodiment, the initial transformation relation obtaining module 3101 includes:
an attitude data acquisition module 3101a, configured to acquire positioning data of the target body through a global navigation satellite system and acquire attitude data of the target body through an inertial measurement unit;
the initial transformation relation establishing module 3101b is configured to establish an initial pose transformation relation according to the positioning data of the target body, the pose data of the target body, and the pose information of the target body at the previous time.
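The output of module 3101b can be pictured as a 4x4 homogeneous matrix assembled from the GNSS position and the IMU attitude. A hedged sketch, assuming a Z-Y-X (yaw-pitch-roll) Euler convention, which the patent does not specify:

```python
import numpy as np

def initial_pose(xyz, roll, pitch, yaw):
    """Assemble a 4x4 homogeneous pose from GNSS position (xyz) and
    IMU attitude (roll/pitch/yaw; Z-Y-X convention assumed here)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # attitude from the IMU
    T[:3, 3] = xyz             # position from the GNSS
    return T
```

Composing this matrix with the pose at the previous moment gives the initial pose transformation relationship used to seed the matching in module 3102.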
In one embodiment, the pose information determination module 410 includes:
a target transformation relation calculation module 4101, configured to calculate a target pose transformation relation between the global point cloud feature information and the target real-time point cloud feature information according to the feature matching point pairs;
a pose information calculating module 4102, configured to determine the pose information of the target body according to the target pose transformation relationship between the global point cloud feature information and the target body real-time point cloud feature information.
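Given the feature matching point pairs, the target pose transformation of module 4101 is the classical least-squares rigid alignment (the Umeyama/Kabsch solution), from which module 4102 can read off the position and heading of the target body. An illustrative NumPy sketch (the exact pose parameterization is an assumption here):

```python
import numpy as np

def pose_from_matches(src, dst):
    """Least-squares rigid transform between matched point pairs;
    the target-body pose is read off the resulting matrix."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T           # rotation: real-time frame -> map frame
    t = mu_d - R @ mu_s          # translation: position in the map
    heading = np.arctan2(R[1, 0], R[0, 0])   # yaw of the target body
    return t, heading
```

With correct, outlier-free matches the recovered translation is the target body's position in the global map and the yaw its heading, i.e. the pose information output by the method.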
In one embodiment, the data acquisition module 110 includes:
a global data acquisition module 1101, configured to acquire global data returned by the sensor;
and the global point cloud map determining module 1102 is configured to sequentially perform adjacent frame matching and data optimization on the global data returned by the sensor by using a laser Slam algorithm to obtain a global point cloud map.
For the specific definition of the positioning device, reference may be made to the definition of the positioning method above, which is not repeated here. Each module in the positioning device can be implemented wholly or partially by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server whose internal structure may be as shown in fig. 5. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing relevant data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements a positioning method.
Those skilled in the art will appreciate that the architecture shown in fig. 5 is merely a block diagram of part of the structure related to the present solution and does not limit the computer devices to which the present solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring a global point cloud map and real-time point cloud data of a target body;
determining global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data according to the global point cloud map and the target body real-time point cloud data;
performing feature matching on the global point cloud feature information and the target body real-time point cloud feature information, and determining feature matching point pairs;
and determining the pose information of the target body according to the feature matching point pairs.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a global point cloud map and real-time point cloud data of a target body;
determining global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data according to the global point cloud map and the target body real-time point cloud data;
performing feature matching on the global point cloud feature information and the target body real-time point cloud feature information, and determining feature matching point pairs;
and determining the pose information of the target body according to the feature matching point pairs.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination contains no contradiction, it should be considered within the scope of this specification.
The above-described embodiments express only several implementations of the present application, and although their description is relatively specific and detailed, they should not be construed as limiting the scope of the invention. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A positioning method, characterized in that the method comprises:
acquiring a global point cloud map and real-time point cloud data of a target body;
determining, according to the global point cloud map and the real-time point cloud data of the target body, global point cloud feature information corresponding to the global point cloud map and target body real-time point cloud feature information corresponding to the real-time point cloud data of the target body;
performing feature matching on the global point cloud feature information and the target body real-time point cloud feature information to determine feature matching point pairs;
determining pose information of the target body according to the feature matching point pairs.

2. The method according to claim 1, characterized in that the determining, according to the global point cloud map and the real-time point cloud data of the target body, of the global point cloud feature information corresponding to the global point cloud map and the target body real-time point cloud feature information corresponding to the real-time point cloud data of the target body comprises:
performing down-sampling on the real-time point cloud data of the target body by a voxel grid filtering method to obtain filtered real-time point cloud data of the target body;
performing feature extraction on the global point cloud map and the filtered real-time point cloud data of the target body respectively by using a PointNet network, to obtain the global point cloud feature information corresponding to the global point cloud map and the target body real-time point cloud feature information corresponding to the real-time point cloud data of the target body.

3. The method according to claim 2, characterized in that the performing of down-sampling on the real-time point cloud data of the target body by the voxel grid filtering method to obtain the filtered real-time point cloud data of the target body comprises:
removing noise points and outliers from the real-time point cloud data of the target body to obtain culled real-time point cloud data of the target body;
creating, according to the culled real-time point cloud data of the target body, a three-dimensional voxel grid corresponding to the culled real-time point cloud data of the target body;
calculating and aggregating the center of gravity of each voxel in the three-dimensional voxel grid to determine the filtered real-time point cloud data of the target body.

4. The method according to claim 3, characterized in that the performing of feature extraction on the global point cloud map and the filtered real-time point cloud data of the target body respectively by using the PointNet network comprises:
sequentially performing point cloud data angle conversion, multi-layer convolution processing, feature data conversion, multi-layer convolution processing, a pooling operation and fully connected layer processing on the global point cloud map and the filtered real-time point cloud data of the target body respectively, to obtain the global point cloud feature information corresponding to the global point cloud map and the target body real-time point cloud feature information corresponding to the real-time point cloud data of the target body;
and/or, the performing of feature matching on the global point cloud feature information and the target body real-time point cloud feature information to determine the feature matching point pairs comprises:
acquiring an initial pose transformation relationship;
calculating, according to the initial pose transformation relationship, initial position information corresponding to the target body real-time point cloud feature information in the global point cloud feature information;
performing feature matching on the target body real-time point cloud feature information and the initial position information within a preset range by a Flann algorithm to determine homonymous point pairs;
removing erroneous point pairs from the homonymous point pairs by a Ransac method to obtain the feature matching point pairs.

5. The method according to claim 4, characterized in that the acquiring of the initial pose transformation relationship comprises:
acquiring positioning data of the target body through a global navigation satellite system and acquiring attitude data of the target body through an inertial measurement unit;
establishing the initial pose transformation relationship according to the positioning data of the target body, the attitude data of the target body and the pose information of the target body at the previous moment.

6. The method according to claim 4, characterized in that the determining of the pose information of the target body according to the feature matching point pairs comprises:
calculating, according to the feature matching point pairs, a target pose transformation relationship between the global point cloud feature information and the target body real-time point cloud feature information;
determining the pose information of the target body according to the target pose transformation relationship between the global point cloud feature information and the target body real-time point cloud feature information.

7. The method according to claim 1, characterized in that the acquiring of the global point cloud map comprises:
acquiring global data returned by a sensor;
sequentially performing adjacent-frame matching and data optimization on the global data returned by the sensor by a laser SLAM algorithm to obtain the global point cloud map.

8. A positioning device, characterized in that the device comprises:
a data acquisition module, configured to acquire a global point cloud map and real-time point cloud data of a target body;
a feature information determination module, configured to determine, according to the global point cloud map and the real-time point cloud data of the target body, global point cloud feature information corresponding to the global point cloud map and target body real-time point cloud feature information corresponding to the real-time point cloud data of the target body;
a matching point pair determination module, configured to perform feature matching on the global point cloud feature information and the target body real-time point cloud feature information to determine feature matching point pairs;
a pose information determination module, configured to determine pose information of the target body according to the feature matching point pairs.

9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.

10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
CN201910873938.7A 2019-09-17 2019-09-17 Positioning method, apparatus, computer equipment and storage medium Pending CN112241010A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910873938.7A CN112241010A (en) 2019-09-17 2019-09-17 Positioning method, apparatus, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910873938.7A CN112241010A (en) 2019-09-17 2019-09-17 Positioning method, apparatus, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112241010A true CN112241010A (en) 2021-01-19

Family

ID=74168161

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910873938.7A Pending CN112241010A (en) 2019-09-17 2019-09-17 Positioning method, apparatus, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112241010A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950708A (en) * 2021-02-05 2021-06-11 深圳市优必选科技股份有限公司 Positioning method, positioning device and robot
CN113008274A (en) * 2021-03-19 2021-06-22 奥特酷智能科技(南京)有限公司 Vehicle initialization positioning method, system and computer readable medium
CN113447032A (en) * 2021-06-29 2021-09-28 东软睿驰汽车技术(沈阳)有限公司 Positioning method, positioning device, electronic equipment and storage medium
CN113763475A (en) * 2021-09-24 2021-12-07 北京百度网讯科技有限公司 Positioning method, device, equipment, system, medium and automatic driving vehicle
CN113936194A (en) * 2021-11-18 2022-01-14 广西柳工机械股份有限公司 A method, device, equipment and storage medium for determining the position and attitude of a loader
CN114353780A (en) * 2021-12-31 2022-04-15 高德软件有限公司 Attitude optimization method and equipment
CN115201833A (en) * 2021-04-08 2022-10-18 中强光电股份有限公司 Object positioning method and object positioning system
WO2022247306A1 (en) * 2021-05-28 2022-12-01 同济大学 Unmanned aerial vehicle positioning method based on millimeter wave radar
CN117274331A (en) * 2023-09-19 2023-12-22 北京斯年智驾科技有限公司 Positioning registration optimization method, system, device and storage medium
CN117670785A (en) * 2022-08-31 2024-03-08 北京三快在线科技有限公司 Ghost detection method for point cloud maps
CN118379356A (en) * 2024-06-21 2024-07-23 南方电网调峰调频发电有限公司 Object security determination method, device, computer equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109308737A (en) * 2018-07-11 2019-02-05 重庆邮电大学 A three-stage point cloud registration method for mobile robot V-SLAM method
CN110047142A (en) * 2019-03-19 2019-07-23 中国科学院深圳先进技术研究院 No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109308737A (en) * 2018-07-11 2019-02-05 重庆邮电大学 A three-stage point cloud registration method for mobile robot V-SLAM method
CN110047142A (en) * 2019-03-19 2019-07-23 中国科学院深圳先进技术研究院 No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO Xingdong et al.: "Principles and Engineering Applications of Mine 3D Laser Digital Measurement", 31 January 2016, Metallurgical Industry Press, pages: 51 - 54 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950708A (en) * 2021-02-05 2021-06-11 深圳市优必选科技股份有限公司 Positioning method, positioning device and robot
CN112950708B (en) * 2021-02-05 2023-12-15 深圳市优必选科技股份有限公司 Positioning method, positioning device and robot
CN113008274A (en) * 2021-03-19 2021-06-22 奥特酷智能科技(南京)有限公司 Vehicle initialization positioning method, system and computer readable medium
CN115201833A (en) * 2021-04-08 2022-10-18 中强光电股份有限公司 Object positioning method and object positioning system
WO2022247306A1 (en) * 2021-05-28 2022-12-01 同济大学 Unmanned aerial vehicle positioning method based on millimeter wave radar
CN113447032A (en) * 2021-06-29 2021-09-28 东软睿驰汽车技术(沈阳)有限公司 Positioning method, positioning device, electronic equipment and storage medium
CN113763475A (en) * 2021-09-24 2021-12-07 北京百度网讯科技有限公司 Positioning method, device, equipment, system, medium and automatic driving vehicle
US12270659B2 (en) 2021-09-24 2025-04-08 Beijing Baidu Netcom Science Technology Co., Ltd. Positioning method and apparatus, device, system, medium and self-driving vehicle
CN113936194A (en) * 2021-11-18 2022-01-14 广西柳工机械股份有限公司 A method, device, equipment and storage medium for determining the position and attitude of a loader
CN114353780A (en) * 2021-12-31 2022-04-15 高德软件有限公司 Attitude optimization method and equipment
CN114353780B (en) * 2021-12-31 2024-04-02 高德软件有限公司 Gesture optimization method and device
CN117670785A (en) * 2022-08-31 2024-03-08 北京三快在线科技有限公司 Ghost detection method for point cloud maps
CN117274331A (en) * 2023-09-19 2023-12-22 北京斯年智驾科技有限公司 Positioning registration optimization method, system, device and storage medium
CN118379356A (en) * 2024-06-21 2024-07-23 南方电网调峰调频发电有限公司 Object security determination method, device, computer equipment and storage medium
CN118379356B (en) * 2024-06-21 2025-02-11 南方电网调峰调频发电有限公司 Object security determination method, device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112241010A (en) Positioning method, apparatus, computer equipment and storage medium
US10725472B2 (en) Object tracking using depth information
CN110176032B (en) Three-dimensional reconstruction method and device
CN110047108B (en) Unmanned aerial vehicle pose determination method and device, computer equipment and storage medium
CN107067415A (en) A kind of quick accurate positioning method of target based on images match
CN110686677A (en) Global positioning method based on geometric information
CN107818598B (en) Three-dimensional point cloud map fusion method based on visual correction
CN112837352A (en) Image-based data processing method, device and equipment, automobile and storage medium
CN109059941B (en) Characteristics map construction method, vision positioning method and corresponding intrument
CN111860651B (en) Monocular vision-based semi-dense map construction method for mobile robot
CN111402328B (en) A position and attitude calculation method and device based on laser odometry
CN113960614A (en) Elevation map construction method based on frame-map matching
CN117036484B (en) Visual positioning and mapping method, system, equipment and medium based on geometry and semantics
CN114549286A (en) Lane line generation method and device, computer-readable storage medium and terminal
CN114049362A (en) Transform-based point cloud instance segmentation method
CN112884804A (en) Action object tracking method and related equipment
CN118552711B (en) Image processing method and system for robot navigation vision positioning
Zhao et al. An improved vision-based algorithm for unmanned aerial vehicles autonomous landing
Amorós et al. Trajectory estimation and optimization through loop closure detection, using omnidirectional imaging and global-appearance descriptors
CN119748485A (en) Track planning method and system for welding robot, storage medium and welding robot
CN119556277A (en) Search and Matching Method of Space Target Attitude Model in Optical-Radar Co-site Observation
CN112747752B (en) Vehicle positioning method, device, equipment and storage medium based on laser odometer
CN114310887A (en) 3D human leg recognition method and device, computer equipment and storage medium
CN113112565A (en) Map construction method based on robot vision closed-loop detection algorithm, storage medium and equipment
CN108447084A (en) Stereo matching compensation method based on ORB features

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 100176 floor 10, building 1, zone 2, yard 9, Taihe 3rd Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Beijing National New Energy Vehicle Technology Innovation Center Co.,Ltd.

Address before: 100176 1705, block a, building 1, No. 10, Ronghua Middle Road, economic and Technological Development Zone, Daxing District, Beijing

Applicant before: BEIJING NEW ENERGY VEHICLE TECHNOLOGY INNOVATION CENTER Co.,Ltd.

RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20210119