
CN113920258B - Map generation method, device, medium and electronic equipment - Google Patents

Map generation method, device, medium and electronic equipment

Info

Publication number
CN113920258B
CN113920258B (application CN202111130227.4A)
Authority
CN
China
Prior art keywords
point cloud
pose information
frame
point
point clouds
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111130227.4A
Other languages
Chinese (zh)
Other versions
CN113920258A (en)
Inventor
余丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202111130227.4A priority Critical patent/CN113920258B/en
Publication of CN113920258A publication Critical patent/CN113920258A/en
Priority to PCT/CN2022/076191 priority patent/WO2023045224A1/en
Application granted granted Critical
Publication of CN113920258B publication Critical patent/CN113920258B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 — Geographic models

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Instructional Devices (AREA)
  • Image Analysis (AREA)

Abstract


The present disclosure relates to a map generation method, apparatus, medium and electronic device. The method includes: obtaining multi-frame point clouds collected by a point cloud acquisition device in a target area; determining whether the signal strength of the target area meets a preset high-strength requirement; determining point cloud matching pairs from the multi-frame point clouds according to the result of that determination, and determining the target pose information of each of the multi-frame point clouds according to the matching pairs, wherein the same points exist between the two frames of point clouds constituting a matching pair; and generating a map of the target area according to the target pose information of each of the multi-frame point clouds. Through the above technical scheme, by constructing point cloud matching pairs and determining each frame's target pose information from them, the dependence on the global positioning system is reduced, and there is no need to find the nearest point for every point to match as in the related art; the amount of calculation is relatively small, which can improve the efficiency and accuracy of map generation.

Description

Map generation method, map generation device, medium and electronic equipment
Technical Field
The disclosure relates to the field of automatic driving, and in particular relates to a map generation method, a map generation device, a map generation medium and electronic equipment.
Background
The application scenarios of maps are very wide. For example, an unmanned vehicle can use an electronic map to recognize its surrounding environment and thereby control its steering, acceleration and deceleration, and the scale of map acquisition keeps expanding as unmanned vehicles become more common. Taking food delivery as an example, an unmanned vehicle can perform delivery tasks in a city to bring items to users and can realize contactless delivery. Because the unmanned vehicle needs a map to recognize its surroundings, its reliable operation depends on a high-accuracy map; however, due to the diversity of urban scenes, it is difficult to guarantee the accuracy and quality of the constructed map.
Disclosure of Invention
An object of the present disclosure is to provide a map generation method, apparatus, medium, and electronic device to partially solve the above-described problems in the related art.
To achieve the above object, in a first aspect, the present disclosure provides a map generation method, the method including:
acquiring multi-frame point clouds acquired by point cloud acquisition equipment in a target area;
Determining whether the signal intensity of the target area meets a preset high-intensity requirement;
Determining a point cloud matching pair from the multi-frame point clouds according to a determination result of whether the signal intensity of the target area meets a preset high-intensity requirement or not, and determining respective target pose information of the multi-frame point clouds according to the point cloud matching pair, wherein the same points exist between two frames of point clouds forming the point cloud matching pair;
And generating a map of the target area according to the respective target pose information of the multi-frame point clouds.
Optionally, in a case that the determining result indicates that the signal intensity of the target area meets a preset high-intensity requirement, determining the point cloud matching pair from the multi-frame point clouds includes:
for each frame of point cloud, forming a point cloud matching pair between that frame and the other frame in the multi-frame point clouds that shares the same points with it and whose timestamp is farthest from that frame's timestamp.
Optionally, when the determining result indicates that the signal intensity of the target area meets a preset high-intensity requirement, determining, according to the point cloud matching pair, the target pose information of each of the multi-frame point clouds includes:
determining respective spliced point clouds of two frames of point clouds forming the point cloud matching pairs aiming at each point cloud matching pair, and determining relative pose information between the two frames of point clouds according to the respective spliced point clouds of the two frames of point clouds, wherein the spliced point clouds of each frame of point clouds are spliced by point clouds acquired in a designated area by the point cloud acquisition equipment, and the designated area is an area around the position where the point cloud acquisition equipment acquires the frame of point clouds;
And determining the target pose information of each of the multi-frame point clouds according to the relative pose information corresponding to each point cloud matching pair.
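As a rough illustration of the splicing step above, the sketch below merges the points of all frames acquired within a designated radius of a given acquisition position. The function name, the (position, points) data layout, and the Euclidean radius test are assumptions made for illustration, not the patent's implementation:

```python
def splice_point_cloud(frames, center, radius):
    """frames: list of (acquire_position, points) tuples, where
    acquire_position is the (x, y, z) at which the frame was collected.
    Returns the union of points from frames acquired within `radius`
    of `center` -- a sketch of the "designated area" stitching."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    spliced = []
    for pos, pts in frames:
        if dist(pos, center) <= radius:  # frame lies in the designated area
            spliced.extend(pts)
    return spliced
```

The spliced clouds of the two frames in a matching pair would then be registered against each other to obtain the relative pose information mentioned in the claim.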
Optionally, in a case that the signal strength of the target area represented by the determination result does not meet a preset high strength requirement, the determining a point cloud matching pair from the multi-frame point clouds includes:
For each frame of point cloud, forming a point cloud matching pair between that frame and each other frame in the multi-frame point clouds that shares the same points with it.
Optionally, when the determining result indicates that the signal strength of the target area does not meet the preset high-strength requirement, determining, according to the point cloud matching pair, the target pose information of each of the multi-frame point clouds includes:
Determining respective characteristic points in the two frames of point clouds forming each point cloud matching pair, determining homonymous points of the two frames from the characteristic points, and iteratively adjusting the current pose information of each of the two frames of point clouds with the goal of minimizing the distance information between the homonymous points, until the distance information between the homonymous points is smaller than a preset distance threshold, so as to obtain the target pose information of each of the two frames of point clouds.
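The iterative adjustment described in this claim can be sketched as follows. This is a deliberately simplified, translation-only version (the pose in the text also includes rotation angles), and the step size and stopping threshold are hypothetical:

```python
def refine_pose(pts_a, pts_b, dist_threshold=0.01, step=0.5, max_iters=100):
    """pts_a / pts_b: lists of (x, y, z) homonymous points matched between
    two frames. Nudges frame B's translation until the mean inter-point
    distance falls below the preset threshold; returns the translation
    and the final mean distance."""
    tx = ty = tz = 0.0
    for _ in range(max_iters):
        # Residuals of frame-B points (after the current translation) vs frame A.
        dx = [a[0] - (b[0] + tx) for a, b in zip(pts_a, pts_b)]
        dy = [a[1] - (b[1] + ty) for a, b in zip(pts_a, pts_b)]
        dz = [a[2] - (b[2] + tz) for a, b in zip(pts_a, pts_b)]
        mean_dist = sum((x * x + y * y + z * z) ** 0.5
                        for x, y, z in zip(dx, dy, dz)) / len(pts_a)
        if mean_dist < dist_threshold:  # stopping condition from the claim
            break
        # Damped step toward the mean residual (gradient of the squared error).
        tx += step * sum(dx) / len(dx)
        ty += step * sum(dy) / len(dy)
        tz += step * sum(dz) / len(dz)
    return (tx, ty, tz), mean_dist
```

Because only matched homonymous points are used, the loop avoids the per-point nearest-neighbor search that the related art requires.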
Optionally, the determining whether the signal strength of the target area meets a preset high strength requirement includes:
Determining first equipment pose information of the point cloud acquisition equipment in the process of acquiring the multi-frame point cloud in the target area at intervals of a first preset time period, and first confidence degrees respectively corresponding to the first equipment pose information;
And determining whether the signal intensity of the target area meets the preset high-intensity requirement according to the first confidence coefficient.
Optionally, the determining, at intervals of the first preset duration, the first equipment pose information of the point cloud acquisition equipment while it acquires the multi-frame point clouds in the target area, and the first confidence degrees respectively corresponding to the first equipment pose information, includes:
Acquiring the second equipment pose information that the navigation equipment acquires of itself in the target area at intervals of a second preset duration, and the second confidence degrees respectively corresponding to the second equipment pose information;
determining the first equipment pose information according to the relative position relation between the navigation equipment and the point cloud acquisition equipment and the second equipment pose information;
And determining target second equipment pose information closest to the timestamp of the first equipment pose information in the second equipment pose information according to each piece of first equipment pose information, and taking the second confidence coefficient of the target second equipment pose information as a first confidence coefficient corresponding to the first equipment pose information.
Optionally, the determining, according to the first confidence, whether the signal strength of the target area meets a preset high strength requirement includes:
If the proportion of first confidence degrees higher than a preset confidence threshold is greater than a preset proportion threshold, or the number of first confidence degrees higher than the preset confidence threshold is greater than a preset number threshold, determining that the signal strength of the target area meets the preset high-strength requirement.
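The proportion-or-count rule in this claim can be sketched as below; all threshold values are hypothetical defaults, not taken from the patent:

```python
def meets_high_strength(confidences, conf_threshold=0.8,
                        ratio_threshold=0.9, count_threshold=50):
    """confidences: the first confidence degrees sampled during acquisition.
    The requirement is met if EITHER the proportion of high-confidence
    samples exceeds ratio_threshold OR their absolute count exceeds
    count_threshold."""
    high = [c for c in confidences if c > conf_threshold]
    ratio_ok = len(high) / len(confidences) > ratio_threshold
    count_ok = len(high) > count_threshold
    return ratio_ok or count_ok
```

The count branch lets a long traversal qualify even when some low-confidence samples dilute the proportion.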
In a second aspect, the present disclosure provides a map generation apparatus, the apparatus comprising:
the acquisition module is configured to acquire multi-frame point clouds acquired by the point cloud acquisition equipment in the target area;
A first determining module configured to determine whether the signal strength of the target area meets a preset high strength requirement;
The second determining module is configured to determine a point cloud matching pair from the multi-frame point clouds according to a determining result of whether the signal intensity of the target area meets a preset high-intensity requirement, and determine respective target pose information of the multi-frame point clouds according to the point cloud matching pair, wherein the two frames of point clouds forming the point cloud matching pair have the same point;
And the generation module is configured to generate a map of the target area according to the respective target pose information of the multi-frame point clouds.
Optionally, when the determination result indicates that the signal strength of the target area meets the preset high-strength requirement, the second determining module is configured to determine the point cloud matching pairs from the multi-frame point clouds by, for each frame of point cloud, forming a point cloud matching pair between that frame and the other frame in the multi-frame point clouds that shares the same points with it and whose timestamp is farthest from that frame's timestamp.
Optionally, in a case that the determination result characterizes that the signal strength of the target area meets the preset high-strength requirement, the second determining module includes:
The first determining submodule is configured to determine, for each point cloud matching pair, respective spliced point clouds of two frames of point clouds forming the point cloud matching pair, and determine relative pose information between the two frames of point clouds according to the respective spliced point clouds of the two frames of point clouds, wherein the spliced point clouds of each frame of point clouds are formed by splicing point clouds acquired by the point cloud acquisition equipment in a designated area, and the designated area is an area around the position where the point cloud acquisition equipment acquires the frame of point clouds;
And the second determining submodule is configured to determine the target pose information of each of the multi-frame point clouds according to the relative pose information corresponding to each point cloud matching pair.
Optionally, when the determination result indicates that the signal strength of the target area does not meet the preset high-strength requirement, the second determining module is configured to determine the point cloud matching pairs from the multi-frame point clouds by, for each frame of point cloud, forming a point cloud matching pair between that frame and each other frame in the multi-frame point clouds that shares the same points with it.
Optionally, in a case that the determination result characterizes that the signal strength of the target area does not meet the preset high-strength requirement, the second determining module includes:
And the third determining submodule is configured to determine respective characteristic points in two frames of point clouds forming the point cloud matching pair aiming at each point cloud matching pair, determine homonymous points in the two frames of point clouds from the characteristic points, and iteratively adjust respective current pose information of the two frames of point clouds by taking the distance information between the homonymous points as a target until the distance information between the homonymous points is smaller than a preset distance threshold value to obtain respective target pose information of the two frames of point clouds.
Optionally, the first determining module includes:
A fourth determining submodule, configured to determine first equipment pose information of each first preset duration and first confidence degrees respectively corresponding to the first equipment pose information in the process that the point cloud acquisition equipment acquires the multi-frame point cloud in the target area;
and a fifth determining submodule configured to determine whether the signal strength of the target area meets a preset high-strength requirement according to the first confidence.
Optionally, the fourth determining sub-module includes:
the acquisition sub-module is configured to acquire second equipment pose information of the navigation equipment acquired by the navigation equipment in the target area every second preset time length and second confidence degrees corresponding to the second equipment pose information respectively;
A sixth determination submodule configured to determine the first device pose information according to the relative positional relationship between the navigation device and the point cloud acquisition device and the second device pose information;
a seventh determining submodule configured to determine, for each piece of first equipment pose information, target second equipment pose information closest to a timestamp of the first equipment pose information in the second equipment pose information, and take a second confidence level of the target second equipment pose information as a first confidence level corresponding to the first equipment pose information.
Optionally, the fifth determining submodule is configured to determine that the signal strength of the target area meets the preset high-strength requirement if the proportion of first confidence degrees higher than a preset confidence threshold is greater than a preset proportion threshold, or the number of first confidence degrees higher than the preset confidence threshold is greater than a preset number threshold.
In a third aspect, the present disclosure provides a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the steps of the method provided by the first aspect of the present disclosure.
In a fourth aspect, the present disclosure provides an electronic device comprising a memory having stored thereon a computer program, and a processor for executing the computer program in the memory to implement the steps of the method provided in the first aspect of the present disclosure.
Through the above technical solution, the multi-frame point clouds acquired by the point cloud acquisition equipment in the target area are obtained, whether the signal strength of the target area meets the preset high-strength requirement is determined, point cloud matching pairs are determined from the multi-frame point clouds according to the determination result, the target pose information of each of the multi-frame point clouds is determined according to the matching pairs, and the map of the target area is generated according to that target pose information. In this way, by constructing point cloud matching pairs and determining each frame's target pose information from them, the pose of the point cloud acquisition equipment no longer has to be obtained directly from the pose information of the global positioning system and the inertial measurement unit; the dependence on the global positioning system is reduced, and a higher-precision map can be generated even where the global positioning system signal is weak. In addition, any two frames of point clouds that share the same points can form a point cloud matching pair, so there is no need, as in the related art, to find the nearest neighbor of every point for matching, and the amount of computation is relatively small. Because the matching pairs are determined according to whether the signal strength of the target area meets the preset high-strength requirement, the number of constructed pairs is adapted to the signal strength of the target area, which improves both the efficiency and the accuracy of map generation.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification, illustrate the disclosure and together with the description serve to explain, but do not limit the disclosure. In the drawings:
fig. 1 is a flowchart illustrating a map generation method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a map generation method according to another exemplary embodiment.
FIG. 3 is a flowchart illustrating a method of determining first device pose information and a first confidence, according to an example embodiment.
Fig. 4a is a schematic diagram of a map generated using an embodiment in the related art.
Fig. 4b is a schematic diagram of a map generated using an embodiment in the related art.
Fig. 4c is a schematic diagram of a map generated using an embodiment of the present disclosure.
Fig. 5 is a block diagram of a map generating apparatus according to an exemplary embodiment.
Fig. 6 is a block diagram of an electronic device, according to an example embodiment.
Fig. 7 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
In the related art, map data is generally collected by a vehicle-mounted laser radar, and the pose of the vehicle-mounted laser radar is interpolated from the post-processed poses of a GPS (Global Positioning System) receiver and an IMU (Inertial Measurement Unit), so the dependence on GPS is large. In open places such as highways, this method can provide centimeter-level precision, but in scenes such as urban high-rises and tree-lined boulevards, satellite signals are easily lost or interfered with; when the pose of the vehicle-mounted laser radar is obtained directly from the GPS and IMU, the radar pose in a weak-GPS scene is inaccurate, the built map has ghosting, and the precision of the map is affected. In addition, in the related art, when the radar pose is optimized, each point in the point cloud must be matched to its closest point: on the one hand the amount of computation is very large and the efficiency low, and on the other hand noise interference from dynamic vehicles easily causes the closest-point matching to fail.
In view of the above, the present disclosure provides a map generating method, apparatus, medium and electronic device, so as to partially solve the above-mentioned problems in the related art.
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the disclosure, are not intended to limit the disclosure.
Fig. 1 is a flowchart illustrating a map generation method according to an exemplary embodiment, which may be applied to an electronic device having processing capability, such as a terminal or a server. As shown in fig. 1, the method may include S101 to S104.
In S101, a multi-frame point cloud acquired by a point cloud acquisition device in a target area is acquired.
The target area may be any area where a map needs to be constructed; for example, it may be a street in a city, or a road in a campus or residential community. The point cloud collecting device may be, for example, a vehicle-mounted laser radar. The vehicle carrying the point cloud collecting device may travel one or more times through the target area, and during travel the device continuously scans and collects the surrounding scene, each full scan revolution yielding one frame; as the vehicle travels, the device thus collects multiple frames of point clouds in the target area. The number of frames is not limited in this disclosure.
In S102, it is determined whether the signal strength of the target area satisfies a preset high-strength requirement.
For example, whether the signal strength of the target area meets the preset high-strength requirement may be determined according to the confidence level of the equipment pose information of the point cloud acquisition equipment in the target area, where the confidence level represents how trustworthy the pose information is; the higher the confidence level, the higher the signal strength of the target area can be considered.
In S103, according to the determination result of whether the signal intensity of the target area meets the preset high-intensity requirement, a point cloud matching pair is determined from the multi-frame point clouds, and according to the point cloud matching pair, the respective target pose information of the multi-frame point clouds is determined.
The same points exist between the two frames of point clouds forming a point cloud matching pair. The point cloud collecting device may scan the same object at different times; for example, if the same tree is scanned at both a first time and a second time, the frame collected at the first time and the frame collected at the second time share the same points, and the two frames may form a point cloud matching pair (frame pair).
For example, in the disclosure, the vehicle carrying the point cloud collecting device may make one or more round trips in the target area. Suppose the device collects one frame at a first time while the vehicle travels in direction 1 in the target area, and another frame at a second time while the vehicle travels in direction 2, where directions 1 and 2 may be opposite. As in the example above, if the frame collected at the first time and the frame collected at the second time share the same points, the two frames may form a point cloud matching pair; that is, the two frames forming a matching pair need not be adjacent in time.
Point cloud matching pairs are determined from the multi-frame point clouds according to the result of determining whether the signal strength of the target area meets the preset high-strength requirement. When the signal strength does not meet the requirement, the signal strength is characterized as not high enough, and more point cloud matching pairs can be constructed to prevent the weak signal strength from degrading the map quality. When the signal strength meets the requirement, the signal strength is characterized as high, and the number of matching pairs constructed can be relatively small in order to improve computational efficiency.
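A minimal sketch of the two pairing strategies discussed above, representing each frame as a timestamp plus a set of point identifiers — an assumption made for illustration, since in practice whether two frames share the same points would be established geometrically:

```python
def build_matching_pairs(frames, high_signal):
    """frames: list of (timestamp, point_id_set) tuples. Two frames can
    form a pair only if their point sets intersect ("same points").
    Returns a set of index pairs (i, j) with i < j."""
    pairs = set()
    for i, (ti, pi) in enumerate(frames):
        # All other frames that share at least one point with frame i.
        candidates = [(j, tj) for j, (tj, pj) in enumerate(frames)
                      if j != i and pi & pj]
        if not candidates:
            continue
        if high_signal:
            # High signal strength: pair only with the overlapping frame
            # farthest in time, keeping the pair count small.
            j = max(candidates, key=lambda c: abs(c[1] - ti))[0]
            pairs.add(tuple(sorted((i, j))))
        else:
            # Weak signal: pair with every overlapping frame for robustness.
            for j, _ in candidates:
                pairs.add(tuple(sorted((i, j))))
    return pairs
```

The `high_signal` flag corresponds to the determination result of the preset high-strength requirement; more pairs are built in the weak-signal branch.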
There may be multiple point cloud matching pairs, and the target pose information of each frame of point cloud can be determined from them. Different determination results of whether the signal strength of the target area meets the preset high-strength requirement can lead to different ways of determining each frame's target pose information, so both the number of constructed matching pairs and the pose-determination method are adapted to the signal strength of the target area. The target pose information of a frame of point cloud refers to the position information and attitude information of the point cloud collecting device at the moment that frame was collected; the position information may be three-dimensional coordinates, and the attitude information may include the pitch angle, roll angle, and yaw (deflection) angle.
In S104, a map of the target area is generated based on the target pose information of each of the multi-frame point clouds.
After the target pose information of each of the multi-frame point clouds is determined, each frame's target pose information can be converted into a global coordinate system. If a three-dimensional map is required, a three-dimensional map of the target area can be generated from the pose information converted into the global coordinate system; if a two-dimensional map is required, the pose information in the global coordinate system can be projected onto a two-dimensional plane to obtain a two-dimensional map of the target area.
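A simplified sketch of this final step: each frame's points are transformed into the global frame using that frame's target pose, and a two-dimensional map is obtained by dropping the height coordinate. The pose here is reduced to position plus yaw; the full pose in the text also carries pitch and roll:

```python
import math

def frame_to_global(points, pose):
    """points: local (x, y, z) points of one frame.
    pose: (x0, y0, z0, yaw) -- a simplified target pose.
    Rotates each point by yaw, then translates into the global frame."""
    x0, y0, z0, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    return [(x0 + c * x - s * y, y0 + s * x + c * y, z0 + z)
            for x, y, z in points]

def project_2d(global_points):
    # Dropping the height coordinate yields the 2-D map footprint.
    return [(x, y) for x, y, _ in global_points]
```

Accumulating `frame_to_global` over all frames yields the 3-D map; `project_2d` then gives the 2-D map when needed.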
Through the above technical solution, the multi-frame point clouds acquired by the point cloud acquisition equipment in the target area are obtained, whether the signal strength of the target area meets the preset high-strength requirement is determined, point cloud matching pairs are determined from the multi-frame point clouds according to the determination result, the target pose information of each of the multi-frame point clouds is determined according to the matching pairs, and the map of the target area is generated according to that target pose information. In this way, by constructing point cloud matching pairs and determining each frame's target pose information from them, the pose of the point cloud acquisition equipment no longer has to be obtained directly from the pose information of the global positioning system and the inertial measurement unit; the dependence on the global positioning system is reduced, and a higher-precision map can be generated even where the global positioning system signal is weak. In addition, any two frames of point clouds that share the same points can form a point cloud matching pair, so there is no need, as in the related art, to find the nearest neighbor of every point for matching, and the amount of computation is relatively small. Because the matching pairs are determined according to whether the signal strength of the target area meets the preset high-strength requirement, the number of constructed pairs is adapted to the signal strength of the target area, which improves both the efficiency and the accuracy of map generation.
Fig. 2 is a flowchart of a map generation method according to another exemplary embodiment, which may include S201 to S209 as shown in fig. 2, wherein S102 may include S202 and S203.
In S201, a multi-frame point cloud acquired by a point cloud acquisition device in a target area is acquired. The embodiment of this step S201 can refer to S101.
In S202, first device pose information of each first preset duration and first confidence degrees corresponding to the first device pose information respectively in a process that the point cloud acquisition device acquires multi-frame point clouds in a target area are determined.
An exemplary embodiment of this step S202 may include S2021 to S2023 as shown in fig. 3.
In S2021, second device pose information of the navigation device acquired by the navigation device at intervals of a second preset duration in the target area and second confidence degrees corresponding to the second device pose information respectively are acquired.
The navigation device may be an integrated navigation device, for example one formed by a GPS receiver and an IMU; the navigation device and the point cloud collecting device may be mounted on the vehicle at the same time. The navigation device may acquire its own second device pose information at intervals of the second preset duration while the vehicle travels in the target area; the second preset duration may be, for example, 0.01 s, i.e., the navigation device may acquire its pose information at a frequency of 100 Hz.
The navigation device can output second confidence degrees corresponding to the second device pose information while outputting the second device pose information acquired by the navigation device, wherein the second confidence degrees can be used for representing the confidence degrees of the corresponding second device pose information, and the higher the second confidence degrees are, the higher the confidence degrees of the corresponding second device pose information are represented, namely the higher the accuracy is. For example, the second confidence level may be determined according to parameters such as the number of satellites, longitude and latitude errors, and position accuracy when the navigation device collects the pose information of the corresponding second device.
In S2022, the first device pose information is determined according to the relative positional relationship between the navigation device and the point cloud acquisition device, and the second device pose information.
The relative positional relationship between the navigation device and the point cloud acquisition device may be calibrated in advance. According to this relative positional relationship and the second device pose information of the navigation device, the first device pose information of the point cloud acquisition device at every first preset duration can be obtained by interpolation. For example, if the frequency of the navigation device is 100 Hz and the frequency of the point cloud acquisition device is 10 Hz, the first preset duration is 0.1 s; that is, the first device pose information of the low-frequency point cloud acquisition device can be determined from the second device pose information of the high-frequency navigation device. It should be noted that the examples of the first preset duration and the second preset duration are merely illustrative and do not limit the embodiments of the disclosure.
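As a minimal sketch of this interpolation step, the snippet below linearly interpolates the high-rate navigation poses at each point cloud timestamp and then applies a pre-calibrated offset. The function names, the translation-only (x, y, z) poses, and the constant offset are simplifying assumptions for illustration; the actual method would also interpolate orientation.

```python
from bisect import bisect_left

def interpolate_pose(nav_poses, t):
    """Linearly interpolate an (x, y, z) pose at timestamp t from a
    time-sorted list of (timestamp, (x, y, z)) navigation-device samples."""
    times = [s[0] for s in nav_poses]
    i = bisect_left(times, t)
    if i == 0:
        return nav_poses[0][1]          # before first sample: clamp
    if i == len(nav_poses):
        return nav_poses[-1][1]         # after last sample: clamp
    (t0, p0), (t1, p1) = nav_poses[i - 1], nav_poses[i]
    w = (t - t0) / (t1 - t0)
    return tuple(a + w * (b - a) for a, b in zip(p0, p1))

def lidar_poses(nav_poses, lidar_times, offset=(0.0, 0.0, 0.0)):
    """First-device (point cloud acquisition device) pose at each point cloud
    timestamp: the interpolated navigation pose shifted by the pre-calibrated
    relative offset between the two devices."""
    return [tuple(c + o for c, o in zip(interpolate_pose(nav_poses, t), offset))
            for t in lidar_times]
```

For a navigation device at 100 Hz and an acquisition device at 10 Hz, `lidar_times` would simply be every tenth navigation timestamp.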
In S2023, for each first device pose information, determining target second device pose information closest to the timestamp of the first device pose information in the second device pose information, and taking the second confidence level of the target second device pose information as the first confidence level corresponding to the first device pose information.
Among the second device pose information, the second confidence of the target second device pose information whose timestamp is closest to that of the first device pose information best characterizes the reliability of that first device pose information. Therefore, the second confidence of the target second device pose information can be used as the first confidence corresponding to the first device pose information, so that the confidence of the pose information of the point cloud acquisition device can be accurately determined from the confidence of the pose information of the navigation device.
In S203, it is determined, according to the first confidences, whether the signal strength of the target area meets the preset high-strength requirement. If not, S204, S205 and S209 are performed; if so, S206 to S209 are performed.
An exemplary embodiment of this step S203 may be: determining that the signal strength of the target area meets the preset high-strength requirement if the proportion of first confidences higher than a preset confidence threshold is greater than a preset proportion threshold, or if the number of first confidences higher than the preset confidence threshold is greater than a preset number threshold.
A first confidence higher than the preset confidence threshold indicates that the corresponding first device pose information is highly reliable. If the proportion of such first confidences is greater than the preset proportion threshold, or their number is greater than the preset number threshold, then the first device pose information of the point cloud acquisition device during point cloud acquisition in the target area is, on the whole, highly reliable. Since the first device pose information is determined from the second device pose information of the navigation device, this in turn indicates that the pose information collected by the navigation device is accurate, i.e., the signal strength of the target area is good and meets the preset high-strength requirement.
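This decision rule can be sketched as follows; the threshold values are hypothetical defaults chosen only for illustration.

```python
def signal_meets_high_strength(confidences, conf_threshold=0.8,
                               ratio_threshold=0.9, count_threshold=None):
    """True when the proportion (or, if count_threshold is given, the count)
    of first confidences above conf_threshold exceeds the preset threshold."""
    high = sum(1 for c in confidences if c > conf_threshold)
    if count_threshold is not None:
        return high > count_threshold
    return high / len(confidences) > ratio_threshold
```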
In S204, for each frame of point cloud, every other frame in the multi-frame point clouds that has the same points as this frame forms a point cloud matching pair with this frame.
When the signal strength of the target area does not meet the preset high-strength requirement, the signal strength is not high enough. To avoid the weak signal strength degrading map quality, more point cloud matching pairs can be constructed. In this case, the embodiment of determining point cloud matching pairs from the multi-frame point clouds in S103 may be as in S204: any other frame that has the same points as a given frame forms a point cloud matching pair with that frame. The number of matching pairs formed in this way is larger, which improves the quality of the finally generated map.
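A minimal sketch of this exhaustive pairing rule, under the illustrative assumption that each frame is represented by a set of hashable point identifiers:

```python
def build_matching_pairs(frames):
    """Pair every two frames that share at least one point.
    frames: list of sets of point identifiers, one set per frame."""
    pairs = []
    for i in range(len(frames)):
        for j in range(i + 1, len(frames)):
            if frames[i] & frames[j]:  # the two frames share at least one point
                pairs.append((i, j))
    return pairs
```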
In S205, for each point cloud matching pair, respective feature points in the two frames of point clouds forming the pair are determined, homonymous points in the two frames are determined from the feature points, and the current pose information of each of the two frames is iteratively adjusted with the goal of minimizing the distance information between the homonymous points, until the distance information between the homonymous points is smaller than a preset distance threshold, obtaining the respective target pose information of the two frames of point clouds.
In the case that the signal strength of the determined result representing the target area does not meet the preset high-strength requirement, in S103, according to the point cloud matching pair, an embodiment of determining the respective target pose information of the multi-frame point cloud may be as in S205.
The feature points may be key points; for example, feature points in a point cloud may be extracted by a deep learning (Deep Registration) approach. Since a point cloud contains a large number of points, first determining the respective feature points in the two frames and then determining the homonymous points among the feature points reduces a certain amount of calculation.
The homonymous points refer to the same points; one or more groups of homonymous points may exist in the two frames of point clouds. The distance information between homonymous points may be the Euclidean distance between them, or the distance between them in a feature space; for example, if the homonymous points are represented as vector points, the distance information between them can be converted into the distance between the corresponding vectors. Since homonymous points are the same points, the distance between them should be as small as possible, so in the disclosure the current pose information of each of the two frames of point clouds is iteratively adjusted with the goal of minimizing the distance information between the homonymous points. The iterative adjustment may, for example, adopt a gradient descent method, where the initial pose information of a frame may be the device pose information of the point cloud acquisition device when that frame was acquired. If there are multiple groups of homonymous points in the two frames, the sum of the distance information over all groups can be minimized. The condition for exiting the iterative adjustment may be that the distance information between the homonymous points is smaller than the preset distance threshold; once this holds, the distance between the homonymous points meets the requirement, the iterative adjustment can be exited, and the respective target pose information of the two frames of point clouds is obtained.
The objective function E in the iterative adjustment process can be shown in the following formula (1):

E = Σ_{(m,n)∈FPS} Σ_{(a,b)∈S_k} W_ab (p_a^m − p_b^n)^T (p_a^m − p_b^n)    (1)

wherein m represents the m-th frame point cloud, n represents the n-th frame point cloud, the m-th frame point cloud and the n-th frame point cloud form a point cloud matching pair, FPS represents the set of point cloud matching pairs, p_a^m represents the a-th point in the m-th frame point cloud, p_b^n represents the b-th point in the n-th frame point cloud, the a-th point and the b-th point are a group of homonymous points, W_ab represents the weight of the group of homonymous points, (p_a^m − p_b^n)^T (p_a^m − p_b^n) represents the distance information between the a-th point and the b-th point, S_k represents the set of homonymous points in the m-th frame point cloud and the n-th frame point cloud, and T denotes the transpose.
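As an illustrative sketch of minimizing an objective of the form of formula (1) by gradient descent, the snippet below optimizes only a single 3-D translation applied to the first frame of each homonymous pair; the actual method adjusts the full pose of both frames, and the learning rate and thresholds here are assumed values.

```python
def refine_translation(homonymous_pairs, lr=0.5, dist_threshold=1e-9, max_iter=200):
    """Gradient descent on E = sum of w * ||(a + t) - b||^2 over homonymous
    point pairs, solving for a single translation t of the first frame.
    homonymous_pairs: list of (point_a, point_b, weight) with 3-D points."""
    t = [0.0, 0.0, 0.0]
    for _ in range(max_iter):
        grad = [0.0, 0.0, 0.0]
        energy = 0.0
        for a, b, w in homonymous_pairs:
            d = [a[k] + t[k] - b[k] for k in range(3)]  # residual of this pair
            energy += w * sum(x * x for x in d)
            for k in range(3):
                grad[k] += 2.0 * w * d[k]
        if energy < dist_threshold:  # exit: distances small enough
            break
        for k in range(3):
            t[k] -= lr * grad[k] / len(homonymous_pairs)
    return t
```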
In S206, for each frame of point cloud, the other frame in the multi-frame point clouds that has the same points as this frame and whose timestamp is farthest from the timestamp of this frame forms a point cloud matching pair with this frame.
When the signal strength of the target area meets the preset high-strength requirement, the signal strength is relatively high, and to improve calculation efficiency the number of point cloud matching pairs constructed in this case can be relatively small; when constructing the matching pairs, the acquisition time of the point clouds can be considered in addition to whether the same points exist. In this case, the embodiment of determining point cloud matching pairs from the multi-frame point clouds in S103 may be as in S206: for each frame of point cloud, the other frame that has the same points as this frame and whose timestamp is farthest from that of this frame forms a point cloud matching pair with this frame. The timestamp of a frame refers to the time at which the point cloud acquisition device acquired that frame, and each frame has a corresponding timestamp. The number of matching pairs constructed in this way is relatively small, so the map generation efficiency can be improved.
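A minimal sketch of this farthest-timestamp pairing rule, under the same illustrative assumption that each frame carries a set of point identifiers:

```python
def build_far_pairs(frames):
    """For each frame, pair it with the frame that shares at least one point
    and whose timestamp is farthest away. frames: list of (timestamp, id_set)."""
    pairs = set()
    for i, (ti, pts_i) in enumerate(frames):
        best = None
        for j, (tj, pts_j) in enumerate(frames):
            if j != i and pts_i & pts_j:
                if best is None or abs(tj - ti) > abs(frames[best][0] - ti):
                    best = j
        if best is not None:
            pairs.add(tuple(sorted((i, best))))  # deduplicate (i, j) / (j, i)
    return sorted(pairs)
```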
In S207, for each point cloud matching pair, respective spliced point clouds of two frames of point clouds constituting the point cloud matching pair are determined, and relative pose information between the two frames of point clouds is determined according to the respective spliced point clouds of the two frames of point clouds.
In the case where the signal strength of the target area represented by the determination result meets the preset high-strength requirement, in S103, according to the point cloud matching pair, the embodiment of determining the respective target pose information of the multi-frame point cloud may be as in S207 and S208.
The spliced point cloud of each frame is formed by splicing the point clouds acquired by the point cloud acquisition device within a designated area, where the designated area may be an area around the position at which the point cloud acquisition device acquired that frame. For example, the position of the point cloud acquisition device when acquiring the frame, together with the area 5 m ahead of and 5 m behind that position, may be taken as the designated area. It should be noted that this value is only an example and does not limit the range of the designated area.
Since a spliced point cloud is formed by splicing multiple frames of point clouds, determining the relative pose information between two frames from their respective spliced point clouds expands the scene covered by each frame, so that the field of view corresponding to each frame is wider and the determined relative pose information between the two frames is more accurate. For example, GICP (Generalized Iterative Closest Point) may be employed to determine the relative pose information between two frames from their spliced point clouds.
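The construction of a frame's spliced point cloud can be sketched as below, collecting the points of all frames acquired within a radius of the target frame's acquisition position; the 5 m radius and the 2-D positions are illustrative assumptions.

```python
def spliced_cloud(frame_points, positions, idx, radius=5.0):
    """Concatenate the points of every frame whose acquisition position lies
    within `radius` metres of frame `idx`'s acquisition position.
    frame_points: list of point lists; positions: list of (x, y) positions."""
    cx, cy = positions[idx]
    spliced = []
    for (px, py), pts in zip(positions, frame_points):
        if (px - cx) ** 2 + (py - cy) ** 2 <= radius ** 2:
            spliced.extend(pts)
    return spliced
```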
In S208, the respective target pose information of the multi-frame point clouds is determined according to the relative pose information corresponding to each point cloud matching pair.
After obtaining the relative pose information corresponding to each point cloud matching pair, Pose Graph optimization may, for example, be used to obtain the respective target pose information of the multi-frame point clouds, where the objective function may be as shown in the following formula (2):

min_x Σ_{(i,j)} c_{i,j}^T c_{i,j},  with  c_{i,j} = T_{i,j} x_i − x_j    (2)

wherein i represents the i-th frame point cloud, j represents the j-th frame point cloud, the i-th frame point cloud and the j-th frame point cloud form a point cloud matching pair, x_i represents the pose information of the i-th frame point cloud, x_j represents the pose information of the j-th frame point cloud, T_{i,j} represents the rotation-translation matrix of the i-th frame point cloud relative to the j-th frame point cloud, and c_{i,j} represents the difference information between the pose information of the i-th frame point cloud after the rotation-translation transformation and the pose information of the j-th frame point cloud.
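The pose graph residual of formula (2) can be evaluated as in the sketch below, restricted for illustration to translation-only 2-D poses, where the measured relative transform T_{i,j} reduces to an offset (dx, dy):

```python
def pose_graph_objective(poses, edges):
    """Sum of squared residuals c_ij = (x_i + T_ij) - x_j over all edges.
    poses: list of (x, y); edges: list of (i, j, (dx, dy)) measurements."""
    energy = 0.0
    for i, j, (dx, dy) in edges:
        cx = poses[i][0] + dx - poses[j][0]  # residual in x
        cy = poses[i][1] + dy - poses[j][1]  # residual in y
        energy += cx * cx + cy * cy
    return energy
```

A pose graph solver iteratively adjusts `poses` to drive this objective toward zero.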
In S209, a map of the target area is generated based on the target pose information of each of the multi-frame point clouds. The embodiment of this step S209 may refer to S104.
Through the above technical scheme, the manner of determining the point cloud matching pairs differs depending on whether the signal strength of the target area meets the preset high-strength requirement, and so does the manner of determining the respective target pose information of the multi-frame point clouds. When the signal strength of the target area meets the preset high-strength requirement, the number of constructed matching pairs can be relatively small; when it does not, the number of constructed matching pairs can be relatively large in order to improve map accuracy. Therefore, constructing point cloud matching pairs in this way can improve both the efficiency and the accuracy of map generation.
In the following, a map generated using an embodiment of the related art is compared with a map generated using an embodiment of the present disclosure. Fig. 4a and Fig. 4b are schematic diagrams of maps generated using embodiments of the related art; as shown in Fig. 4a and Fig. 4b, there is a clear ghosting phenomenon on the lane lines in both maps. Fig. 4c is a schematic diagram of a map generated using an embodiment of the present disclosure; as shown in Fig. 4c, in the regions marked by the rectangular frames there is no ghosting phenomenon on the lane lines and no blurring phenomenon on the objects, so the map quality and accuracy are higher.
Based on the same inventive concept, the present disclosure also provides a map generating apparatus, fig. 5 is a block diagram of a map generating apparatus according to an exemplary embodiment, and as shown in fig. 5, the apparatus 500 may include:
an acquisition module 501 configured to acquire a multi-frame point cloud acquired by a point cloud acquisition device in a target area;
A first determining module 502 configured to determine whether the signal strength of the target area meets a preset high strength requirement;
A second determining module 503, configured to determine a point cloud matching pair from the multi-frame point clouds according to a determination result of whether the signal intensity of the target area meets a preset high intensity requirement, and determine respective target pose information of the multi-frame point clouds according to the point cloud matching pair, where the two frame point clouds forming the point cloud matching pair have the same point;
A generating module 504 is configured to generate a map of the target area according to the respective target pose information of the multi-frame point clouds.
Through the above technical scheme, the multi-frame point clouds acquired by the point cloud acquisition device in the target area are obtained, whether the signal strength of the target area meets the preset high-strength requirement is determined, point cloud matching pairs are determined from the multi-frame point clouds according to the determination result, the respective target pose information of the multi-frame point clouds is determined according to the point cloud matching pairs, and the map of the target area is generated according to the respective target pose information of the multi-frame point clouds. In this way, by constructing point cloud matching pairs and determining the respective target pose information of the multi-frame point clouds from them, the pose information of the point cloud acquisition device does not need to be obtained directly from the pose information of the global positioning system and the inertial measurement unit. This reduces the dependence on the global positioning system, so that a map with higher precision can be generated even when the signal of the global positioning system is weak. In addition, two frames of point clouds can form a point cloud matching pair as long as they contain the same points, without matching each point to its nearest point as in the related art, so the amount of calculation is relatively small. Since the point cloud matching pairs are determined from the multi-frame point clouds according to whether the signal strength of the target area meets the preset high-strength requirement, the number of constructed matching pairs is adapted to the signal strength of the target area, and constructing point cloud matching pairs in this way can improve both the efficiency and the accuracy of map generation.
Optionally, when the determination result indicates that the signal strength of the target area meets the preset high-strength requirement, the second determining module 503 is configured to determine the point cloud matching pairs from the multi-frame point clouds by: for each frame of point cloud, forming a point cloud matching pair between this frame and the other frame in the multi-frame point clouds that has the same points as this frame and whose timestamp is farthest from that of this frame.
Optionally, in a case where the determining result indicates that the signal strength of the target area meets a preset high strength requirement, the second determining module 503 includes:
The first determining submodule is configured to determine, for each point cloud matching pair, respective spliced point clouds of two frames of point clouds forming the point cloud matching pair, and determine relative pose information between the two frames of point clouds according to the respective spliced point clouds of the two frames of point clouds, wherein the spliced point clouds of each frame of point clouds are formed by splicing point clouds acquired by the point cloud acquisition equipment in a designated area, and the designated area is an area around the position where the point cloud acquisition equipment acquires the frame of point clouds;
And the second determining submodule is configured to determine the target pose information of each multi-frame point cloud according to the relative pose information corresponding to each point cloud matching.
Optionally, when the determination result indicates that the signal strength of the target area does not meet the preset high-strength requirement, the second determining module 503 is configured to determine the point cloud matching pairs from the multi-frame point clouds by: for each frame of point cloud, forming a point cloud matching pair between this frame and every other frame in the multi-frame point clouds that has the same points as this frame.
Optionally, in a case where the signal strength of the target area represented by the determination result does not meet a preset high strength requirement, the second determining module 503 includes:
And the third determining submodule is configured to: for each point cloud matching pair, determine respective feature points in the two frames of point clouds forming the pair, determine homonymous points in the two frames of point clouds from the feature points, and iteratively adjust the current pose information of each of the two frames with the goal of minimizing the distance information between the homonymous points, until the distance information between the homonymous points is smaller than a preset distance threshold, obtaining the respective target pose information of the two frames of point clouds.
Optionally, the first determining module 502 includes:
A fourth determining submodule, configured to determine first equipment pose information of each first preset duration and first confidence degrees respectively corresponding to the first equipment pose information in the process that the point cloud acquisition equipment acquires the multi-frame point cloud in the target area;
and a fifth determining submodule configured to determine whether the signal strength of the target area meets a preset high-strength requirement according to the first confidence.
Optionally, the fourth determining sub-module includes:
the acquisition sub-module is configured to acquire second equipment pose information of the navigation equipment acquired by the navigation equipment in the target area every second preset time length and second confidence degrees corresponding to the second equipment pose information respectively;
A sixth determination submodule configured to determine the first device pose information according to the relative positional relationship between the navigation device and the point cloud acquisition device and the second device pose information;
a seventh determining submodule configured to determine, for each piece of first equipment pose information, target second equipment pose information closest to a timestamp of the first equipment pose information in the second equipment pose information, and take a second confidence level of the target second equipment pose information as a first confidence level corresponding to the first equipment pose information.
Optionally, the fifth determining submodule is configured to determine that the signal strength of the target area meets the preset high-strength requirement if the proportion of first confidences higher than a preset confidence threshold is greater than a preset proportion threshold, or if the number of first confidences higher than the preset confidence threshold is greater than a preset number threshold.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method, and will not be repeated here.
Fig. 6 is a block diagram of an electronic device 700, according to an example embodiment. As shown in fig. 6, the electronic device 700 may include a processor 701, a memory 702. The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700 to perform all or part of the steps in the map generation method described above. The memory 702 is used to store various types of data to support operation on the electronic device 700, which may include, for example, instructions for any application or method operating on the electronic device 700, as well as application-related data, such as contact data, messages sent and received, pictures, audio, video, and so forth. The memory 702 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 703 can include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signals may be further stored in the memory 702 or transmitted through the communication component 705. The audio component further comprises at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, such as a keyboard, a mouse, or buttons; these buttons may be virtual buttons or physical buttons. The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices.
The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or others, or a combination of one or more of them, which is not limited herein. Accordingly, the communication component 705 may comprise a Wi-Fi module, a Bluetooth module, an NFC module, etc.
In an exemplary embodiment, the electronic device 700 may be implemented by one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for performing the map generation method described above.
In another exemplary embodiment, a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the map generation method described above is also provided. For example, the computer readable storage medium may be the memory 702 including program instructions described above, which are executable by the processor 701 of the electronic device 700 to perform the map generation method described above.
Fig. 7 is a block diagram illustrating an electronic device 1900 according to an example embodiment. For example, electronic device 1900 may be provided as a server. Referring to fig. 7, the electronic device 1900 includes a processor 1922, which may be one or more in number, and a memory 1932 for storing computer programs executable by the processor 1922. The computer program stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, the processor 1922 may be configured to execute the computer program to perform the map generation method described above.
In addition, the electronic device 1900 may further include a power component 1926 and a communication component 1950. The power component 1926 may be configured to perform power management of the electronic device 1900, and the communication component 1950 may be configured to enable communication of the electronic device 1900, e.g., wired or wireless communication. In addition, the electronic device 1900 may also include an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, or the like.
In another exemplary embodiment, a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the map generation method described above is also provided. For example, the computer readable storage medium may be the memory 1932 described above including program instructions that are executable by the processor 1922 of the electronic device 1900 to perform the map generation method described above.
In another exemplary embodiment, a computer program product is also provided, comprising a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-described map generation method when executed by the programmable apparatus.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the present disclosure is not limited to the specific details of the embodiments described above, and various simple modifications may be made to the technical solutions of the present disclosure within the scope of the technical concept of the present disclosure, and all the simple modifications belong to the protection scope of the present disclosure.
In addition, the specific features described in the foregoing embodiments may be combined in any suitable manner, and in order to avoid unnecessary repetition, the present disclosure does not further describe various possible combinations.
Moreover, any combination between the various embodiments of the present disclosure is possible as long as it does not depart from the spirit of the present disclosure, which should also be construed as the disclosure of the present disclosure.

Claims (9)

1. A map generation method, the method comprising:
acquiring multi-frame point clouds acquired by point cloud acquisition equipment in a target area;
Determining whether the signal intensity of the target area meets a preset high-intensity requirement;
Determining a point cloud matching pair from the multi-frame point clouds according to a determination result of whether the signal intensity of the target area meets a preset high-intensity requirement or not, and determining respective target pose information of the multi-frame point clouds according to the point cloud matching pair, wherein the same points exist between two frames of point clouds forming the point cloud matching pair;
wherein, in the case that the determination result indicates that the signal intensity of the target area does not meet the preset high-intensity requirement, determining the target pose information of each of the multi-frame point clouds according to the point cloud matching pair comprises:
for each point cloud matching pair, determining feature points in each of the two frames of point clouds forming the pair, and determining homonymous points (corresponding points) of the two frames from among those feature points; and iteratively adjusting the current pose information of each of the two frames of point clouds with the goal of minimizing the distance information between the homonymous points, until the distance information between the homonymous points is smaller than a preset distance threshold, thereby obtaining the target pose information of each of the two frames of point clouds; and
generating a map of the target area according to the target pose information of each of the multi-frame point clouds.
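The iterative pose adjustment in claim 1 is essentially point-to-point rigid registration in the style of ICP. A minimal sketch, assuming the homonymous points of the two frames are already matched by index order; the function and parameter names are illustrative, not from the patent:

```python
import numpy as np

def icp_align(source, target, dist_threshold=1e-3, max_iters=50):
    """Iteratively refine the pose of `source` (N x 3) so that the distances
    between corresponding (homonymous) points of `target` (N x 3) fall
    below `dist_threshold`. Returns the accumulated rotation and translation."""
    R_total, t_total = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(max_iters):
        # Closed-form rigid alignment of the current correspondences (Kabsch/SVD).
        c_src, c_tgt = src.mean(axis=0), target.mean(axis=0)
        H = (src - c_src).T @ (target - c_tgt)
        U, _, Vt = np.linalg.svd(H)
        # Guard against a reflection in the SVD solution.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = c_tgt - R @ c_src
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        # Stop once the mean homonymous-point distance is below the threshold.
        if np.linalg.norm(src - target, axis=1).mean() < dist_threshold:
            break
    return R_total, t_total
```

With exact correspondences the Kabsch step recovers the relative pose in one iteration; with noisy feature matches the loop refines it until the distance criterion of claim 1 is met.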
2. The method according to claim 1, wherein, in the case that the determination result indicates that the signal intensity of the target area meets the preset high-intensity requirement, determining a point cloud matching pair from the multi-frame point clouds comprises:
for each frame of point cloud, forming a point cloud matching pair between that frame and the other frame, among the multi-frame point clouds, which shares common points with it and whose timestamp is farthest from its own.
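The pairing strategy of claim 2 can be sketched as follows; `frames` (frame id to timestamp) and `overlaps` (frame id to the set of frames sharing common points with it) are hypothetical inputs standing in for the patent's data:

```python
def build_matching_pairs(frames, overlaps):
    """For each frame, pair it with the overlapping frame whose timestamp is
    farthest from its own (the claim-2 strategy for high-signal areas).
    frames: dict frame_id -> timestamp; overlaps: dict frame_id -> set of ids."""
    pairs = set()
    for fid, ts in frames.items():
        candidates = overlaps.get(fid, set())
        if not candidates:
            continue
        # Pick the overlapping frame with the largest timestamp distance.
        partner = max(candidates, key=lambda other: abs(frames[other] - ts))
        pairs.add(tuple(sorted((fid, partner))))
    return pairs
```

Preferring the farthest overlapping frame maximizes the baseline of each match, which helps constrain accumulated drift when pose priors are already reliable.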
3. The method according to claim 1, wherein, in the case that the determination result indicates that the signal intensity of the target area meets the preset high-intensity requirement, determining the target pose information of each of the multi-frame point clouds according to the point cloud matching pair comprises:
for each point cloud matching pair, determining a spliced point cloud for each of the two frames of point clouds forming the pair, and determining relative pose information between the two frames according to their respective spliced point clouds, wherein the spliced point cloud of a frame is stitched together from point clouds collected by the point cloud acquisition device in a designated area, the designated area being the area around the position at which the device acquired that frame; and
determining the target pose information of each of the multi-frame point clouds according to the relative pose information corresponding to each point cloud matching pair.
4. The method according to claim 1, wherein, in the case that the determination result indicates that the signal intensity of the target area does not meet the preset high-intensity requirement, determining a point cloud matching pair from the multi-frame point clouds comprises:
for each frame of point cloud, forming a point cloud matching pair between that frame and every other frame among the multi-frame point clouds that shares common points with it.
5. The method of claim 1, wherein determining whether the signal intensity of the target area meets a preset high-intensity requirement comprises:
determining, at intervals of a first preset duration during the acquisition of the multi-frame point clouds in the target area, first device pose information of the point cloud acquisition device and a first confidence corresponding to each piece of first device pose information; and
determining, according to the first confidences, whether the signal intensity of the target area meets the preset high-intensity requirement.
6. The method of claim 5, wherein determining, at intervals of the first preset duration during the acquisition of the multi-frame point clouds in the target area, the first device pose information of the point cloud acquisition device and the first confidences respectively corresponding to the first device pose information comprises:
acquiring second device pose information of a navigation device, collected by the navigation device in the target area at intervals of a second preset duration, and a second confidence corresponding to each piece of second device pose information;
determining the first device pose information according to the second device pose information and the relative positional relationship between the navigation device and the point cloud acquisition device; and
for each piece of first device pose information, determining the target second device pose information whose timestamp is closest to that of the first device pose information, and taking the second confidence of that target second device pose information as the first confidence corresponding to the first device pose information.
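The pose transfer and timestamp matching of claim 6 can be sketched as below; the extrinsic `extr_R`/`extr_t` stands in for the relative positional relationship between the navigation device and the point cloud acquisition device, and all names are illustrative:

```python
import numpy as np

def lidar_pose_from_nav(nav_R, nav_t, extr_R, extr_t):
    """Compose the navigation device's world pose (nav_R, nav_t) with the
    fixed nav->lidar extrinsic to obtain the point cloud device's pose."""
    return nav_R @ extr_R, nav_R @ extr_t + nav_t

def nearest_confidence(query_ts, nav_stamps, nav_confidences):
    """Assign a first-device pose the confidence of the second-device pose
    whose timestamp is closest to it (the last step of claim 6)."""
    i = min(range(len(nav_stamps)), key=lambda k: abs(nav_stamps[k] - query_ts))
    return nav_confidences[i]
```

Because the two devices are rigidly mounted, the extrinsic is constant, so the lidar pose inherits the navigation pose's reliability — which is why reusing the nearest-timestamp confidence is a reasonable proxy.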
7. The method of claim 5, wherein determining, according to the first confidences, whether the signal intensity of the target area meets the preset high-intensity requirement comprises:
determining that the signal intensity of the target area meets the preset high-intensity requirement if the proportion of first confidences higher than a preset confidence threshold is greater than a preset proportion threshold, or if the number of first confidences higher than the preset confidence threshold is greater than a preset number threshold.
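The decision rule in claim 7 reduces to a proportion-or-count test over the first confidences. A sketch with illustrative threshold values (the patent does not specify concrete numbers):

```python
def meets_high_strength(confidences, conf_threshold=0.8,
                        ratio_threshold=0.9, count_threshold=100):
    """Claim-7 style test: the area counts as high signal intensity if the
    proportion (or absolute count) of high-confidence poses is large enough.
    All threshold values here are illustrative assumptions."""
    if not confidences:
        return False
    high = [c for c in confidences if c > conf_threshold]
    ratio_ok = len(high) / len(confidences) > ratio_threshold
    count_ok = len(high) > count_threshold
    return ratio_ok or count_ok
```

The two branches cover complementary cases: the proportion test handles short collection runs, while the absolute count test lets a long run qualify even if a minority of its poses are low-confidence.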
8. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1-7.
9. An electronic device, comprising:
a memory having a computer program stored thereon; and
a processor configured to execute the computer program in the memory to implement the steps of the method of any one of claims 1-7.
CN202111130227.4A 2021-09-26 2021-09-26 Map generation method, device, medium and electronic equipment Active CN113920258B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111130227.4A CN113920258B (en) 2021-09-26 2021-09-26 Map generation method, device, medium and electronic equipment
PCT/CN2022/076191 WO2023045224A1 (en) 2021-09-26 2022-02-14 Map generation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111130227.4A CN113920258B (en) 2021-09-26 2021-09-26 Map generation method, device, medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113920258A CN113920258A (en) 2022-01-11
CN113920258B true CN113920258B (en) 2025-03-04

Family

ID=79236271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111130227.4A Active CN113920258B (en) 2021-09-26 2021-09-26 Map generation method, device, medium and electronic equipment

Country Status (2)

Country Link
CN (1) CN113920258B (en)
WO (1) WO2023045224A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113920258B (en) * 2021-09-26 2025-03-04 北京三快在线科技有限公司 Map generation method, device, medium and electronic equipment
CN115079202B (en) * 2022-06-16 2024-08-27 智道网联科技(北京)有限公司 Laser radar image construction method and device, electronic equipment and storage medium
CN116736327B (en) * 2023-08-10 2023-10-24 长沙智能驾驶研究院有限公司 Positioning data optimization method, device, electronic equipment and readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111968229A (en) * 2020-06-28 2020-11-20 北京百度网讯科技有限公司 High-precision map making method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230379B (en) * 2017-12-29 2020-12-04 百度在线网络技术(北京)有限公司 Method and device for fusing point cloud data
CN108921947B (en) * 2018-07-23 2022-06-21 百度在线网络技术(北京)有限公司 Method, device, equipment, storage medium and acquisition entity for generating electronic map
CN111060948B (en) * 2019-12-14 2021-10-29 深圳市优必选科技股份有限公司 Positioning method, positioning device, helmet and computer readable storage medium
CN113424232B (en) * 2019-12-27 2024-03-15 深圳市大疆创新科技有限公司 Three-dimensional point cloud map construction method, system and equipment
CN111912417B (en) * 2020-07-10 2022-08-02 上海商汤临港智能科技有限公司 Map construction method, map construction device, map construction equipment and storage medium
CN113920258B (en) * 2021-09-26 2025-03-04 北京三快在线科技有限公司 Map generation method, device, medium and electronic equipment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111968229A (en) * 2020-06-28 2020-11-20 北京百度网讯科技有限公司 High-precision map making method and device

Also Published As

Publication number Publication date
CN113920258A (en) 2022-01-11
WO2023045224A1 (en) 2023-03-30

Similar Documents

Publication Publication Date Title
CN113920258B (en) Map generation method, device, medium and electronic equipment
CN108717710B (en) Positioning method, device and system in indoor environment
JP6812404B2 (en) Methods, devices, computer-readable storage media, and computer programs for fusing point cloud data
CN110246182B (en) Vision-based global map positioning method and device, storage medium and equipment
CN112639502B (en) Robot pose estimation
KR102463176B1 (en) Device and method to estimate position
CN109613543B (en) Method and device for correcting laser point cloud data, storage medium and electronic equipment
US8526677B1 (en) Stereoscopic camera with haptic feedback for object and location detection
US11754701B2 (en) Electronic device for camera and radar sensor fusion-based three-dimensional object detection and operating method thereof
CN114111775B (en) Multi-sensor fusion positioning method and device, storage medium and electronic equipment
CN113030990B (en) Fusion ranging method, device, ranging equipment and medium for vehicle
WO2020041668A1 (en) Signals of opportunity aided inertial navigation
CN112556696B (en) Object positioning method and device, computer equipment and storage medium
US20140358434A1 (en) Peer-Assisted Dead Reckoning
US20140286537A1 (en) Measurement device, measurement method, and computer program product
JP2020507857A (en) Agent navigation using visual input
KR20130036145A (en) A moving information determination apparatus, a receiver, and a method thereby
CN112556699B (en) Navigation positioning method and device, electronic equipment and readable storage medium
Han et al. Precise positioning with machine learning based Kalman filter using GNSS/IMU measurements from android smartphone
CN111882494B (en) Pose graph processing method and device, computer equipment and storage medium
JP2019174191A (en) Data structure, information transmitting device, control method, program, and storage medium
Li et al. Low-cost sensors aided vehicular position prediction with partial least squares regression during GPS outage
CN118731998A (en) Positioning method, device, storage medium, electronic device and vehicle
CN112330712B (en) Motion compensation method and device, electronic equipment and storage medium for radar images
CN112035583B (en) Positioning updating method, device and system, mobile equipment control method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant