
CN117664101A - A lidar-based semantic slam mapping method for airport unmanned vehicles - Google Patents

A lidar-based semantic slam mapping method for airport unmanned vehicles Download PDF

Info

Publication number
CN117664101A
CN117664101A
Authority
CN
China
Prior art keywords
map
semantic
laser radar
slam
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311372644.9A
Other languages
Chinese (zh)
Inventor
张楠楠
魏义礼
张威
高宏
苗文亮
曲振东
闫宗吉
桂成一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weihai Guangtai Airport Equipment Co Ltd
Original Assignee
Weihai Guangtai Airport Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weihai Guangtai Airport Equipment Co Ltd filed Critical Weihai Guangtai Airport Equipment Co Ltd
Priority to CN202311372644.9A priority Critical patent/CN117664101A/en
Publication of CN117664101A publication Critical patent/CN117664101A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3833Creation or updating of map data characterised by the source of data
    • G01C21/3841Data obtained from two or more sources, e.g. probe vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30Map- or contour-matching
    • G01C21/32Structuring or formatting of map data

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Navigation (AREA)

Abstract

The invention relates to a lidar-based semantic SLAM mapping method for airport unmanned vehicles, which addresses the technical problem of designing a SLAM-based map construction method that provides a basis for functions such as autonomous surveying, navigation, obstacle avoidance, cruising, and monitoring. The method acquires a three-dimensional point cloud of the vehicle's surroundings through lidar and matches it against the camera's three-dimensional semantic point cloud, combines the high-precision grid map of laser SLAM with the rich environmental information of visual SLAM, and fuses the result with the laser SLAM grid map, thereby achieving semantic SLAM mapping of the airport environment.

Description

A lidar-based semantic SLAM mapping method for airport unmanned vehicles

Technical field

The present invention relates to the technical field of airport unmanned vehicles, and in particular to a lidar-based semantic SLAM mapping method for airport unmanned vehicles.

Background art

With the rapid development of China's civil aviation industry, air travel has become increasingly common, placing higher demands on civil aviation services. As technologies such as artificial intelligence and big data gain traction, airport unmanned vehicles have seen increasingly wide application. These vehicles are highly autonomous and adaptive, and can move independently in complex unstructured environments. Because an airport presents a fixed, known site environment, localization and planning of unmanned vehicles within that environment are a research focus for practitioners in this field.

Simultaneous localization and mapping (SLAM) is a method that can effectively estimate a robot's current position while constructing a model of its surrounding space. It is now widely used in emerging fields such as photogrammetry, robotics, autonomous driving, and AR.

Large hub airports present the problems of vast space, complex personnel flows, and many service items. The question is therefore how to design a SLAM-based map construction method that provides a basis for functions such as autonomous surveying, navigation, obstacle avoidance, cruising, and monitoring, and further enables FOD (foreign object debris) detection and emergency alerting within the airport, reducing labor costs while improving the airport's service efficiency and quality.

Summary of the invention

To solve the technical problem of designing a SLAM-based map construction method that provides a basis for functions such as autonomous surveying, navigation, obstacle avoidance, cruising, and monitoring, the present invention provides a lidar-based semantic SLAM mapping method for airport unmanned vehicles.

The present invention provides a lidar-based semantic SLAM mapping method for airport unmanned vehicles, comprising the following steps:

Step 1: build a three-dimensional map by scanning with two monocular cameras, match the annotated two-dimensional map against the three-dimensional map, and construct a three-dimensional semantic map;

Step 2: perform feature matching between the three-dimensional point cloud acquired by the lidar and the semantic point cloud output in Step 1; when a match reaches a preset threshold it is judged true and the semantic information is added to the lidar's three-dimensional point cloud, yielding a three-dimensional point cloud with semantic information;

Step 3: construct a grid map from the combined data of the lidar, the odometer, and GPS+IMU;

Step 4: fuse the grid map with the three-dimensional semantic map output in Step 2.

Preferably, in Step 3:

data fusion is performed with the LIO-SAM algorithm. First, the machine's position and attitude are determined via GPS+IMU as a calibration reference; the target data detected by the lidar is then combined with the calibration data, coordinate-transformed, and uniformly mapped into the machine coordinate system, yielding coordinate values for the environment around the machine;
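The coordinate transformation described above can be sketched as follows. This is a minimal planar (2D) illustration, not the patent's implementation: the pose is reduced to a position and a yaw angle, and the lidar points are rotated and translated into the world frame.

```python
import numpy as np

def lidar_to_world(points_sensor, position, yaw):
    """Transform lidar points from the sensor frame into the world frame,
    given a GPS+IMU pose (2D position plus heading). Planar case for brevity."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s],
                  [s,  c]])               # rotation from sensor frame to world frame
    return points_sensor @ R.T + position  # rotate, then translate

# A point 1 m straight ahead of a vehicle at (10, 5) heading 90 deg (+y direction)
pts = np.array([[1.0, 0.0]])
world = lidar_to_world(pts, np.array([10.0, 5.0]), np.pi / 2)
# → [[10.0, 6.0]]
```

A full implementation would use the complete 3D rotation (e.g. a quaternion from the IMU) rather than a single yaw angle.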

by processing the inertial measurement unit and odometer data, the robot's pose in the world coordinate system is obtained. Combined with the range and angle of each laser scan point reported by the lidar, trigonometry yields the full set of laser scan points at the current moment, with each point's position recorded in the world coordinate system. The environment is divided into small cells, each representing a region of the map; then, based on the lidar data, each cell is judged occupied or free and expressed in the binary form of an occupancy grid map, where occupied cells are 1 and unoccupied cells are 0. In this way the lidar data is converted into a grid map containing the coordinate points of every obstacle.

Preferably, Step 1 proceeds as follows:

images are acquired by two monocular cameras, image segmentation is performed with the SA (Segment Anything) algorithm, multi-camera depth estimation is performed with the BM (block matching) algorithm, the positions of objects in the images are obtained, and a semantic point cloud map is generated.
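The patent names the BM algorithm without further detail; a minimal sum-of-absolute-differences (SAD) block matcher conveys the idea. For each block in the left image, the right image is searched along the same row, and the shift with the lowest SAD cost is the disparity, from which depth follows as Z = f·B/d. The focal length and baseline below are illustrative values, not from the patent.

```python
import numpy as np

def block_match_disparity(left, right, block=3, max_disp=8):
    """Minimal SAD block matching: for each block in the left image, search
    the right image along the same scanline and keep the shift with least SAD."""
    h, w = left.shape
    disp = np.zeros((h, w))
    r = block // 2
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            ref = left[y-r:y+r+1, x-r:x+r+1]
            costs = [np.abs(ref - right[y-r:y+r+1, x-d-r:x-d+r+1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic pair: the right image is the left image shifted 2 px (disparity = 2)
rng = np.random.default_rng(0)
left = rng.random((12, 24))
right = np.roll(left, -2, axis=1)
d = block_match_disparity(left, right)

# Depth from disparity: Z = f * B / d (focal length in px, baseline in m; assumed values)
f, B = 700.0, 0.12
z = f * B / max(d[6, 15], 1e-6)   # → 42.0 m for d = 2
```

Production systems would add cost aggregation, sub-pixel refinement, and left-right consistency checks, which this sketch omits.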

By fusing lidar, GPS, cameras, and related technologies, the method can realize autonomous surveying, navigation, obstacle avoidance, cruising, and monitoring, and can further provide FOD detection and emergency alerting within the airport, reducing labor costs while improving the airport's service efficiency and quality.

Panoptic segmentation is used to extract semantic information from the images; this semantic information is added to the grid map built by laser SLAM, producing a semantic map.

To address the problem that laser SLAM provides no semantic information, a semantic map construction method based on panoptic segmentation is used to build the semantic map. In this method, a three-dimensional map is first built by scanning with two monocular cameras; the existing two-dimensional map is then semantically annotated, and the annotated two-dimensional map is matched with the three-dimensional map to construct a three-dimensional semantic map. The method divides into three parts: semantic extraction, grid map construction, and semantic map construction. The airport unmanned vehicle carries vision sensors, lidar, an odometer, and an IMU. The semantic extraction part applies a panoptic segmentation algorithm to the fully stitched image to obtain the category of each target object and its position in the image. The grid map construction part uses data from the lidar, odometer, and inertial measurement unit; after preprocessing, it maps the environment around the vehicle to obtain a high-precision environmental grid map. The semantic map construction part combines the outputs of the other two parts: on the basis of the established environmental grid map, the concrete coordinates and depths of obstacles in the grid map are used to correct the corresponding obstacle depths output by the semantic extraction part, producing a high-precision semantic map containing object semantic information.

A beneficial effect of the present invention is that existing airport vehicles are retrofitted with cameras, lidar, an IMU, and related hardware. The camera is a monocular visible-light camera: inexpensive and rich in color and texture information, but in the airport environment it is easily affected by natural light, causing exposure problems in the final image. The lidar comprises solid-state and mechanical units. Because lidar is largely unaffected by lighting, lidar-camera fusion is adopted to provide the unmanned vehicle with prior information about its surroundings, used for subsequent mapping and localization.

The semantic SLAM mapping scheme for airport unmanned vehicles disclosed herein works as follows. Cameras, lidar, IMU, GPS, and an odometer are combined with a semantic parsing module. Cameras deployed on the vehicle recognize and segment environmental objects and perform two-dimensional semantic parsing; the depth of those objects is then estimated across multiple cameras. The lidar acquires the vehicle's three-dimensional point cloud, which is matched against the cameras' three-dimensional point cloud; the high-precision grid map of laser SLAM is then combined with the rich environmental information of visual SLAM and fused with the laser SLAM grid map, achieving semantic SLAM mapping of the airport environment.

Further features and aspects of the invention will become apparent from the following description of specific embodiments with reference to the accompanying drawings.

Description of the drawings

Figure 1 is a flow chart of the lidar-based semantic SLAM mapping method for airport unmanned vehicles;

Figure 2 compares edge detection by the Segment Anything algorithm and the Sim-T algorithm;

Figure 3 shows the mapping results of various algorithms run by the airport unmanned vehicle in the airport;

Figure 4 shows the relative translation error, in meters, of each algorithm in Figure 3 after returning to the starting point;

Figure 5 shows measured results of semantic mapping of an airport tow tractor, where panel (a) is the point cloud map and panel (b) the semantic map;

Figure 6 shows measured results of semantic mapping of an airport jet bridge, where panel (a) is the point cloud map and panel (b) the semantic map.

Detailed description of embodiments

The present invention is described in further detail below through specific embodiments, with reference to the accompanying drawings.

As shown in Figure 1, the lidar-based semantic SLAM mapping system for airport unmanned vehicles is as follows:

1. Camera-system semantic point cloud module

Multiple cameras are mounted on both sides above the front of the unmanned vehicle, on the same horizontal line with a fixed baseline. The multi-camera system consists of a left and a right camera, which must first be calibrated with conventional calibration techniques to obtain the camera intrinsics used for rectification. On fixed airport road sections, images are captured under ideal parameters, and the SA (Segment Anything) algorithm performs real-time panoptic segmentation and semantic extraction on them. In the multi-camera system, the depth map of the common region is estimated from the disparity between the left and right views, and the three-dimensional point cloud is reconstructed with a 3D reconstruction algorithm. The semantic information of the two-dimensional image is then fused into the three-dimensional point cloud to construct a semantic point cloud.

2. Lidar-camera fusion module

The lidar is mounted above the front of the unmanned vehicle. The three-dimensional point cloud acquired by the lidar is feature-matched against the semantic point cloud output by the camera system; when a match reaches a preset threshold it is judged true and the semantic information is added to the lidar's three-dimensional point cloud, which is then output with semantic labels.
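The patent does not specify which features are matched or how the threshold is applied. As one plausible reading, the label transfer can be sketched as a nearest-neighbor association with a distance threshold: a lidar point inherits a camera point's label only when the two are close enough, otherwise it stays unlabeled. The labels and threshold below are illustrative assumptions.

```python
import numpy as np

def transfer_labels(lidar_pts, cam_pts, cam_labels, max_dist=0.5):
    """For each lidar point, find the nearest camera point (brute force) and
    copy its semantic label only when the distance is within the preset
    threshold; otherwise leave the point unlabeled (-1)."""
    labels = np.full(len(lidar_pts), -1)
    for i, p in enumerate(lidar_pts):
        d2 = np.sum((cam_pts - p) ** 2, axis=1)   # squared distances to all camera points
        j = np.argmin(d2)
        if d2[j] <= max_dist ** 2:                # "judged true" only within the threshold
            labels[i] = cam_labels[j]
    return labels

cam_pts = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
cam_labels = np.array([1, 2])                     # e.g. 1 = tow tractor, 2 = jet bridge
lidar_pts = np.array([[0.1, 0.0, 0.0], [9.0, 0.0, 0.0]])
labels = transfer_labels(lidar_pts, cam_pts, cam_labels)
# → [1, -1]: the second lidar point is too far from any camera point
```

At realistic point-cloud sizes a k-d tree would replace the brute-force search.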

3. Grid map construction module

Grid map construction uses the combined data of the lidar, the odometer, and GPS+IMU, and comprises the following steps: data acquisition, multi-source data fusion, local map update, and global map update.

3.1 Data acquisition

The lidar uses the time-of-flight (ToF) method: from a given point it measures omnidirectionally over multiple angles to obtain the range and angle between each laser reflection point and the lidar. The resulting point cloud is downsampled with a voxel filter to reduce the computational load. The inertial measurement unit (IMU) combines an accelerometer, a gyroscope, and a magnetometer; by integrating acceleration it measures attitude, position, velocity, and acceleration relative to the world coordinate system, and combining it with real-time GPS positioning avoids drift errors caused by wheel slip and similar issues. The odometer estimates the vehicle's traveled distance from the rotation of its wheels.
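The voxel-filter downsampling mentioned above can be sketched as follows: points are bucketed by the voxel cell they fall into, and each occupied voxel is replaced by the centroid of its points. The voxel size is an illustrative choice.

```python
import numpy as np

def voxel_downsample(points, voxel=0.2):
    """Voxel-grid downsampling: bucket points by voxel index and keep the
    centroid of each occupied voxel, shrinking the cloud handed to SLAM."""
    idx = np.floor(points / voxel).astype(int)
    buckets = {}
    for key, p in zip(map(tuple, idx), points):
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(ps, axis=0) for ps in buckets.values()])

cloud = np.array([[0.01, 0.01, 0.0],
                  [0.05, 0.03, 0.0],   # falls into the same voxel as the first point
                  [1.00, 1.00, 0.0]])  # a different voxel
small = voxel_downsample(cloud)
# → 2 points: one centroid per occupied voxel
```

Libraries such as PCL or Open3D provide equivalent filters; this version only shows the principle.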

3.2 Multi-source data fusion

Fusing multi-sensor data requires temporal and spatial synchronization of all sensor streams. Time synchronization uses a soft-synchronization method, matching information from different sensors by timestamp; interpolation aligns each sensor's data to the sensor with the longest scan period, and once data for the same instant is available it is fused with the LIO-SAM algorithm. Spatial synchronization unifies each sensor's information in the world coordinate system: the machine's position and attitude are first determined via GPS+IMU as a calibration reference, then the target data detected by the lidar is combined with the calibration data, coordinate-transformed, and uniformly mapped into the machine coordinate system, yielding coordinate values for the environment around the machine.
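The soft time synchronization described above reduces, in the simplest case, to linear interpolation of the faster stream onto the slower stream's timestamps. The sample rates and yaw values below are illustrative.

```python
import numpy as np

# High-rate IMU yaw readings (rad) with their timestamps (s)
imu_t   = np.array([0.00, 0.01, 0.02, 0.03, 0.04])
imu_yaw = np.array([0.00, 0.02, 0.04, 0.06, 0.08])

# The lidar has the longest scan period, so the faster stream is
# interpolated onto the lidar timestamps (soft synchronization)
lidar_t = np.array([0.005, 0.025])
yaw_at_scan = np.interp(lidar_t, imu_t, imu_yaw)
# → [0.01, 0.05]
```

Orientation represented as quaternions would use spherical interpolation (slerp) rather than `np.interp`, but the alignment-by-timestamp idea is the same.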

3.3 Local map update

By processing the inertial measurement unit and odometer data, the robot's pose in the world coordinate system is obtained. Combined with the range and angle of each laser scan point reported by the lidar, trigonometry yields the full set of laser scan points at the current moment, with each point's position recorded in the world coordinate system. The environment is divided into small cells, each representing a region of the map. Then, based on the lidar data, each cell is judged occupied or free. This can be expressed in the binary form of an occupancy grid map, where occupied cells are 1 and unoccupied cells are 0; the lidar data is thus converted into a grid map containing the coordinate points of every obstacle.
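The scan-to-grid step above can be sketched as follows: each (range, angle) return is projected into the world frame by trigonometry from the robot pose, and the cell containing it is set to 1. The grid size and resolution are illustrative, and the sketch marks only hit cells (a full implementation would also trace the free cells along each beam).

```python
import numpy as np

def scan_to_grid(ranges, angles, pose, size=20, res=0.5):
    """Project one lidar scan into a binary occupancy grid: occupied = 1, free = 0."""
    x, y, yaw = pose
    grid = np.zeros((size, size), dtype=np.uint8)
    px = x + ranges * np.cos(angles + yaw)   # world x of each scan point
    py = y + ranges * np.sin(angles + yaw)   # world y of each scan point
    for cx, cy in zip((px / res).astype(int), (py / res).astype(int)):
        if 0 <= cx < size and 0 <= cy < size:
            grid[cy, cx] = 1                 # mark the cell holding the hit
    return grid

# Robot at (5, 5) facing +x; one return 2 m ahead and one 1 m to the left
grid = scan_to_grid(np.array([2.0, 1.0]),
                    np.array([0.0, np.pi / 2]),
                    pose=(5.0, 5.0, 0.0))
# → exactly two occupied cells: (row 10, col 14) and (row 12, col 10)
```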

3.4 Global map update

Successive lidar scans are processed algorithmically: new lidar data is fused with the existing map, and the obstacle information in the map is updated.
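The patent does not name the fusion rule for the global map. One common approach, assumed here purely for illustration, is a log-odds occupancy update: each cell accumulates evidence per scan, so the global map converges as new scans agree or disagree with it. For brevity this sketch treats every non-hit cell as observed free; a real system applies the "miss" update only along the traced beams.

```python
import numpy as np

L_HIT, L_MISS = 0.9, -0.4     # log-odds increments (illustrative values)

def fuse_scan(log_odds, scan_hits):
    """scan_hits: binary grid from the latest scan (1 = obstacle observed).
    Cells seen occupied gain evidence; cells seen free lose it. Clipping
    keeps the map responsive to changes such as moved vehicles."""
    return np.clip(log_odds + np.where(scan_hits == 1, L_HIT, L_MISS), -4, 4)

def to_binary(log_odds):
    return (log_odds > 0).astype(np.uint8)   # occupancy probability > 0.5

m = np.zeros((3, 3))                         # unknown everywhere (p = 0.5)
scan = np.zeros((3, 3), dtype=np.uint8)
scan[1, 1] = 1                               # obstacle seen at the center cell
m = fuse_scan(m, scan)
m = fuse_scan(m, scan)                       # a second agreeing scan
# → to_binary(m) has a single occupied cell, at the center
```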

The above describes only preferred embodiments of the present invention and is not intended to limit it; for those skilled in the art, various modifications and variations of the present invention are possible.

Claims (3)

1. A lidar-based semantic SLAM mapping method for airport unmanned vehicles, characterized by comprising the following steps:
Step 1: build a three-dimensional map by scanning with two monocular cameras, match the annotated two-dimensional map against the three-dimensional map, and construct a three-dimensional semantic map;
Step 2: perform feature matching between the three-dimensional point cloud obtained by the lidar and the semantic point cloud output in Step 1; when a match reaches a preset threshold, judge it true, add the semantic information into the lidar's three-dimensional point cloud, and output a three-dimensional point cloud with semantic information;
Step 3: construct a grid map from the combined data of the lidar, the odometer, and GPS+IMU;
Step 4: fuse the grid map with the three-dimensional semantic map output in Step 2.
2. The lidar-based semantic SLAM mapping method for airport unmanned vehicles according to claim 1, characterized in that in Step 3:
the LIO-SAM algorithm is used for data fusion; the position and attitude of the machine are determined through GPS and IMU as a calibration reference, and the target data detected by the lidar is combined with the calibration data, coordinate-transformed, and uniformly mapped into the machine coordinate system, so that coordinate values of the machine's surrounding environment can be obtained;
the robot's pose in the world coordinate system is obtained by processing the inertial measurement unit and odometer data; combined with the range and angle of each laser scan point obtained by the lidar, the full set of laser scan points at the current moment is computed by trigonometry, with each point's position in the world coordinate system recorded in the set; the environment is divided into small cells, each representing a region of the map; each cell is then judged occupied or free from the lidar data and expressed in the binary form of a grid map, in which occupied cells are 1 and unoccupied cells are 0, so that the lidar data is converted into a grid map containing the coordinate points of each obstacle.
3. The lidar-based semantic SLAM mapping method for airport unmanned vehicles according to claim 2, characterized in that Step 1 proceeds as follows:
images are acquired by two monocular cameras, image segmentation is performed with the SA algorithm, multi-camera depth estimation is performed with the BM algorithm, the positions of objects in the images are obtained, and a semantic point cloud map is generated.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311372644.9A CN117664101A (en) 2023-10-20 2023-10-20 A lidar-based semantic slam mapping method for airport unmanned vehicles


Publications (1)

Publication Number Publication Date
CN117664101A true CN117664101A (en) 2024-03-08

Family

ID=90072248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311372644.9A Pending CN117664101A (en) 2023-10-20 2023-10-20 A lidar-based semantic slam mapping method for airport unmanned vehicles

Country Status (1)

Country Link
CN (1) CN117664101A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120014180A (en) * 2025-04-18 2025-05-16 宁波傲视智绘光电科技有限公司 Target three-dimensional reconstruction system and method for constructing three-dimensional contour of moving target

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190213790A1 (en) * 2018-01-11 2019-07-11 Mitsubishi Electric Research Laboratories, Inc. Method and System for Semantic Labeling of Point Clouds
CN110009718A (en) * 2019-03-07 2019-07-12 深兰科技(上海)有限公司 A kind of three-dimensional high-precision ground drawing generating method and device
CN113009501A (en) * 2021-02-25 2021-06-22 重庆交通大学 Image and laser data fused robot navigation three-dimensional semantic map generation method
CN114638909A (en) * 2022-03-24 2022-06-17 杭州电子科技大学 Substation semantic map construction method based on laser SLAM and visual fusion
CN116202509A (en) * 2023-03-17 2023-06-02 南京航空航天大学 Passable map generation method for indoor multi-layer building




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20240308