CN114964236A - Mapping and vehicle positioning system and method for underground parking lot environment - Google Patents
- Publication number
- CN114964236A (application CN202210580720.4A)
- Authority
- CN
- China
- Prior art keywords
- module
- vehicle
- global
- semantic
- map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
- G01C21/32—Structuring or formatting of map data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a mapping and vehicle positioning system and method for the underground parking lot environment. In the system, a front-view camera captures the original image of the area in front of the vehicle; an inverse perspective transformation module converts this image into a top view and feeds it to the semantic feature detection module; the semantic feature detection module produces a semantically segmented top view, extracts semantic features, and passes them to the mapping module and the positioning module; the odometry module supplies the vehicle pose; based on that pose, the mapping module projects the semantic features from the vehicle body coordinate system into the global coordinate system to build a global semantic map; the positioning module obtains the global coordinates of the semantic features currently observed by the vehicle and matches them against the global semantic map to produce the current semantic positioning result of the vehicle. The invention achieves long-term, stable vehicle positioning in the underground parking lot environment, copes robustly with environmental changes, and keeps cost low.
Description
Technical Field
The invention relates to the technical field of vehicle positioning, and in particular to a mapping and vehicle positioning system and method for the underground parking lot environment.
Background Art
Autonomous parking is a concrete application of autonomous driving. In this task the vehicle must often navigate autonomously through parking lots that are narrow, crowded, poorly lit, and without GPS signal, so accurate vehicle positioning is critical. A large number of positioning schemes have emerged over the past decade, including vision-based schemes, visual-inertial schemes, and LiDAR-based schemes. To keep cost down, much of this research has focused on visual positioning. Traditional visual positioning schemes mostly exploit geometric features of the environment such as sparse points, line segments, or planes; corner points in particular are widely used in visual odometry. The general pipeline of these schemes is to match feature points, estimate the positions of map points, build a map, and estimate the camera pose against that map.
In recent years, mapping and localization based on ORB features has attracted wide attention in both academia and industry. For example, the invention patent application with publication number CN113808203A discloses a navigation and positioning method based on the LK optical flow method and ORB-SLAM2. That method inserts a GPU-based LK optical flow algorithm in front of ORB-SLAM2 and uses the number of feature points tracked by optical flow to decide whether the current frame is a keyframe; non-keyframes are not passed into the three threads of ORB-SLAM2, which prevents feature extraction and subsequent computation on non-keyframes. This speeds up ORB-SLAM2's Tracking thread and improves the real-time performance of the algorithm without degrading its robustness, making it suitable for autonomous driving and for the positioning and navigation of AGV logistics vehicles. However, this kind of localization is easily disturbed by changes in illumination, viewpoint, and environmental appearance, and cannot localize reliably over long periods. The underground parking lot environment in particular poses a serious challenge to traditional visual positioning schemes such as ORB-SLAM. On the one hand, an underground parking lot consists mainly of walls, floor, and pillars; this weakly textured structure makes feature detection and matching unstable, so localization is prone to tracking loss. On the other hand, the appearance of the parking lot changes as different vehicles enter and leave over time, so long-term vehicle relocalization is almost impossible for traditional visual positioning schemes.
Summary of the Invention
In view of the above deficiencies of the prior art, the technical problem to be solved by the invention is how to provide a mapping and vehicle positioning system and method for the underground parking lot environment that achieves long-term, stable vehicle positioning, copes robustly with environmental changes, and keeps cost low.
To solve the above technical problem, the invention adopts the following technical solution:
A mapping and vehicle positioning system for the underground parking lot environment comprises a front-view camera, an inverse perspective transformation module, a semantic feature detection module, an odometry module, a mapping module, and a positioning module;

the front-view camera is used to capture the original image of the area in front of the vehicle and feed it to the inverse perspective transformation module;

the inverse perspective transformation module is used to apply an inverse perspective transformation to the original image from the front-view camera, producing a top view that is fed to the semantic feature detection module;

the semantic feature detection module is used to obtain a semantically segmented top view through a convolutional neural network and to extract semantic features that are fed to the mapping module and the positioning module;

the odometry module is used to obtain the vehicle pose and feed it to the mapping module and the positioning module;

based on the vehicle pose provided by the odometry module, the mapping module projects the semantic features from the vehicle body coordinate system into the global coordinate system to obtain a global semantic map;

the positioning module is used to obtain the global coordinates of the semantic features currently observed by the vehicle and to match them against the global semantic map to obtain the current semantic positioning result of the vehicle.
Preferably, the odometry module comprises one inertial measurement unit and two wheel speed sensors;

the mapping module comprises a local map module and a global map module;

based on the vehicle pose provided by the odometry module, the local map module projects the semantic features from the vehicle body coordinate system into the global coordinate system to obtain a local map that is fed to the global map module;

the global map module is used to apply loop closure detection and global optimization to the local maps and generate the global semantic map.
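The patent does not spell out how the inertial measurement unit and the two wheel speed sensors are fused into a vehicle pose. A common low-cost scheme for exactly this sensor set is planar dead reckoning from the IMU yaw rate and the mean wheel speed; the sketch below illustrates that assumption only, and all function and variable names are illustrative rather than taken from the patent.

```python
import numpy as np

def dead_reckon(x, y, yaw, v_left, v_right, yaw_rate, dt):
    """One planar dead-reckoning step (an assumed fusion scheme, not the
    claimed method).

    v_left, v_right : speeds from the two wheel speed sensors [m/s]
    yaw_rate        : yaw angular velocity from the IMU [rad/s]
    """
    v = 0.5 * (v_left + v_right)          # forward speed of the body frame
    yaw_new = yaw + yaw_rate * dt         # heading integrated from the IMU
    x_new = x + v * np.cos(yaw_new) * dt  # advance along the new heading
    y_new = y + v * np.sin(yaw_new) * dt
    return x_new, y_new, yaw_new
```

Integrating such increments yields the relative pose that the odometry module supplies to the mapping and positioning modules; like any dead reckoning, it drifts, which is why the global map module applies loop closure detection and global optimization.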
A mapping and vehicle positioning method for the underground parking lot environment uses the above mapping and vehicle positioning system; the mapping and vehicle positioning method comprises a mapping method and a vehicle positioning method.

The mapping method comprises the following steps:

Step A1) the front-view camera captures the original image of the area in front of the vehicle and feeds it to the inverse perspective transformation module;

Step A2) the inverse perspective transformation module applies an inverse perspective transformation to the original image from the front-view camera, producing a top view that is fed to the semantic feature detection module;

Step A3) the semantic feature detection module obtains the semantically segmented top view through a convolutional neural network and extracts semantic features that are fed to the mapping module;

Step A4) the odometry module obtains the vehicle pose and feeds it to the mapping module;

Step A5) based on the vehicle pose provided by the odometry module, the mapping module projects the semantic features from the vehicle body coordinate system into the global coordinate system to obtain the global semantic map;

Step A6) mapping is complete.
The vehicle positioning method comprises the following steps:

Step S1) the front-view camera captures the original image of the area in front of the vehicle's current position and feeds it to the inverse perspective transformation module;

Step S2) the inverse perspective transformation module applies an inverse perspective transformation to the original image from the front-view camera, producing a top view that is fed to the semantic feature detection module;

Step S3) the semantic feature detection module obtains the semantically segmented top view through a convolutional neural network and extracts semantic features that are fed to the positioning module;

Step S4) the odometry module obtains the current vehicle pose and feeds it to the positioning module;

Step S5) based on the current vehicle pose provided by the odometry module, the positioning module projects the semantic features from the vehicle body coordinate system into the global coordinate system to obtain the global coordinates of the semantic features currently observed by the vehicle;

Step S6) the global coordinate points obtained in step S5) are matched against the global semantic map obtained by the mapping method to obtain the current semantic positioning result of the vehicle, completing vehicle positioning; a matching sketch follows this step list.
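Step S6 relies on registering the currently observed semantic points against the global semantic map; the embodiments name the ICP algorithm (a data registration method) for this. Below is a minimal point-to-point 2D ICP sketch, assuming both the map and the current detections are 2D point sets on the ground plane; SciPy's cKDTree is used for nearest-neighbour search, and all names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_match(src, map_pts, iters=20):
    """Point-to-point 2D ICP: align current semantic points `src` (N, 2)
    to the global semantic map `map_pts` (M, 2).
    Returns the correcting rotation R (2, 2) and translation t (2,)."""
    tree = cKDTree(map_pts)
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        cur = src @ R.T + t
        _, idx = tree.query(cur)          # nearest map point per detection
        tgt = map_pts[idx]
        # Closed-form rigid alignment (SVD) between the matched point sets
        mu_s, mu_t = cur.mean(0), tgt.mean(0)
        H = (cur - mu_s).T @ (tgt - mu_t)
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:         # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = mu_t - dR @ mu_s
        R, t = dR @ R, dR @ t + dt        # compose the incremental update
    return R, t
```

The resulting (R, t) corrects the drifting odometry pose, yielding the semantic positioning result of step S6.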
Preferably, the inverse perspective transformation is given by:

$$\begin{bmatrix} x_v \\ y_v \\ 0 \end{bmatrix} = R_c\,\lambda\,\pi_c^{-1}\!\left(\begin{bmatrix} u \\ v \end{bmatrix}\right) + t_c$$

where π_c(·) is the projection model of the front-view camera, [R_c t_c] is the extrinsic matrix transforming from the front-view camera coordinate system to the vehicle body coordinate system, [u v] are the pixel coordinates of a semantic feature, [x_v y_v] are the coordinates of the semantic feature in the vehicle body coordinate system, and λ is the scale factor, fixed by the ground-plane constraint z_v = 0.
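A minimal sketch of how this inverse perspective mapping could be implemented for a pinhole camera model, assuming the intrinsic matrix K (standing in for π_c) and the extrinsics (R_c, t_c) have been calibrated offline as the embodiments describe; the function name and the pinhole assumption are illustrative.

```python
import numpy as np

def ipm_pixel_to_vehicle(uv, K, R_c, t_c):
    """Project a pixel onto the ground plane (z_v = 0) of the vehicle frame.

    uv       : (2,) pixel coordinates [u, v]
    K        : (3, 3) camera intrinsic matrix (pinhole stand-in for pi_c)
    R_c, t_c : extrinsics mapping camera-frame points into the vehicle frame
    """
    # Back-project the pixel to a viewing ray in the camera frame
    ray_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    # Rotate the ray into the vehicle body frame
    ray_veh = R_c @ ray_cam
    # Ground-plane constraint fixes the scale factor lambda:
    # (lambda * R_c @ ray_cam + t_c)[2] = 0  =>  lambda = -t_c[2] / ray_veh[2]
    lam = -t_c[2] / ray_veh[2]
    p_veh = lam * ray_veh + t_c
    return p_veh[:2]  # [x_v, y_v]
```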
Preferably, the convolutional neural network of the semantic feature detection module is trained for classification on a set of parking lot images collected by the front-view camera; its classification categories include lane lines, stop lines, guide lines, speed bumps, passable areas, obstacles, and walls; and the semantic features that the semantic feature detection module extracts through the convolutional neural network include lane lines, stop lines, guide lines, and speed bumps.
Preferably, the formula projecting a semantic feature from the vehicle body coordinate system into the global coordinate system is:

$$\begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} = R_o \begin{bmatrix} x_v \\ y_v \\ 0 \end{bmatrix} + t_o$$

where [x_w y_w z_w] are the coordinates of the semantic feature in the global coordinate system, R_o is the rotation matrix from the vehicle body coordinate system to the global coordinate system, and t_o is the translation vector from the vehicle body coordinate system to the global coordinate system.
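This is a plain rigid-body transform; a short sketch continuing the previous example, assuming the odometry module supplies R_o and t_o for the current frame:

```python
import numpy as np

def vehicle_to_global(p_v, R_o, t_o):
    """Map a semantic feature point from the vehicle frame to the global frame.

    p_v      : [x_v, y_v] ground-plane point from the IPM step (z_v = 0)
    R_o, t_o : vehicle pose from the odometry module (body -> global)
    """
    p_v3 = np.array([p_v[0], p_v[1], 0.0])
    return R_o @ p_v3 + t_o  # [x_w, y_w, z_w]
```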
Preferably, the odometry module comprises one inertial measurement unit and two wheel speed sensors;

the mapping module comprises a local map module and a global map module;

based on the vehicle pose provided by the odometry module, the local map module projects the semantic features from the vehicle body coordinate system into the global coordinate system to obtain a local map that is fed to the global map module;

the global map module is used to apply loop closure detection and global optimization to the local maps and generate the global semantic map;

in step A5), the local map module projects the semantic features from the vehicle body coordinate system into the global coordinate system based on the vehicle pose provided by the odometry module, obtaining a local map that is fed to the global map module; the global map module then applies loop closure detection and global optimization to the local maps to generate the global semantic map.
Preferably, in step A5), the global map module performs loop closure detection on the local maps as follows: the current local map is matched against previously generated local maps using a data registration method; if the matching result satisfies a set threshold, a loop closure is declared, and the computed relative pose is used for pose graph optimization to eliminate the accumulated error.
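A sketch of this loop-closure test, reusing the illustrative icp_match function from the localization sketch above; the inlier-ratio and distance thresholds are assumed tuning parameters, not values given in the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def detect_loop(local_map, prev_maps, inlier_ratio_thresh=0.6, dist_thresh=0.2):
    """Match the current local map (N, 2) against earlier local maps; declare
    a loop closure when the post-ICP inlier ratio exceeds the set threshold."""
    for j, prev in enumerate(prev_maps):
        R, t = icp_match(local_map, prev)      # illustrative ICP from above
        aligned = local_map @ R.T + t
        dists, _ = cKDTree(prev).query(aligned)
        if np.mean(dists < dist_thresh) > inlier_ratio_thresh:
            # the relative pose (R, t) feeds the pose graph as a loop constraint
            return j, (R, t)
    return None
```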
Preferably, in step A5), the global optimization that the global map module performs on the local maps uses the odometry module's measurements as constraints between two consecutive local map frames, and the relative poses obtained by the data registration method during loop closure detection as constraints between loop frames.
Preferably, in step A5), the global map module globally optimizes the local maps with the Gauss-Newton method, with the objective function:

$$\chi^{*} = \arg\min_{\chi} \sum_{t} \left\lVert \log\!\left(\hat{T}_{t,t+1}^{-1}\, T_{t,t+1}\right) \right\rVert^{2} + \sum_{(i,j)\in L} \left\lVert \log\!\left(\hat{T}_{i,j}^{-1}\, T_{i,j}\right) \right\rVert^{2}$$

where χ is the set of poses, T_{t,t+1} is the estimated relative pose of the front-view camera between frame t and frame t+1, T̂_{t,t+1} is the relative pose measurement obtained from the odometry module, L is the set of loop-closure frame pairs, T̂_{i,j} is the relative pose between frame i and frame j obtained by the data registration method and used as the measurement, and T_{i,j} is the estimated relative pose between frame i and frame j, which carries the accumulated error.
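A sketch of a pose-graph back-end matching this objective over SE(2) poses, using SciPy's Levenberg-Marquardt solver (a damped variant of the Gauss-Newton iteration named above) instead of a hand-rolled loop; the [x, y, yaw] parameterization, the gauge fixing, and all names are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return np.arctan2(np.sin(a), np.cos(a))

def rel_pose(p_a, p_b):
    """Relative SE(2) pose of frame b expressed in frame a; p = [x, y, yaw]."""
    c, s = np.cos(p_a[2]), np.sin(p_a[2])
    dx, dy = p_b[0] - p_a[0], p_b[1] - p_a[1]
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(p_b[2] - p_a[2])])

def residuals(flat, pose0, odo_meas, loops):
    """Stack both residual groups of the objective: consecutive-frame odometry
    constraints and loop-closure constraints from the data registration."""
    poses = np.vstack([pose0, flat.reshape(-1, 3)])  # first pose fixes the gauge
    res = []
    for t, z in enumerate(odo_meas):       # z ~ measured T^_{t,t+1} from odometry
        d = rel_pose(poses[t], poses[t + 1]) - z
        d[2] = wrap(d[2])
        res.append(d)
    for i, j, z in loops:                  # z ~ T^_{i,j} from the ICP registration
        d = rel_pose(poses[i], poses[j]) - z
        d[2] = wrap(d[2])
        res.append(d)
    return np.concatenate(res)

# poses0 is the drifting odometry trajectory of shape (T, 3);
# method='lm' runs Levenberg-Marquardt, a damped Gauss-Newton variant.
# sol = least_squares(residuals, poses0[1:].ravel(),
#                     args=(poses0[0], odo_meas, loops), method='lm')
```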
Compared with the prior art, the invention has the following advantages:

1. Unlike traditional visual positioning schemes, which rely on geometric features of the environment, the invention uses semantic features of the environment; for the parking lot environment these mainly include lane lines, guide lines, stop lines, and speed bumps. Compared with geometric features, semantic features persist stably over long periods and remain robust under changes in illumination, viewpoint, and environment. These semantic features are detected by a convolutional neural network and used to build a global semantic map, which is then used to position the vehicle. The scheme therefore copes with environmental changes more robustly than traditional positioning schemes and remains stable and accurate in long-term use.

2. The only sensors used by the invention are one front-view camera, one IMU (inertial measurement unit), and two wheel speed sensors; the IMU and the wheel speed sensors form the odometry module, which provides the relative pose of the vehicle during mapping and positioning. The invention is therefore very cheap to deploy, and mass-produced vehicles can easily be equipped with it.

3. The framework of the invention comprises two main parts, mapping and positioning. Mapping builds the global semantic map of the parking lot environment: the front-view camera captures the original image of the area in front of the vehicle; an inverse perspective transformation yields the top view; the top view is fed into the convolutional neural network to obtain the semantically segmented image and semantic features such as lane lines, stop lines, guide lines, and speed bumps; based on the pose provided by the odometry module, the semantic features are projected into the global coordinate system. Because the odometry drifts, the invention also uses loop closure detection and global optimization to eliminate the accumulated error; finally, the feature points are saved to build the global semantic map of the parking lot. Once the global semantic map has been generated, a vehicle entering the parking lot obtains the global coordinates of its semantic features through image capture by the front-view camera, inverse perspective transformation, semantic feature detection, and the pose provided by the odometry module; it then matches them against the built global semantic map with the ICP algorithm (a data registration method) to correct the vehicle pose, finally obtaining an accurate vehicle positioning result.
Brief Description of the Drawings
Figure 1 is a system block diagram of the mapping and vehicle positioning system for the underground parking lot environment according to the invention.
Detailed Description of the Embodiments
The invention is further described below with reference to the accompanying drawing and embodiments.
As shown in Figure 1, a mapping and vehicle positioning system for the underground parking lot environment comprises a front-view camera, an inverse perspective transformation module, a semantic feature detection module, an odometry module, a mapping module, and a positioning module;

the front-view camera captures the original image of the area in front of the vehicle and feeds it to the inverse perspective transformation module;

the inverse perspective transformation module applies an inverse perspective transformation to the original image from the front-view camera, producing a top view that is fed to the semantic feature detection module;

the semantic feature detection module obtains a semantically segmented top view through a convolutional neural network and extracts semantic features that are fed to the mapping module and the positioning module;

the odometry module obtains the vehicle pose and feeds it to the mapping module and the positioning module;

based on the vehicle pose provided by the odometry module, the mapping module projects the semantic features from the vehicle body coordinate system into the global coordinate system to obtain a global semantic map;

the positioning module obtains the global coordinates of the semantic features currently observed by the vehicle and matches them against the global semantic map to obtain the current semantic positioning result of the vehicle.
In this embodiment, the odometry module comprises one inertial measurement unit and two wheel speed sensors;

the mapping module comprises a local map module and a global map module;

based on the vehicle pose provided by the odometry module, the local map module projects the semantic features from the vehicle body coordinate system into the global coordinate system to obtain a local map that is fed to the global map module;

the global map module applies loop closure detection and global optimization to the local maps and generates the global semantic map.
A mapping and vehicle positioning method for the underground parking lot environment uses the above mapping and vehicle positioning system; the mapping and vehicle positioning method comprises a mapping method and a vehicle positioning method.

The mapping method comprises the following steps:

Step A1) the front-view camera captures the original image of the area in front of the vehicle and feeds it to the inverse perspective transformation module;

Step A2) the inverse perspective transformation module applies an inverse perspective transformation to the original image from the front-view camera, producing a top view that is fed to the semantic feature detection module;

Step A3) the semantic feature detection module obtains the semantically segmented top view through a convolutional neural network and extracts semantic features that are fed to the mapping module;

Step A4) the odometry module obtains the vehicle pose and feeds it to the mapping module;

Step A5) based on the vehicle pose provided by the odometry module, the mapping module projects the semantic features from the vehicle body coordinate system into the global coordinate system to obtain the global semantic map;

Step A6) mapping is complete.

The vehicle positioning method comprises the following steps:

Step S1) the front-view camera captures the original image of the area in front of the vehicle's current position and feeds it to the inverse perspective transformation module;

Step S2) the inverse perspective transformation module applies an inverse perspective transformation to the original image from the front-view camera, producing a top view that is fed to the semantic feature detection module;

Step S3) the semantic feature detection module obtains the semantically segmented top view through a convolutional neural network and extracts semantic features that are fed to the positioning module;

Step S4) the odometry module obtains the current vehicle pose and feeds it to the positioning module;

Step S5) based on the current vehicle pose provided by the odometry module, the positioning module projects the semantic features from the vehicle body coordinate system into the global coordinate system to obtain the global coordinates of the semantic features currently observed by the vehicle;

Step S6) the global coordinate points obtained in step S5) are matched against the global semantic map obtained by the mapping method to obtain the current semantic positioning result of the vehicle, completing vehicle positioning.
In this embodiment, the intrinsic and extrinsic parameters of the front-view camera have been calibrated offline, and the original image captured by the front-view camera is projected onto the ground through the inverse perspective transformation:

$$\begin{bmatrix} x_v \\ y_v \\ 0 \end{bmatrix} = R_c\,\lambda\,\pi_c^{-1}\!\left(\begin{bmatrix} u \\ v \end{bmatrix}\right) + t_c$$

where π_c(·) is the projection model of the front-view camera, [R_c t_c] is the extrinsic matrix transforming from the front-view camera coordinate system to the vehicle body coordinate system, [u v] are the pixel coordinates of a semantic feature, [x_v y_v] are the coordinates of the semantic feature in the vehicle body coordinate system, and λ is the scale factor, fixed by the ground-plane constraint z_v = 0.
In this embodiment, the semantic feature detection module uses the convolutional neural network U-Net for semantic segmentation. The network is trained for classification on a set of parking lot images collected by the front-view camera, with classification categories comprising lane lines, stop lines, guide lines, speed bumps, passable areas, obstacles, and walls; the semantic features extracted through the network comprise lane lines, stop lines, guide lines, and speed bumps, because these semantic features are highly distinguishable and stable and thus well suited to the mapping and positioning of the invention.
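The patent names U-Net as the segmentation network but gives no implementation details. Below is a minimal PyTorch-style inference sketch; the class ordering, the trained-model handle, and the choice of which class ids feed the map are all assumptions introduced for illustration.

```python
import torch

CLASSES = ["lane_line", "stop_line", "guide_line", "speed_bump",
           "free_space", "obstacle", "wall"]   # assumed ordering of the 7 classes
MAP_CLASSES = {0, 1, 2, 3}  # only the stable markings are kept as features

def segment_topview(model, bev_image):
    """Run a trained U-Net on an IPM top view and return per-pixel labels.

    bev_image : float tensor of shape (3, H, W), already normalized
    """
    model.eval()
    with torch.no_grad():
        logits = model(bev_image.unsqueeze(0))    # (1, 7, H, W)
        labels = logits.argmax(dim=1).squeeze(0)  # (H, W) class ids
    return labels

def extract_semantic_pixels(labels):
    """Pixel coordinates of the classes used as semantic features."""
    mask = torch.zeros_like(labels, dtype=torch.bool)
    for c in MAP_CLASSES:
        mask |= labels == c
    vs, us = torch.nonzero(mask, as_tuple=True)   # row (v), column (u)
    return torch.stack([us, vs], dim=1)           # (N, 2) [u, v] pixels
```

Each extracted [u, v] pixel is then lifted to the vehicle frame by the inverse perspective mapping above and projected into the global frame for mapping or matching.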
In this embodiment, using the pose provided by the odometry module, the semantic features are projected from the vehicle body coordinate system into the global coordinate system by the following formula, and the global coordinate points of the semantic features are saved to generate a local map whose range is 30 m:

$$\begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} = R_o \begin{bmatrix} x_v \\ y_v \\ 0 \end{bmatrix} + t_o$$

where [x_w y_w z_w] are the coordinates of the semantic feature in the global coordinate system, R_o is the rotation matrix from the vehicle body coordinate system to the global coordinate system, and t_o is the translation vector from the vehicle body coordinate system to the global coordinate system.
In this embodiment, in step A5), the local map module projects the semantic features from the vehicle body coordinate system into the global coordinate system based on the vehicle pose provided by the odometry module, obtaining a local map that is fed to the global map module; the global map module then applies loop closure detection and global optimization to the local maps to generate the global semantic map.

In this embodiment, in step A5), the global map module performs loop closure detection on the local maps as follows: the current local map is matched against previously generated local maps using a data registration method; if the matching result satisfies a set threshold, a loop closure is declared, and the computed relative pose is used for pose graph optimization to eliminate the accumulated error.

In this embodiment, in step A5), the global optimization that the global map module performs on the local maps uses the odometry module's measurements as constraints between two consecutive local map frames, and the relative poses obtained by the data registration method during loop closure detection as constraints between loop frames.
In this embodiment, in step A5), the global map module globally optimizes the local maps with the Gauss-Newton method, with the objective function:

$$\chi^{*} = \arg\min_{\chi} \sum_{t} \left\lVert \log\!\left(\hat{T}_{t,t+1}^{-1}\, T_{t,t+1}\right) \right\rVert^{2} + \sum_{(i,j)\in L} \left\lVert \log\!\left(\hat{T}_{i,j}^{-1}\, T_{i,j}\right) \right\rVert^{2}$$

where χ is the set of poses, T_{t,t+1} is the estimated relative pose of the front-view camera between frame t and frame t+1, T̂_{t,t+1} is the relative pose measurement obtained from the odometry module, L is the set of loop-closure frame pairs, T̂_{i,j} is the relative pose between frame i and frame j obtained by the data registration method and used as the measurement, and T_{i,j} is the estimated relative pose between frame i and frame j, which carries the accumulated error.
Compared with the prior art, the invention thus replaces the geometric features used by traditional visual positioning with semantic features of the environment (lane lines, guide lines, stop lines, speed bumps, and the like) that persist stably over long periods and remain robust under changes in illumination, viewpoint, and environment; it requires only one front-view camera, one IMU, and two wheel speed sensors, keeping deployment cost low enough for mass-produced vehicles; and its two-part framework of mapping and positioning, with loop closure detection and global optimization to remove odometry drift and ICP-based matching against the global semantic map, yields accurate, long-term stable vehicle positioning, as set out in the advantages above.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the invention, not to limit it. Those of ordinary skill in the art should understand that modifications or equivalent substitutions of the technical solution of the invention that do not depart from its spirit and scope shall all be covered by the scope of the claims of the invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210580720.4A CN114964236B (en) | 2022-05-25 | 2022-05-25 | Map building and vehicle positioning system and method for underground parking garage environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114964236A true CN114964236A (en) | 2022-08-30 |
CN114964236B CN114964236B (en) | 2024-10-29 |
Family
ID=82956434
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210580720.4A Active CN114964236B (en) | 2022-05-25 | 2022-05-25 | Map building and vehicle positioning system and method for underground parking garage environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114964236B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107610175A (en) * | 2017-08-04 | 2018-01-19 | 华南理工大学 | The monocular vision SLAM algorithms optimized based on semi-direct method and sliding window |
CN107869989A (en) * | 2017-11-06 | 2018-04-03 | 东北大学 | A positioning method and system based on visual inertial navigation information fusion |
CN110375738A (en) * | 2019-06-21 | 2019-10-25 | 西安电子科技大学 | A kind of monocular merging Inertial Measurement Unit is synchronous to be positioned and builds figure pose calculation method |
CN111780754A (en) * | 2020-06-23 | 2020-10-16 | 南京航空航天大学 | Visual-inertial odometry pose estimation method based on sparse direct method |
CN112304307A (en) * | 2020-09-15 | 2021-02-02 | 浙江大华技术股份有限公司 | Positioning method and device based on multi-sensor fusion and storage medium |
CN113763466A (en) * | 2020-10-10 | 2021-12-07 | 北京京东乾石科技有限公司 | Loop detection method and device, electronic equipment and storage medium |
CN113624223A (en) * | 2021-07-30 | 2021-11-09 | 中汽创智科技有限公司 | Indoor parking lot map construction method and device |
CN113903011A (en) * | 2021-10-26 | 2022-01-07 | 江苏大学 | Semantic map construction and positioning method suitable for indoor parking lot |
Non-Patent Citations (3)
Title |
---|
KASRA KHOSOUSSI et al.: "A Sparse Separable SLAM Back-End", IEEE Transactions on Robotics, 19 October 2016, pages 1536-1549 *
LIN Chenhao et al.: "A vehicle-mounted LiDAR mapping and localization method fusing vision and IMU", Journal of Fuzhou University (Natural Science Edition), no. 1, 31 January 2022, pages 82-88 *
CHEN Yan: "Research on a semi-direct stereo visual SLAM method based on feature point optimization", China Master's Theses Full-text Database, Information Science & Technology, no. 3, 15 March 2021, pages 140-264 *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115752476A (en) * | 2022-11-29 | 2023-03-07 | 重庆长安汽车股份有限公司 | Vehicle ground library repositioning method, device, equipment and medium based on semantic information |
CN116295457A (en) * | 2022-12-21 | 2023-06-23 | 辉羲智能科技(上海)有限公司 | Vehicle vision positioning method and system based on two-dimensional semantic map |
CN116295457B (en) * | 2022-12-21 | 2024-05-24 | 辉羲智能科技(上海)有限公司 | Vehicle vision positioning method and system based on two-dimensional semantic map |
CN116817887A (en) * | 2023-06-28 | 2023-09-29 | 哈尔滨师范大学 | Semantic visual SLAM map construction method, electronic equipment and storage medium |
CN116817887B (en) * | 2023-06-28 | 2024-03-08 | 哈尔滨师范大学 | Semantic visual SLAM map construction method, electronic equipment and storage medium |
CN116817892A (en) * | 2023-08-28 | 2023-09-29 | 之江实验室 | Cloud-integrated UAV route positioning method and system based on visual semantic map |
CN116817892B (en) * | 2023-08-28 | 2023-12-19 | 之江实验室 | Cloud integrated unmanned aerial vehicle route positioning method and system based on visual semantic map |
Also Published As
Publication number | Publication date |
---|---|
CN114964236B (en) | 2024-10-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111462200B (en) | A cross-video pedestrian positioning and tracking method, system and device | |
CN108802785B (en) | Vehicle self-positioning method based on high-precision vector map and monocular vision sensor | |
CN114964236B (en) | Map building and vehicle positioning system and method for underground parking garage environment | |
CN103940434B (en) | Real-time lane detection system based on monocular vision and inertial navigation unit | |
CN112734841B (en) | Method for realizing positioning by using wheel type odometer-IMU and monocular camera | |
CN101576384B (en) | Indoor movable robot real-time navigation method based on visual information correction | |
CN113903011B (en) | Semantic map construction and positioning method suitable for indoor parking lot | |
CN114323033B (en) | Positioning method and equipment based on lane lines and feature points and automatic driving vehicle | |
CN105976402A (en) | Real scale obtaining method of monocular vision odometer | |
Zhang et al. | An efficient LiDAR-based localization method for self-driving cars in dynamic environments | |
CN114018248B (en) | An Odometer Method and Mapping Method Fused with Code Disc and LiDAR | |
CN102519481A (en) | Implementation method of binocular vision speedometer | |
CN113920198B (en) | Coarse-to-fine multi-sensor fusion positioning method based on semantic edge alignment | |
CN108151713A (en) | A kind of quick position and orientation estimation methods of monocular VO | |
CN111986261A (en) | A vehicle positioning method, device, electronic device and storage medium | |
CN114120075A (en) | A 3D Object Detection Method Fusion Monocular Camera and LiDAR | |
CN115564865A (en) | Construction method and system of crowdsourcing high-precision map, electronic equipment and vehicle | |
Zhou et al. | Visual localization and mapping leveraging the constraints of local ground manifolds | |
Ma et al. | Semantic geometric fusion multi-object tracking and lidar odometry in dynamic environment | |
Nie et al. | A survey of extrinsic parameters calibration techniques for autonomous devices | |
Li et al. | An Efficient LiDAR SLAM with Angle-Based Feature Extraction and Voxel-based Fixed-Lag Smoothing | |
CN118379349A (en) | A visual SLAM method based on 3D semantics in underground parking scenarios | |
Ye et al. | Robust and efficient vehicles motion estimation with low-cost multi-camera and odometer-gyroscope | |
CN107248171A (en) | A kind of monocular vision odometer yardstick restoration methods based on triangulation | |
CN114638902A (en) | An online estimation method of extrinsic parameters for in-vehicle cameras |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |