
CN114862964A - Sensor automatic calibration method, electronic device and storage medium - Google Patents


Info

Publication number: CN114862964A
Authority: CN (China)
Prior art keywords: coordinate, point cloud, image, coordinate system, laser
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202210455494.7A, filed by Wuhan Zhixing Technology Co ltd
Other languages: Chinese (zh)
Inventors: 李旭兴, 张蓉
Current assignee: Wuhan Zhixing Technology Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Wuhan Zhixing Technology Co ltd

Classifications

    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06N 3/045 Combinations of networks (neural network architectures)
    • G06N 3/08 Learning methods (neural networks)
    • G06T 3/08 Projecting images onto non-planar surfaces, e.g. geodetic screens
    • G06T 2207/10028 Range image; depth image; 3D point clouds
    • G06T 2207/10032 Satellite or aerial image; remote sensing
    • G06T 2207/10044 Radar image
    • G06T 2207/30252 Vehicle exterior; vicinity of vehicle


Abstract

An embodiment of the invention provides an automatic sensor calibration method, an electronic device, and a storage medium. The method comprises the following steps: acquiring laser point cloud data collected by a laser device and image data collected by a camera; according to the driving state of the mobile apparatus, identifying a scene target related to the driving state in the laser point cloud data to obtain a first coordinate of the scene target in the laser point cloud coordinate system; according to the driving state, identifying the scene target related to the driving state in the image data to obtain a second coordinate of the scene target in the image coordinate system; reprojecting the first coordinate into a third coordinate in the image coordinate system; matching the second coordinate with the third coordinate in the image coordinate system to obtain a matching result; and calibrating the extrinsic parameters between the laser device and the camera based on the matching result. The embodiment does not depend on a traditional calibration scene and can achieve more accurate automatic calibration of the extrinsic parameters between the laser device and the camera.

Description

Sensor automatic calibration method, electronic device and storage medium

Technical Field

The invention relates to the field of autonomous driving, and in particular to a sensor automatic calibration method, an electronic device, and a storage medium.

Background

An autonomous driving system typically consists of a perception system, a decision-making system, an execution system, and a communication system: the vehicle collects data about itself and its surroundings, processes and outputs the data, and ultimately makes control decisions. Because the safety requirements of autonomous driving are high, the sensors must capture the surrounding environment as fully as possible so that reliable inferences can be made.

However, no single sensor can perceive features that comprehensively cover both distance and object information. To obtain more complete features, a reliable perception setup therefore integrates a lidar and a camera. Lidar perceives the positions of surrounding objects with high accuracy, but its data are relatively sparse, so it cannot assign clear type characteristics to some obstacles encountered in autonomous driving. It may also fail in rain, snow, or light dust; although multi-echo schemes can mitigate such conditions, the back end still needs to filter abnormal targets. A camera is good at recognizing semantic information in the scene and identifying traffic obstacles, but segmentation can fail in some scenes, for example on objects with unclear boundaries, which may make the overall detection abnormal.

Therefore, fusing the complementary features of the lidar and the camera better supports autonomous driving perception. Before fusion can be used, the parameters must be calibrated so that there is no obvious deviation between corresponding target positions in the laser data and in the camera data, allowing targets to be perceived jointly.

In the process of realizing the present invention, the inventors found at least the following problems in the related art:

Although the camera and the lidar are rigidly integrated, bumps, sudden braking, and similar events during driving inevitably risk changing their relative position. The calibration parameters determined at integration time then no longer apply, and the extrinsic parameters between the camera and the lidar must be re-calibrated. However, for sensors that are already integrated and fixed, it is difficult to find a calibration environment in a suitable scene, and conventional calibration requires special target objects with specific scene parameters, which makes re-calibration rather difficult. How to automatically calibrate the extrinsic parameters of the camera and the lidar has therefore become an urgent technical problem.

Summary of the Invention

The technical solution of the present invention addresses the problem in the prior art that the extrinsic parameters of the camera and the lidar cannot be automatically calibrated when their relative position changes while an autonomous vehicle is driving. In a first aspect, an embodiment of the present invention provides an automatic sensor calibration method, applied to a mobile apparatus, comprising:

acquiring laser point cloud data collected by a laser device;

acquiring image data collected by a camera;

according to the driving state of the mobile apparatus, identifying a scene target related to the driving state in the laser point cloud data to obtain a first coordinate of the scene target in the laser point cloud coordinate system;

according to the driving state of the mobile apparatus, identifying the scene target related to the driving state in the image data to obtain a second coordinate of the scene target in the image coordinate system;

reprojecting the first coordinate into a third coordinate in the image coordinate system;

matching the second coordinate with the third coordinate in the image coordinate system to obtain a matching result;

calibrating the extrinsic parameters between the laser device and the camera based on the matching result.

In a second aspect, an embodiment of the present invention provides an automatic sensor calibration apparatus, comprising:

a point cloud acquisition module for acquiring laser point cloud data collected by a laser device;

an image acquisition module for acquiring image data collected by a camera;

a first coordinate determination module for identifying, according to the driving state of the mobile apparatus, a scene target related to the driving state in the laser point cloud data to obtain a first coordinate of the scene target in the laser point cloud coordinate system;

a second coordinate determination module for identifying, according to the driving state of the mobile apparatus, the scene target related to the driving state in the image data to obtain a second coordinate of the scene target in the image coordinate system;

a third coordinate determination module for reprojecting the first coordinate into a third coordinate in the image coordinate system;

a matching module for matching the second coordinate with the third coordinate in the image coordinate system to obtain a matching result;

a calibration module for calibrating the extrinsic parameters between the laser device and the camera based on the matching result.

In a third aspect, an electronic device is provided, comprising: at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the steps of the automatic sensor calibration method of any embodiment of the present invention.

In a fourth aspect, an embodiment of the present invention provides a mobile apparatus comprising a body and the electronic device of any embodiment of the present invention mounted on the body.

In a fifth aspect, an embodiment of the present invention provides a storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the automatic sensor calibration method of any embodiment of the present invention are implemented.

In a sixth aspect, an embodiment of the present invention further provides a computer program product which, when run on a computer, causes the computer to execute the automatic sensor calibration method of any of the embodiments of the present invention.

The beneficial effects of the embodiments of the present invention are as follows: the technical solution selects a scene target according to the driving state of the mobile apparatus, matches the laser point cloud coordinates of the scene target against its image coordinates, and automatically calibrates the extrinsic parameters between the camera and the lidar from the matching result. The solution does not depend on a traditional calibration scene and achieves more accurate automatic calibration between the laser device and the camera.

Description of Drawings

To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a flowchart of an automatic sensor calibration method provided by an embodiment of the present invention;

Fig. 2 is a schematic diagram of signboard point cloud features filtered by height and reflectivity in the point cloud data;

Fig. 3 is a schematic diagram of the road sign detection result in the image data;

Fig. 4 is a schematic diagram of the fused display of the traffic sign point cloud and the image;

Fig. 5 is a schematic diagram of pedestrian and vehicle projection matching between the image and the point cloud;

Fig. 6 is a flowchart of laser point cloud and image fusion calibration;

Fig. 7 is a schematic structural diagram of an automatic sensor calibration apparatus provided by an embodiment of the present invention;

Fig. 8 is a schematic structural diagram of an embodiment of an electronic device for automatic sensor calibration provided by an embodiment of the present invention.

Detailed Description

To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

Those skilled in the art will understand that the embodiments of the present application can be implemented as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied entirely in hardware, entirely in software (including firmware, resident software, microcode, etc.), or in a combination of hardware and software.

For ease of understanding, the technical terms involved in this application are explained below:

The "mobile apparatus" referred to in this application can be any device with mobility, including but not limited to automobiles, ships, submarines, airplanes, and other aircraft, where automobiles include vehicles at the six driving automation levels L0-L5 defined by SAE International (Society of Automotive Engineers International) or the Chinese national standard "Taxonomy of Driving Automation for Vehicles", hereinafter referred to as autonomous driving vehicles, ADV (Auto-Driving Vehicle).

The "autonomous driving vehicle ADV" referred to in this application may be a vehicle or robot with any of the following functions:

(1) passenger transport, such as family cars and buses;

(2) cargo transport, such as ordinary trucks, vans, drop-and-pull trailers, enclosed trucks, tank trucks, flatbed trucks, container trucks, dump trucks, and trucks with special structures;

(3) tool functions, such as logistics and delivery vehicles, automated guided vehicles (AGV), patrol cars, cranes, hoists, excavators, bulldozers, forklifts, road rollers, loaders, off-road engineering vehicles, armored engineering vehicles, sewage treatment vehicles, sanitation vehicles, vacuum trucks, floor scrubbers, sprinklers, sweeping robots, food delivery robots, shopping guide robots, lawn mowers, golf carts, etc.;

(4) entertainment functions, such as recreational vehicles, amusement park automatic driving devices, and balance scooters;

(5) special rescue functions, such as fire trucks, ambulances, electric power repair vehicles, and engineering rescue vehicles.

Fig. 1 is a flowchart of an automatic sensor calibration method provided by an embodiment of the present invention, which includes the following steps:

S11: acquire laser point cloud data collected by the laser device;

S12: acquire image data collected by the camera;

S13: according to the driving state of the mobile apparatus, identify a scene target related to the driving state in the laser point cloud data to obtain a first coordinate of the scene target in the laser point cloud coordinate system;

S14: according to the driving state of the mobile apparatus, identify the scene target related to the driving state in the image data to obtain a second coordinate of the scene target in the image coordinate system;

S15: reproject the first coordinate into a third coordinate in the image coordinate system;

S16: match the second coordinate with the third coordinate in the image coordinate system to obtain a matching result;

S17: calibrate the extrinsic parameters between the laser device and the camera based on the matching result.
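Steps S11-S17 can be summarized as a small pipeline. The sketch below is illustrative only: the detection, reprojection, and optimization stages are passed in as callables, since the patent does not fix particular implementations, and all function names are assumptions rather than identifiers from the patent.

```python
import numpy as np

def calibrate_extrinsics(lidar_points, image, drive_state,
                         detect_in_cloud, detect_in_image,
                         project, optimize):
    """Illustrative pipeline for S11-S17. The four callables stand in
    for the point cloud detector (S13), image detector (S14),
    reprojection (S15), and extrinsic optimization (S17)."""
    # S13: first coordinates -- scene targets in the lidar frame
    first = detect_in_cloud(lidar_points, drive_state)
    # S14: second coordinates -- the same targets in the image frame
    second = detect_in_image(image, drive_state)
    # S15: reproject the lidar coordinates into the image plane
    third = project(first)
    # S16: match second and third coordinates in the image plane
    # (here: a simple nearest-pair gate with an assumed pixel radius)
    matches = [(p, q) for p, q in zip(second, third)
               if np.linalg.norm(np.asarray(p) - np.asarray(q)) < 20.0]
    # S17: calibrate the extrinsics from the matching result
    return optimize(matches)
```

A caller would supply the concrete detectors (e.g. a cnn-seg-style point cloud network and a YOLO-style image detector, as the description suggests below) and an optimizer that refines the extrinsic parameters from the matched pairs.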

The laser device in this application can be, for example, a lidar.

To improve the general applicability of the technical solution of this application, the scene targets in the embodiments of the present invention can be set to targets that are easy to observe around the vehicle; for example, dynamic targets include pedestrians and vehicles, while static targets include lane lines and traffic signs.

For steps S11 and S12, the laser point cloud data are collected by the laser device mounted on the vehicle, and the image data are collected by the camera.

In the embodiments of the present invention, a correspondence between driving states and scene targets is preset. For example, the driving states of the vehicle include driving, starting, and stopping; the scene targets corresponding to driving are dynamic targets such as pedestrians and/or vehicles, while the scene targets corresponding to starting and stopping are static targets such as lane lines and/or traffic signs.
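The preset correspondence between driving states and scene targets described above can be sketched as a simple lookup table. The state names and target labels here are illustrative assumptions, not identifiers from the patent:

```python
# Hypothetical mapping from driving state to the scene targets used
# for calibration, following the example in the text: dynamic targets
# while driving, static targets when starting or stopped.
SCENE_TARGETS = {
    "driving":  ["pedestrian", "vehicle"],       # dynamic targets
    "starting": ["lane_line", "traffic_sign"],   # static targets
    "stopped":  ["lane_line", "traffic_sign"],
}

def targets_for(state):
    """Return the scene target labels preset for a driving state."""
    return SCENE_TARGETS.get(state, [])
```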

In step S13, scene targets related to the driving state are identified in the laser point cloud data. In one embodiment, a target detection model can be used, for example the cnn-seg segmentation network, which performs target detection by semantic segmentation with an additional regression layer; the center-offset and semantic segmentation results are then aggregated into per-target detections, yielding the laser point cloud data of each scene target. The point cloud coordinates of a scene target can be obtained with a point-cloud-based segmentation detection network: the ground is removed from the laser point cloud data, the foreground point cloud is extracted and clustered according to specific clustering features, and the target information of each scene target is obtained from the clustered features, including the bounding box, center point, and height. From the grid clustering information and grid features, the target category is determined; the target's height, length, width, orientation, center point, and other features are then obtained and mapped into the corresponding coordinate system, giving the first coordinate of the scene target in the laser point cloud coordinate system.
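As a minimal sketch of the ground-removal and bounding-box step above: a real system would use plane fitting and a learned segmentation network such as cnn-seg, whereas this illustration simply thresholds on height (the `ground_z` constant is an assumption) and boxes the remaining points.

```python
import numpy as np

def extract_foreground_box(points, ground_z=-1.5):
    """Drop (approximately planar) ground returns by a height threshold
    and return the axis-aligned bounding box and center point of the
    remaining foreground points -- a toy stand-in for the segmentation
    and clustering stage described in the text."""
    pts = np.asarray(points, dtype=float)       # (N, 3) x, y, z
    fg = pts[pts[:, 2] > ground_z]              # remove ground returns
    lo, hi = fg.min(axis=0), fg.max(axis=0)     # bounding box corners
    center = (lo + hi) / 2.0                    # target center point
    return lo, hi, center
```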

For step S14, the scene target is identified in the image collected by the camera, which can be done with a preset target detection model; optionally, a YOLOv5 visual detection model can be used to obtain the second coordinate of the scene target in the image coordinate system. The target image coordinates can be obtained from the image with an image-based detection and segmentation model, an end-to-end learning network whose first N layers extract features and whose last K layers classify the extracted features, producing the target information of the scene target, including the detection box, orientation, and center point.

For step S15, the point cloud coordinates are three-dimensional coordinates in the lidar coordinate system, while the image coordinates are two-dimensional coordinates in the camera coordinate system, so the two cannot be compared directly. The point cloud coordinates must first be projected into the camera coordinate system and converted into image coordinates; that is, the first coordinate from step S13 is reprojected into a third coordinate in the image coordinate system. Only then can the third coordinate be compared with the second coordinate in the image coordinate system.
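The reprojection in step S15 is a rigid transform followed by a perspective projection. The sketch below assumes a standard pinhole model with extrinsics given as a rotation matrix R and translation t from the lidar frame to the camera frame and an intrinsic matrix K; the patent does not fix a particular parameterization, and the function name is illustrative.

```python
import numpy as np

def reproject_to_image(points_lidar, K, R, t):
    """Project 3-D lidar points into pixel coordinates: first map the
    points from the lidar frame to the camera frame with (R, t), then
    apply the pinhole projection with intrinsic matrix K."""
    P = np.asarray(points_lidar, dtype=float)   # (N, 3)
    cam = (R @ P.T).T + t                       # lidar -> camera frame
    uvw = (K @ cam.T).T                         # homogeneous pixel coords
    return uvw[:, :2] / uvw[:, 2:3]             # normalize by depth
```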

For step S16, the third coordinate converted from the point cloud data is matched with the second coordinate from the image data to obtain a matching result. The coordinates of the scene targets are matched differently depending on the driving state.

As an embodiment, when the driving state is the stopped state after starting, the scene target is a predetermined static target. Static targets include lane lines and/or traffic signs.

When the driving state is the stationary state after starting, matching the second coordinate with the third coordinate in the image coordinate system to obtain a matching result includes:

after reprojecting the first coordinate into the third coordinate in the image coordinate system, determining the point cloud detection box of the static target based on the third coordinate;

determining the visual detection box of the static target based on the second coordinate;

when the intersection over union of the point cloud detection box and the visual detection box is greater than a set ratio, determining the relative pose between the laser device and the camera from the reprojection relationship between the laser point cloud coordinate system and the image coordinate system, as the matching result.

This embodiment takes into account the breadth of applications and real road scenes. When the vehicle starts, a preliminary parameter estimation is performed first; this calibration requires collecting the lidar reflectivity of a scene in which the driving state is the stationary state after starting.

In this case, the selection of scene targets favors the ground region and the overhead region; in a real environment, for example, lane lines and traffic signs can be chosen. Under normal conditions, a road sign's laser reflectivity is close to the maximum reflectivity, so, as shown in Fig. 2, the lidar can screen out signs by their reflectivity. In addition, the reflectivity of ground lane lines differs markedly from other non-lane-line areas of the road. Using the lane line and overhead sign features, the lane lines and overhead signs can therefore be separated from the point cloud.
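The reflectivity-and-height separation of overhead signs and ground lane lines described above can be sketched as a threshold filter. All thresholds below are illustrative assumptions; real lidar intensity scales and sign mounting heights vary by sensor and scene.

```python
import numpy as np

def split_signs_and_lanes(points, intensity,
                          sign_min_intensity=0.9,   # signs reflect near max
                          lane_min_intensity=0.6,   # lane paint vs. asphalt
                          sign_min_height=2.0):     # overhead vs. ground [m]
    """Separate overhead traffic signs and ground lane lines from a
    point cloud by reflectivity (intensity) and height, following the
    description in the text. Thresholds are assumed constants."""
    pts = np.asarray(points, dtype=float)
    inten = np.asarray(intensity, dtype=float)
    signs = pts[(inten >= sign_min_intensity) & (pts[:, 2] >= sign_min_height)]
    lanes = pts[(inten >= lane_min_intensity) & (pts[:, 2] < sign_min_height)]
    return signs, lanes
```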

In the visual image, using a trained sign detection model, the detection box of an unoccluded sign can easily be obtained, as shown in Fig. 3. Here a YOLOv3 traffic sign detection network model can be used to detect the traffic signs in the upper half of the image with depth separation, giving the positions of the signs in the image. (Figs. 2 and 3 are schematic; they illustrate that the features are obtained from different laser reflectivities, while the real images are distinguished by grayscale differences.)

Using the reprojection relationship between the laser point cloud and the signs in the image, the point cloud undergoes a spatial coordinate transformation to roughly obtain the rotational position relationship that projects the point cloud signs onto the image; a matching projection example is shown in Fig. 4. The signs and lane lines are extracted from both the image and the point cloud to achieve a preliminary match and to initialize the parameter iteration.

Specifically, the signs segmented from the point cloud are transformed with the projection matrix to obtain the projection of the point cloud onto the image. The IoU (Intersection over Union) between the point-cloud detection boxes and the visual detection boxes is computed to find the position with the largest overlap ratio. If the cumulative overlap ratio cannot reach a preset threshold P0 (for example 75%; those skilled in the art may set the threshold flexibly according to actual needs, and this application does not strictly limit it), the preliminary registration is considered inadequate, and neither a parameter update nor fine parameter optimization is performed.
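The IoU gate between a projected point-cloud box and a visual detection box can be sketched as below, with boxes as (x1, y1, x2, y2); the 0.75 threshold mirrors the example value P0 above and is a tunable assumption:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

P0 = 0.75  # example threshold, as in the text; tune per deployment

def registration_acceptable(cloud_box, vision_box):
    """True when the overlap is good enough to proceed to fine optimization."""
    return iou(cloud_box, vision_box) >= P0
```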

Given the sparsity of the point cloud, rotational ambiguity may remain during detection-box matching, so ground information, such as the lane lines on the road, is added for the preliminary parameter optimization. From the displayed point cloud, the background points, i.e., the ground information, are extracted, and the lane-line portion is then extracted using the reflectivity features of the ground; this completes the initialization match between the laser device and the vision and achieves a preliminary calibration of the laser device's calibration parameters. This embodiment shows that, for the stationary state after the vehicle starts, a calibration method initialized from laser point-cloud reflectivity is realized on top of the initial calibration parameters; it applies to many current multi-line autonomous-driving lidars and therefore has broader applicability.

As another implementation, when the driving state is motion, the scene targets are predetermined dynamic targets. The dynamic targets include pedestrians and/or vehicles.

When the driving state is motion, matching the second coordinates with the third coordinates in the image coordinate system to obtain a matching result includes:

after reprojecting the first coordinates into third coordinates in the image coordinate system, performing multi-target-model matching between a first number of dynamic targets in the third coordinates and a second number of dynamic targets in the second coordinates;

performing perspective-n-point projection on the matched results to determine the relative pose between the laser device and the camera as the matching result.

In this implementation, as noted among the defects of the prior art, after an autonomous vehicle has been driving for some time, unavoidable shaking and similar disturbances cause slight changes in the relative position of the camera and the laser. Once the relative position of the camera and the lidar has changed after a period of driving, with the driving state being motion, the laser device and the camera need to be calibrated with high precision in this scenario.

In this case, the dynamic targets of choice are first of all pedestrians and vehicles, because they are the most readily available while driving. As before, the point-cloud coordinates are three-dimensional coordinates in the lidar coordinate system while the image coordinates are two-dimensional coordinates in the camera coordinate system, so the two cannot be compared directly; reprojection is still needed to project the first coordinates in the point-cloud coordinate system into the third coordinates in the image coordinate system. When the difference between the converted third coordinates (the converted point-cloud coordinates) and the second coordinates (the image coordinates) is greater than or equal to a set threshold, the relative position of the lidar and the camera has changed.

Matching is performed on the targets obtained from image detection and from point-cloud segmentation detection; for example, gradient iteration can be used to obtain the matched projection image, such as the pedestrians and vehicles shown in Fig. 5. The optimization targets of the overall matching include the second coordinates (x_i, y_i) and the third coordinates (m_k, n_k) of the i-th scene target. For the multi-target model, the following objective function is obtained from the detected scene targets:

min Σ ((x_i − m_k)² + (y_i − n_k)²),  i = 0, 1, 2, 3, …, s;  k = 0, 1, 2, 3, …, s.
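Assuming the image detections and the projected point-cloud detections have already been put into matched order (the correspondence step is not shown), the summand above translates directly into code:

```python
def matching_cost(image_pts, projected_pts):
    """Sum of squared pixel distances between matched image-detection
    centers (x_i, y_i) and projected point-cloud centers (m_k, n_k)."""
    return sum((x - m) ** 2 + (y - n) ** 2
               for (x, y), (m, n) in zip(image_pts, projected_pts))
```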

This objective function can be solved with PnP (perspective-n-point), which yields the extrinsic parameters between the camera and the lidar (i.e., the rotation-translation matrix, comprising the rotation matrix and the translation matrix).
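In practice this step is typically handed to a PnP solver such as OpenCV's `cv2.solvePnP`. As a self-contained illustration of the same idea, the toy sketch below refines only the translation by gradient descent on the reprojection error, holding the rotation at identity; a real solver optimizes all six pose parameters, and the step size and iteration count here are illustrative:

```python
import numpy as np

def refine_translation(pts3d, pts2d, K, t0, lr=1e-4, iters=3000):
    """Toy refinement of the camera translation only (rotation held at
    identity), minimizing the same reprojection error that a full PnP
    solver minimizes over all six pose parameters. Gradients are
    numerical to keep the sketch short."""
    t = np.array(t0, dtype=float)

    def cost(t):
        cam = pts3d + t                     # rotation omitted: R = I
        uv = cam @ K.T
        uv = uv[:, :2] / uv[:, 2:3]
        return np.sum((uv - pts2d) ** 2)    # squared pixel error

    eps = 1e-6
    for _ in range(iters):
        grad = np.zeros(3)
        for j in range(3):                  # central finite differences
            d = np.zeros(3)
            d[j] = eps
            grad[j] = (cost(t + d) - cost(t - d)) / (2 * eps)
        t -= lr * grad
    return t
```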

For step S17, the matching results of the above steps are used to optimize the objective function and calibrate the extrinsic parameters between the laser device and the camera.

In some application scenarios, the vehicle is initially stationary and is in motion once started. To further improve the accuracy of the extrinsic calibration between the laser device and the camera, as shown in Fig. 6, scene targets (static targets such as lane lines and/or signs) are determined when the vehicle starts; the matching static targets (e.g., lane lines, signs) are recognized in the images captured by the camera and in the laser point-cloud data captured by the laser device; and whether extrinsic calibration between the camera and the laser device is needed is decided from the matching result between the second and third coordinates corresponding to the static targets, achieving a preliminary parameter estimate (i.e., completing the first calibration). While the vehicle is moving, the scene targets are determined to be dynamic targets (e.g., pedestrians and/or vehicles); the matching dynamic targets are recognized in the camera images and in the laser point-cloud data; and the extrinsic parameters between the camera and the laser device are calibrated from the matching result between the second and third coordinates corresponding to the dynamic targets, achieving the second calibration.
In terms of concrete implementation, as shown in Fig. 6, the scene targets of scene 1 (overhead signs, lane lines), with the driving state stationary, are used for a preliminary match between image and point cloud: by rotating the point cloud and projecting it forward, the parameter search is completed and the initial position parameters are determined. On the basis of the initialized parameters, the data of scene 2, whose driving state is motion, are modeled; scene 2 relies on the pedestrian and vehicle information ubiquitous on the road. Pedestrians and vehicles are extracted by a deep network to obtain center-point features, multi-target PnP matching is then performed, and iteration finally yields suitable extrinsic parameters, i.e., the rotation matrix and the offset matrix. In the figure, Min F is the matching result, obtained with the above objective function, between the pedestrians/vehicles from image detection and the vehicles and pedestrians from the laser point cloud (after conversion to the image coordinate system); T0 is a preset threshold for comparison, similar in intent to the P0 exemplified above, and is not described again here.

This embodiment shows that each time the vehicle starts, a calibration-parameter check is launched. When the checked parameters meet the expected threshold, that is, when the deviation between the camera targets (x_i, y_i) and the point-cloud targets (m_i, n_i) is less than the set threshold, it can be concluded that no large shake has occurred in the laser device, the camera's external inclination, and so on. When the deviation is too large or the position does not meet the requirements, parameter self-checking is achieved by judging the difference between the target point-cloud coordinates and the target image coordinates. For parameters invalidated by shaking during autonomous driving, the system automatically collects data and updates the parameters in a closed-loop flow. This optimizes the approach to algorithmic calibration, broadens usability, does not rely on traditional calibration scenes, and enables more accurate automated extrinsic calibration between the laser device and the camera.

After the laser point-cloud calibration and optimization, the method also takes into account breadth of application and the more special situations on actual roads, and can be adjusted accordingly, for example:

Because vision does not divide road targets unambiguously in use, for some scene targets it is unclear whether they are foreground targets, and the corresponding features (such as ground patterns) cannot be obtained clearly; the laser point-cloud data can therefore be used to judge plainly whether a target is real.

Obstacles encountered in actual operation are hard to assign unambiguously to a specific labeled category by the target-detection model, yet in autonomous-driving detection such targets must trigger operations such as obstacle avoidance. Therefore, by extracting point clouds of indeterminate type from the laser point cloud, the real existence of the scene target is confirmed and safety is guaranteed.

In scenes with weak vision, such as at night, the laser point cloud is more reliable; lowering the weight given to the visual targets from the image can therefore improve night-driving performance to a certain extent.

Likewise, after the image-calibration optimization, the laser point cloud will contain many abnormal points in abnormal weather such as rain. With the help of visual-image filtering, these can be judged to be in-lane noise and filtered out, ensuring that abnormal obstacles are removed, avoiding erroneous operations such as emergency braking, and improving the driving experience.

According to the different situations, screening of different abnormal targets in laser and vision is achieved, providing approaches for elimination and screening, facilitating obstacle extraction, and thus improving the optimization effect.

Fig. 7 is a schematic structural diagram of a sensor automatic calibration device provided by an embodiment of the present invention. The device can execute the sensor automatic calibration method described in any of the above embodiments and is configured in a terminal.

A sensor automatic calibration device 10 provided by this embodiment includes: a point cloud acquisition module 11, an image acquisition module 12, a first coordinate determination module 13, a second coordinate determination module 14, a third coordinate determination module 15, a matching module 16, and a calibration module 17.

The point cloud acquisition module 11 acquires the laser point-cloud data collected by the laser device; the image acquisition module 12 acquires the image data collected by the camera; the first coordinate determination module 13 recognizes, according to the driving state of the mobile device, the scene targets in the laser point-cloud data that are related to the driving state, obtaining the first coordinates of the scene targets in the laser point-cloud coordinate system; the second coordinate determination module 14 recognizes, according to the driving state of the mobile device, the scene targets in the image data that are related to the driving state, obtaining the second coordinates of the scene targets in the image coordinate system; the third coordinate determination module 15 reprojects the first coordinates into third coordinates in the image coordinate system; the matching module 16 matches the second coordinates with the third coordinates in the image coordinate system to obtain a matching result; and the calibration module 17 calibrates the extrinsic parameters between the laser device and the camera on the basis of the matching result.

Preferably, when the driving state is the stationary state after starting, the scene targets are predetermined static targets.

Preferably, when the driving state is the stationary state after starting, the matching module 16 matches the second coordinates with the third coordinates in the image coordinate system, and obtaining a matching result includes: after reprojecting the first coordinates into third coordinates in the image coordinate system, determining the point-cloud detection box of the static target on the basis of the third coordinates; determining the visual detection box of the static target on the basis of the second coordinates; and, when the IoU of the point-cloud detection box and the visual detection box is greater than a set ratio, determining the relative pose between the laser device and the camera, on the basis of the reprojection relationship between the laser point-cloud coordinate system and the image coordinate system, as the matching result.

Preferably, the static targets include lane lines and/or signs.

Preferably, when the driving state is motion, the scene targets are predetermined dynamic targets.

Preferably, when the driving state is motion, the matching module 16 matches the second coordinates with the third coordinates in the image coordinate system, and obtaining a matching result includes: after reprojecting the first coordinates into third coordinates in the image coordinate system, performing multi-target-model matching between a first number of dynamic targets in the third coordinates and a second number of dynamic targets in the second coordinates; and performing perspective-n-point projection on the matched results to determine the relative pose between the laser device and the camera as the matching result.

Preferably, the dynamic targets include pedestrians and/or vehicles.

Preferably, the laser device and the camera are triggered to collect laser point-cloud data and image data in response to the startup of the mobile device.

An embodiment of the present invention further provides a non-volatile computer storage medium storing computer-executable instructions that can execute the sensor automatic calibration method in any of the above method embodiments.

As an implementation, the non-volatile computer storage medium of the present invention stores computer-executable instructions, the computer-executable instructions being configured to:

acquire laser point-cloud data collected by a laser device;

acquire image data collected by a camera;

according to the driving state of the mobile device, recognize the scene targets in the laser point-cloud data that are related to the driving state, obtaining first coordinates of the scene targets in a laser point-cloud coordinate system;

according to the driving state of the mobile device, recognize the scene targets in the image data that are related to the driving state, obtaining second coordinates of the scene targets in an image coordinate system;

reproject the first coordinates into third coordinates in the image coordinate system;

match the second coordinates with the third coordinates in the image coordinate system to obtain a matching result;

calibrate the extrinsic parameters between the laser device and the camera on the basis of the matching result.

As a non-volatile computer-readable storage medium, it can store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the methods in the embodiments of the present invention. One or more program instructions are stored in the non-volatile computer-readable storage medium and, when executed by a processor, execute the sensor automatic calibration method in any of the above method embodiments.

An embodiment of the present invention further provides an electronic device, including: at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor so that the at least one processor can execute the sensor automatic calibration method.

In some embodiments, an embodiment of the present invention further provides a mobile device, including a body and the electronic device according to any of the foregoing embodiments mounted on the body. The mobile device may be an unmanned vehicle, such as an unmanned sweeper, an unmanned floor scrubber, an unmanned logistics vehicle, an unmanned passenger car, an unmanned sanitation vehicle, an unmanned minibus/bus, a truck, a mine cart, or the like, and may also be a robot or the like.

In some embodiments, an embodiment of the present invention further provides a computer program product which, when run on a computer, causes the computer to execute the sensor automatic calibration method described in any of the embodiments of the present invention.

Fig. 8 is a schematic diagram of the hardware structure of an electronic device for the sensor automatic calibration method provided by another embodiment of the present application. As shown in Fig. 8, the device includes:

one or more processors 810 and a memory 820; one processor 810 is taken as an example in Fig. 8. The device for the sensor automatic calibration method may further include an input device 830 and an output device 840.

The processor 810, the memory 820, the input device 830, and the output device 840 may be connected by a bus or in other ways; connection by a bus is taken as an example in Fig. 8.

The memory 820, as a non-volatile computer-readable storage medium, can store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the sensor automatic calibration method in the embodiments of the present application. By running the non-volatile software programs, instructions, and modules stored in the memory 820, the processor 810 executes the various functional applications and data processing of the server, i.e., implements the sensor automatic calibration method of the above method embodiments.

The memory 820 may include a program storage area and a data storage area, wherein the program storage area can store the operating system and the application programs required by at least one function, and the data storage area can store data and the like. In addition, the memory 820 may include high-speed random-access memory and may also include non-volatile memory, such as at least one magnetic-disk storage device, flash-memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 820 may optionally include memory located remotely from the processor 810, and such remote memory may be connected to the mobile device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.

The input device 830 can receive input numeric or character information. The output device 840 may include a display device such as a display screen.

The one or more modules are stored in the memory 820 and, when executed by the one or more processors 810, execute the sensor automatic calibration method in any of the above method embodiments.

The above product can execute the method provided by the embodiments of the present application and has the functional modules and beneficial effects corresponding to executing the method. For technical details not described exhaustively in this embodiment, reference may be made to the methods provided in the embodiments of the present application.

The non-volatile computer-readable storage medium may include a program storage area and a data storage area, wherein the program storage area can store the operating system and the application programs required by at least one function, and the data storage area can store data created according to the use of the device, and the like. In addition, the non-volatile computer-readable storage medium may include high-speed random-access memory and may also include non-volatile memory, such as at least one magnetic-disk storage device, flash-memory device, or other non-volatile solid-state storage device. In some embodiments, the non-volatile computer-readable storage medium may optionally include memory located remotely from the processor, and such remote memory may be connected to the device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.

An embodiment of the present invention further provides an electronic device, including: at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor so that the at least one processor can execute the steps of the sensor automatic calibration method of any embodiment of the present invention.

The electronic devices of the embodiments of the present application exist in various forms, including but not limited to:

(1) Mobile communication devices: such devices feature mobile communication functions and take providing voice and data communication as their main goal. Such terminals include smart phones, multimedia phones, feature phones, low-end phones, and the like.

(2) Ultra-mobile personal computer devices: such devices belong to the category of personal computers, have computing and processing functions, and generally also have mobile Internet access. Such terminals include PDA, MID, and UMPC devices, for example tablet computers.

(3) Portable entertainment devices: such devices can display and play multimedia content. Such devices include audio and video players, handheld game consoles, e-book readers, smart toys, and portable in-vehicle navigation devices.

(4) Other mobile devices with data-processing functions.

As used herein, the terms "comprise" and "include" cover not only the listed elements but also other elements not expressly listed, as well as elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.

The device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without creative effort.

From the description of the above embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by means of software plus a necessary general hardware platform, and of course also by hardware. Based on this understanding, the above technical solutions, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the various embodiments or certain parts of the embodiments.

Finally, it should be noted that the above embodiments are merely intended to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (13)

1. A sensor automatic calibration method applied to a mobile device, the sensor comprising a laser device and a camera, the method comprising:
acquiring laser point-cloud data collected by the laser device;
acquiring image data collected by the camera;
according to the driving state of the mobile device, recognizing the scene targets in the laser point-cloud data that are related to the driving state, obtaining first coordinates of the scene targets in a laser point-cloud coordinate system;
according to the driving state of the mobile device, recognizing the scene targets in the image data that are related to the driving state, obtaining second coordinates of the scene targets in an image coordinate system;
reprojecting the first coordinates into third coordinates in the image coordinate system;
matching the second coordinates with the third coordinates in the image coordinate system to obtain a matching result;
calibrating extrinsic parameters between the laser device and the camera on the basis of the matching result.
2. The method according to claim 1, wherein, when the driving state is a stationary state after starting, the scene targets are predetermined static targets.
3. The method according to claim 2, wherein, when the driving state is the stationary state after starting, matching the second coordinates with the third coordinates in the image coordinate system to obtain a matching result comprises:
The matching results obtained include: 在将所述第一坐标重投影为图像坐标系下的第三坐标后,基于所述第三坐标确定所述静态目标的点云检测框;After reprojecting the first coordinate to a third coordinate in the image coordinate system, determining a point cloud detection frame of the static target based on the third coordinate; 基于所述第二坐标确定所述静态目标的视觉检测框;determining a visual detection frame of the static target based on the second coordinates; 当所述点云检测框和所述视觉检测框的交并比大于设定比值时,基于所述激光点云坐标系和所述图像坐标系之间的重投影关系,确定所述激光设备和所述相机之间的相对位姿,作为匹配结果。When the intersection ratio of the point cloud detection frame and the visual detection frame is greater than a set ratio, determine the laser device and the image coordinate system based on the reprojection relationship between the laser point cloud coordinate system and the image coordinate system. The relative pose between the cameras, as the matching result. 4.根据权利要求2所述的方法,其特征在于,所述静态目标包括车道线和/或标识牌。4. The method of claim 2, wherein the static target comprises lane markings and/or signage. 5.根据权利要求1-4中任一项所述的方法,其特征在于,当所述行驶状态为运动时,所述场景目标为预定的动态目标。5 . The method according to claim 1 , wherein when the driving state is motion, the scene target is a predetermined dynamic target. 6 . 6.根据权利要求5所述的方法,其特征在于,当所述行驶状态为运动时,对所述第二坐标与所述第三坐标在所述图像坐标系下进行匹配,得到匹配结果包括:6 . The method according to claim 5 , wherein when the driving state is motion, matching the second coordinate and the third coordinate in the image coordinate system, and obtaining a matching result comprises the following steps: 6 . : 在将所述第一坐标重投影为图像坐标系下的第三坐标后,对所述第三坐标中的第一数量的动态目标与所述第二坐标中的第二数量的动态目标进行多目标模型的匹配;After the first coordinate is reprojected to the third coordinate in the image coordinate system, the first number of dynamic objects in the third coordinate and the second number of dynamic objects in the second coordinate are multiplied. 
target model matching; 对匹配的结果进行多点透视成像投影,确定所述激光设备和所述相机之间的相对位姿,作为匹配结果。Multi-point perspective imaging projection is performed on the matching result, and the relative pose between the laser device and the camera is determined as the matching result. 7.根据权利要求5所述的方法,其特征在于,所述动态目标包括:行人和/或车辆。7. The method according to claim 5, wherein the dynamic target comprises: pedestrians and/or vehicles. 8.根据权利要求1所述的方法,其特征在于,所述激光设备和所述相机响应于所述移动装置的启动而触发采集激光点云数据和图像数据。8 . The method of claim 1 , wherein the laser device and the camera are triggered to collect laser point cloud data and image data in response to activation of the mobile device. 9 . 9.一种传感器自动标定装置,其特征在于,包括:9. A sensor automatic calibration device, characterized in that, comprising: 点云获取模块,用于获取激光设备采集的激光点云数据;Point cloud acquisition module, used to acquire laser point cloud data collected by laser equipment; 图像获取模块,用于获取相机采集的图像数据;The image acquisition module is used to acquire the image data collected by the camera; 第一坐标确定模块,用于根据所述移动装置的行驶状态,对所述激光点云数据中与所述行驶状态相关的场景目标进行识别,得到所述场景目标在激光点云坐标系下的第一坐标;The first coordinate determination module is used to identify the scene target related to the driving state in the laser point cloud data according to the driving state of the mobile device, and obtain the scene target in the laser point cloud coordinate system. first coordinate; 第二坐标确定模块,用于根据所述移动装置的行驶状态,对所述图像数据中与所述行驶状态相关的所述场景目标进行识别,得到所述场景目标在图像坐标系下的第二坐标;The second coordinate determination module is configured to identify the scene object related to the driving state in the image data according to the driving state of the mobile device, and obtain the second coordinate of the scene object in the image coordinate system. 
coordinate; 第三坐标确定模块,用于将所述第一坐标重投影为图像坐标系下的第三坐标;A third coordinate determination module, configured to reproject the first coordinate to a third coordinate in the image coordinate system; 匹配模块,用于对所述第二坐标与所述第三坐标在所述图像坐标系下进行匹配,得到匹配结果;a matching module, configured to match the second coordinate and the third coordinate in the image coordinate system to obtain a matching result; 标定模块,用于基于所述匹配结果,对所述激光设备和所述相机之间的外参进行标定。A calibration module, configured to calibrate an external parameter between the laser device and the camera based on the matching result. 10.一种电子设备,其特征在于,包括:至少一个处理器,以及与所述至少一个处理器通信连接的存储器,其中,所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行权利要求1-8中任一项所述的传感器自动标定方法。10. An electronic device, comprising: at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, The instructions are executed by the at least one processor to enable the at least one processor to perform the automatic sensor calibration method of any of claims 1-8. 11.一种移动装置,其特征在于,包括本体和所述本体上安装的根据权利要求10所述的电子设备。11. A mobile device, characterized by comprising a body and the electronic device according to claim 10 mounted on the body. 12.一种存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现权利要求1-8中任一项所述的传感器自动标定方法。12. A storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the automatic sensor calibration method according to any one of claims 1-8 is implemented. 13.一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1-8中任一项所述的传感器自动标定方法。13. A computer program product, characterized in that, when the computer program product runs on a computer, the computer is caused to execute the automatic sensor calibration method according to any one of claims 1-8.
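Claims 1 and 3 describe reprojecting the lidar "first coordinates" into the image plane and gating the match on the intersection-over-union (IoU) of the point cloud detection frame and the visual detection frame. The sketch below is one illustrative reading of those two steps only, not the patent's implementation; the intrinsic matrix, extrinsics, detection boxes, and the 0.5 threshold are all hypothetical values chosen for the example.

```python
# Illustrative sketch of the claim-1 reprojection and the claim-3 IoU gate.
# All numeric values (K, R, t, boxes, threshold) are hypothetical.
import numpy as np

def reproject(points_lidar, R, t, K):
    """Map Nx3 lidar-frame points to Nx2 pixel coordinates ('third coordinates')."""
    cam = (R @ points_lidar.T).T + t          # lidar frame -> camera frame
    uvw = (K @ cam.T).T                       # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]           # perspective divide

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes (claim 3's gating metric)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Toy setup: identity extrinsics and a simple pinhole intrinsic matrix.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[1.0, 0.5, 5.0], [-1.0, -0.5, 5.0]])  # lidar points on one target
pix = reproject(pts, R, t, K)

# Bounding box of the projected points vs. a hypothetical visual detection box.
cloud_box = (pix[:, 0].min(), pix[:, 1].min(), pix[:, 0].max(), pix[:, 1].max())
vision_box = (215.0, 185.0, 425.0, 295.0)
match_ok = iou(cloud_box, vision_box) > 0.5   # claim 3's 'set ratio' threshold
```

A full pipeline would go on to recover the relative pose from such accepted matches, for example via claim 6's multi-point perspective projection over matched dynamic targets.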
CN202210455494.7A 2022-04-27 2022-04-27 Sensor automatic calibration method, electronic device and storage medium Pending CN114862964A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210455494.7A CN114862964A (en) 2022-04-27 2022-04-27 Sensor automatic calibration method, electronic device and storage medium


Publications (1)

Publication Number Publication Date
CN114862964A true CN114862964A (en) 2022-08-05

Family

ID=82632993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210455494.7A Pending CN114862964A (en) 2022-04-27 2022-04-27 Sensor automatic calibration method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN114862964A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115345944A (en) * 2022-08-10 2022-11-15 腾讯科技(深圳)有限公司 Method and device for determining external parameter calibration parameters, computer equipment and storage medium
CN115457088A (en) * 2022-10-31 2022-12-09 成都盛锴科技有限公司 Method and system for fixing axle of train
CN117152266A (en) * 2023-08-14 2023-12-01 武汉光庭信息技术股份有限公司 A joint calibration method and system for vehicle-mounted lidar and camera
CN118038098A (en) * 2024-04-11 2024-05-14 腾讯科技(深圳)有限公司 Image processing method, device, equipment, storage medium and program product

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109270534A (en) * 2018-05-07 2019-01-25 西安交通大学 A kind of intelligent vehicle laser sensor and camera online calibration method
CN109343061A (en) * 2018-09-19 2019-02-15 百度在线网络技术(北京)有限公司 Transducer calibration method, device, computer equipment, medium and vehicle
CN111127563A (en) * 2019-12-18 2020-05-08 北京万集科技股份有限公司 Joint calibration method, device, electronic device and storage medium
CN112837383A (en) * 2021-03-01 2021-05-25 东南大学 Camera and lidar recalibration method, device and computer readable storage medium
US20210192788A1 (en) * 2019-12-18 2021-06-24 Motional Ad Llc Camera-to-lidar calibration and validation
CN114076937A (en) * 2020-08-20 2022-02-22 北京万集科技股份有限公司 Method and device for joint calibration of lidar and camera, server, and computer-readable storage medium
CN114076936A (en) * 2020-08-20 2022-02-22 北京万集科技股份有限公司 Precision evaluation method and device of combined calibration parameters, server and computer readable storage medium
CN114241057A (en) * 2021-11-30 2022-03-25 深圳元戎启行科技有限公司 External reference calibration method and system for camera and laser radar and readable storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JORGE BELTRÁN et al.: "Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups", IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 11 April 2022 (2022-04-11) *


Similar Documents

Publication Publication Date Title
CN114862964A (en) Sensor automatic calibration method, electronic device and storage medium
WO2021004077A1 (en) Method and apparatus for detecting blind areas of vehicle
CN113261274B (en) An image processing method and related terminal device
CN110471058A (en) The system and method detected automatically for trailer attribute
CN115235500B (en) Pose correction method and device based on lane line constraints, full working condition static environment modeling method and device
US11645832B1 (en) Sensor fusion for precipitation detection and control of vehicles
US20220073104A1 (en) Traffic accident management device and traffic accident management method
CN114494444A (en) Obstacle dynamic and static state estimation method, electronic device and storage medium
WO2018149539A1 (en) A method and apparatus for estimating a range of a moving object
US20230394682A1 (en) Object tracking device and object tracking method
Parra-Tsunekawa et al. A kalman-filtering-based approach for improving terrain mapping in off-road autonomous vehicles
CN116309693A (en) Object motion state detection method, device, mobile device and storage medium
Jiménez et al. Improving the lane reference detection for autonomous road vehicle control
Kim et al. Safety evaluation of autonomous vehicles for a comparative study of camera image distance information and dynamic characteristics measuring equipment
CN115601392A (en) Obstacle motion state detection method, device, mobile device and storage medium
CN114913329A (en) Image processing method, semantic segmentation network training method and device
CN116901936A (en) Automatic parking method, device, equipment and medium
WO2024036984A1 (en) Target localization method and related system, and storage medium
CN114972499A (en) Charging pile detection method and system
CN117671008B (en) Position and posture estimation method, readable storage medium and intelligent device
CN117809145A (en) Fusion method, device, mobile device and storage medium of sensor semantic information
US20240059255A1 (en) Baffle assembly
US20240223915A1 (en) Systems and methods for downsampling images
CN115797903A (en) Blind zone memory method, device, mobile device and storage medium
EP4413534A1 (en) Removing non-relevant points of a point cloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination