CN111652261A - A Multimodal Perception Fusion System - Google Patents
Info
- Publication number
- CN111652261A (application CN202010120330.XA)
- Authority
- CN
- China
- Prior art keywords
- fusion system
- camera
- cameras
- modal
- imu
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Traffic Control Systems (AREA)
- Radar Systems Or Details Thereof (AREA)
Abstract
The present invention provides a multimodal perception fusion system for full scenes. The multimodal perception fusion system comprises a host computer, a lidar, a multi-view camera, an IMU, infrared depth cameras, and a power supply; the multi-view camera consists of two FLIR industrial Ethernet cameras and two USB 3.0 cameras. The system is assembled in three steps: hardware installation, software installation and data acquisition, and model construction. The invention iteratively optimizes the extrinsic parameters with the Marquardt (Levenberg-Marquardt) algorithm from system identification to obtain an optimal estimate, and thereby the most accurate model and rendering, making the fusion more precise and enabling real-time perception of the environment. The multimodal perception fusion system is also compact and lightweight, so it can be mounted on unmanned ground vehicles and UAVs, used in the medical industry and for modeling unmanned military environments, and applied in complex indoor and outdoor environments, laying a foundation for planning and navigation.
Description
Technical Field
The invention belongs to the field of multimodal perception fusion systems, and in particular relates to a multimodal perception fusion system for full scenes.
Background
With the rapid development of sensor technology and the Internet, big data of various modalities is emerging at an unprecedented pace. For a thing to be described (a target, a scene, etc.), the coupled data samples collected through different methods or from different perspectives constitute multimodal data; each such method or perspective is usually called a modality.
Multimodal information in the narrow sense concerns modalities with different perceptual characteristics, whereas multimodal fusion in the broad sense also covers multi-feature fusion within a single modality and data fusion across multiple sensors of the same type. Multimodal perception and learning is therefore closely related to "multi-source fusion" and "multi-sensor fusion" in signal processing, and to "multi-view learning" or "multi-view fusion" in machine learning. Multimodal data yields more comprehensive and accurate information and enhances the reliability and fault tolerance of a system.
In multimodal perception and learning, different modalities have completely different forms of description and complex coupling correspondences, so the problems of multimodal perceptual representation and cognitive fusion must be solved in a unified way. Multimodal perception and fusion applies suitable transformations or projections so that two seemingly unrelated data samples in different formats can be compared and fused with each other; such fusion of heterogeneous data often achieves unexpectedly good results.
At present, multimodal data already plays a major role in Internet information search, human-computer interaction, industrial fault diagnosis, and robotics. Multimodal learning between vision and language is where research results on multimodal fusion are currently concentrated, while robotics still faces many challenging problems that require further exploration. We therefore developed a multimodal perception system combining multi-view vision, laser, binocular infrared, depth, and IMU modalities, with the hardware mounted in different orientations, to automatically perceive, scan, and model both large scenes and small workpieces. It can perceive full scenes, indoors and outdoors, and augments the RGB image of the environment with depth and range information. The main difficulties lie in the heterogeneous multi-source sensors, feature extraction, and solving for the correlations between features, so that the fusion is more accurate and the environment can be perceived in real time.
Summary of the Invention
To solve the above technical problems, the present invention provides a multimodal perception fusion system for full scenes, to automatically perceive, scan, and model large scenes and small workpieces. The multimodal perception fusion system comprises a host computer, a lidar, a multi-view camera, an IMU, infrared depth cameras, and a power supply; the multi-view camera comprises two FLIR industrial Ethernet cameras and two USB 3.0 cameras. The system is assembled in the following steps:
S1: Hardware installation: connect the lidar to the host computer via an Ethernet interface; connect the two FLIR industrial Ethernet cameras to the host computer via Ethernet interfaces; connect the two USB 3.0 cameras, the IMU, and the infrared depth cameras to the host computer's USB 3.0 ports; after all parts are connected, connect them to the power supply through data cables;
S2: Software installation and data acquisition: boot the Linux Ubuntu system, install and configure the drivers and software for each module, start a node for each modality with the Robot Operating System, and display the acquired data in RViz: the lidar point cloud, the RGB images from the multi-view camera, the accelerometer and gyroscope readings from the IMU, and the depth maps from the infrared depth cameras (a monitoring sketch is given after step S3);
S3: Model construction: the acquired data is then processed within the SLAM framework. The pipeline has two stages, a front end and a back end: the front end performs feature extraction for each module and represents the correlations between features; the back end performs parameter optimization, 3-D reconstruction, and localization, iteratively optimizing the extrinsic parameters with the Marquardt (Levenberg-Marquardt) algorithm from system identification to obtain the optimal estimate and, from it, the final fused model and rendering (see the second sketch below).
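The data-acquisition side of step S2 can be sketched in a few lines of rospy. This is a minimal monitoring node, not the patent's software: the topic names (/lslidar_point_cloud, /flir_left/image_raw, /camera/color/image_raw, /camera/imu) are assumptions standing in for whatever the lidar, FLIR, and RealSense drivers actually publish.

```python
# Minimal ROS monitor for step S2 (assumed topic names; RViz visualizes the
# same topics through its display panels).
import rospy
from sensor_msgs.msg import Image, Imu, PointCloud2

def make_logger(name):
    # Report each stream's latest timestamp at most once every 5 seconds.
    def callback(msg):
        rospy.loginfo_throttle(5.0, "%s alive, stamp %s", name, msg.header.stamp)
    return callback

if __name__ == "__main__":
    rospy.init_node("multimodal_monitor")
    rospy.Subscriber("/lslidar_point_cloud", PointCloud2, make_logger("lidar"))
    rospy.Subscriber("/flir_left/image_raw", Image, make_logger("FLIR left"))
    rospy.Subscriber("/camera/color/image_raw", Image, make_logger("D435i RGB"))
    rospy.Subscriber("/camera/imu", Imu, make_logger("IMU"))
    rospy.spin()
```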
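The back-end refinement in step S3 can likewise be sketched with SciPy, whose "lm" solver implements the Levenberg-Marquardt method. Everything beyond that is an illustrative assumption: the axis-angle-plus-translation parameterization, the pinhole reprojection residual, and the availability of matched lidar-camera point pairs are not specified by the patent.

```python
# Sketch of extrinsic refinement with Levenberg-Marquardt (step S3 back end).
# pts_lidar: (N, 3) points in the lidar frame; pix_obs: (N, 2) matched pixel
# observations in one camera; K: 3x3 camera intrinsics; x: 6-vector extrinsics
# (axis-angle rotation, then translation) -- all assumed inputs.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(x, pts_lidar, pix_obs, K):
    R = Rotation.from_rotvec(x[:3]).as_matrix()  # lidar -> camera rotation
    t = x[3:]                                    # lidar -> camera translation
    pc = pts_lidar @ R.T + t                     # points in the camera frame
    uv = pc @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                  # pinhole projection
    return (uv - pix_obs).ravel()                # stacked reprojection errors

def refine_extrinsics(x0, pts_lidar, pix_obs, K):
    sol = least_squares(residuals, x0, args=(pts_lidar, pix_obs, K),
                        method="lm")             # Levenberg-Marquardt
    return sol.x                                 # refined extrinsic estimate
```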
Preferably, the operating system used by the multimodal perception fusion system is Linux Ubuntu, the middleware is the Robot Operating System, and the programming languages are C++ and Python.
Preferably, the lidar is a Leishen Intelligent (镭神智能) C16-151B.
Preferably, there are two infrared depth cameras, and the infrared depth cameras and the IMU are Intel RealSense D435i units.
Preferably, the lidar projects to the ground at a distance of 10 m, leaving a cone-shaped blind zone below it; the infrared depth cameras, with a working distance of 0.2-10 m, compensate for this blind zone that the lidar cannot reach.
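The size of that cone-shaped blind zone follows from elementary geometry; a worked sketch, where the mount height h and the lowest beam angle theta below horizontal are assumed example values not given in the patent:

```python
# Ground blind-zone radius of a lidar whose lowest beam points theta below
# horizontal from height h: the beam first reaches ground at r = h/tan(theta).
import math

h = 1.0                      # mount height above ground in metres (assumed)
theta = math.radians(15.0)   # lowest beam angle below horizontal (assumed)
r = h / math.tan(theta)
print(f"ground blind-zone radius ~ {r:.2f} m")  # ~3.73 m for these values
# A depth camera with a 0.2-10 m working range can cover this near-field gap.
```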
Preferably, the lidar, the multi-view camera, the IMU, and the infrared depth cameras each have independent sensors.
Compared with the prior art, the beneficial effects of the present invention are: heterogeneous sensors are autonomously combined and quickly calibrated; the collected information is matched and fused in three-dimensional space; a surface-patch model is generated from the point cloud and then iteratively optimized, finally yielding a 3-D reconstruction that reaches the required accuracy, and hence the most accurate model and rendering. This makes the fusion more precise, enables real-time perception of the environment, and provides accurate technical data for subsequent recognition and detection. The multimodal perception fusion system is also compact and lightweight, so it can be mounted on unmanned ground vehicles and UAVs, used in the medical industry and for modeling unmanned military environments, and applied in complex indoor and outdoor environments, laying a foundation for planning and navigation.
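The point-cloud-to-patch-model step can be illustrated with an off-the-shelf surfacing routine. The patent names no specific algorithm, so Poisson surface reconstruction via Open3D is an assumed stand-in, and the file names are placeholders:

```python
# Assumed illustration of surfacing a fused point cloud into a patch (mesh)
# model; Poisson reconstruction is one common choice, not the patent's method.
import open3d as o3d

pcd = o3d.io.read_point_cloud("fused_scene.pcd")   # placeholder input file
pcd.estimate_normals()                             # Poisson requires normals
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)                                  # higher depth = finer mesh
o3d.io.write_triangle_mesh("scene_mesh.ply", mesh)
```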
Brief Description of the Drawings
Figure 1 is an external view of the full-scene multimodal perception fusion system.
Figure 2 is the structure diagram of the full-scene multimodal system.
Figure 3 is a diagram of the installation steps of the full-scene multimodal perception fusion system.
In the figures: 1 - lidar; 2 - first FLIR industrial Ethernet camera; 3 - first USB 3.0 camera; 4 - first infrared depth camera; 5 - second infrared depth camera; 6 - second FLIR industrial Ethernet camera; 7 - second USB 3.0 camera; 8 - multi-view camera; 9 - IMU.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
The present invention is further described below:
Embodiment:
As shown in Figure 1, a multimodal perception fusion system for full scenes: the operating system used by the multimodal perception fusion system is Linux Ubuntu, the middleware is the Robot Operating System, and the programming languages are C++ and Python. The multimodal perception fusion system comprises a host computer, a lidar 1, a first FLIR industrial Ethernet camera 2, a second FLIR industrial Ethernet camera 6, a first USB 3.0 camera 3, a second USB 3.0 camera 7, a multi-view camera 8, an IMU 9, a first infrared depth camera 4, a second infrared depth camera 5, and a power supply.
Specifically, as shown in Figure 3, the steps of forming the multimodal perception fusion system are:
S1: Hardware installation: connect the lidar to the host computer via an Ethernet interface; connect the first FLIR industrial Ethernet camera 2 and the second FLIR industrial Ethernet camera 6 to the host computer via Ethernet interfaces; connect the first USB 3.0 camera 3, the second USB 3.0 camera 7, the IMU 9, the first infrared depth camera 4, and the second infrared depth camera 5 to the host computer's USB 3.0 ports; after all parts are connected, connect them to the power supply through data cables;
S2: Software installation and data acquisition: boot the Linux Ubuntu system, install and configure the drivers and software for each module, start a node for each modality with the Robot Operating System, and display the acquired data in RViz: the point cloud of the lidar 1; the RGB images of the multi-view camera 8, the first FLIR industrial Ethernet camera 2, the second FLIR industrial Ethernet camera 6, the first USB 3.0 camera 3, and the second USB 3.0 camera 7; the accelerometer and gyroscope readings of the IMU 9; and the depth maps of the first infrared depth camera 4 and the second infrared depth camera 5;
S3: Model construction: the acquired data is then processed within the SLAM framework. The pipeline has two stages, a front end and a back end: the front end performs feature extraction for each module and represents the correlations between features; the back end performs parameter optimization, 3-D reconstruction, and localization, iteratively optimizing the extrinsic parameters with the Marquardt (Levenberg-Marquardt) algorithm from system identification to obtain the optimal estimate and, finally, the accurately fused model and rendering.
Specifically, the lidar 1 is a Leishen Intelligent C16-151B.
Specifically, the first infrared depth camera 4, the second infrared depth camera 5, and the IMU 9 are all Intel RealSense D435i units.
Specifically, the lidar 1 projects to the ground at a distance of 10 m, leaving a cone-shaped blind zone below it; the working distance of the first infrared depth camera 4 and the second infrared depth camera 5 is 0.2-10 m, which compensates for the blind zone that the lidar 1 cannot reach.
Specifically, the lidar 1, the first FLIR industrial Ethernet camera 2, the second FLIR industrial Ethernet camera 6, the first USB 3.0 camera 3, the second USB 3.0 camera 7, the multi-view camera 8, the IMU 9, the first infrared depth camera 4, and the second infrared depth camera 5 each have independent sensors.
Figure 2 shows the graph-structure representation of the multimodal system: vertices represent the sensors (lidar, cameras, IMU, etc.), and edges represent the derivation of the relative pose transformation between sensors.
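That graph maps directly onto a small data structure; a minimal sketch, assuming extrinsics are stored as 4x4 homogeneous transforms (the patent prescribes no particular representation). Composing edge transforms along a path yields the relative pose between any two sensors:

```python
# Pose graph of Figure 2: vertices are sensors, edges hold the relative pose
# (4x4 homogeneous transform mapping points from src frame to dst frame).
import numpy as np

class SensorGraph:
    def __init__(self):
        self.edges = {}                            # (src, dst) -> 4x4 transform

    def add_edge(self, src, dst, T):
        self.edges[(src, dst)] = T
        self.edges[(dst, src)] = np.linalg.inv(T)  # store the inverse too

    def transform(self, path):
        """Compose transforms along a list of sensor names."""
        T = np.eye(4)
        for a, b in zip(path, path[1:]):
            T = self.edges[(a, b)] @ T             # left-multiply each hop
        return T

g = SensorGraph()
g.add_edge("lidar", "imu", np.eye(4))              # placeholder extrinsics
g.add_edge("imu", "camera_left", np.eye(4))
T_lidar_to_cam = g.transform(["lidar", "imu", "camera_left"])
```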
The workflow is shown in Figure 3.
It should be noted that, in this document, the terms "comprising", "including", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements, but also other elements not expressly listed, or elements inherent to such a process, method, article, or device.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principle and spirit of the invention; the scope of the present invention is defined by the appended claims and their equivalents.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010120330.XA | 2020-02-26 | 2020-02-26 | A Multimodal Perception Fusion System |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010120330.XA | 2020-02-26 | 2020-02-26 | A Multimodal Perception Fusion System |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111652261A | 2020-09-11 |
Family
ID=72346093
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010120330.XA (pending) | A Multimodal Perception Fusion System | 2020-02-26 | 2020-02-26 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111652261A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113327289A (en) * | 2021-05-18 | 2021-08-31 | 中山方显科技有限公司 | Method for simultaneously calibrating internal and external parameters of multi-source heterogeneous sensor |
CN117451030A (en) * | 2023-10-25 | 2024-01-26 | 哈尔滨工业大学 | Multi-mode fusion SLAM method based on scene self-adaption |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107036594A (en) * | 2017-05-07 | 2017-08-11 | 郑州大学 | The positioning of intelligent Power Station inspection intelligent body and many granularity environment perception technologies |
CN107085422A (en) * | 2017-01-04 | 2017-08-22 | 北京航空航天大学 | A remote control system for a multifunctional hexapod robot based on Xtion equipment |
CN107390703A (en) * | 2017-09-12 | 2017-11-24 | 北京创享高科科技有限公司 | A kind of intelligent blind-guidance robot and its blind-guiding method |
US20170371329A1 (en) * | 2014-12-19 | 2017-12-28 | United Technologies Corporation | Multi-modal sensor data fusion for perception systems |
CN108700939A (en) * | 2016-02-05 | 2018-10-23 | 奇跃公司 | System and method for augmented reality |
CN108846867A (en) * | 2018-08-29 | 2018-11-20 | 安徽云能天智能科技有限责任公司 | A kind of SLAM system based on more mesh panorama inertial navigations |
CN109828658A (en) * | 2018-12-17 | 2019-05-31 | 彭晓东 | A kind of man-machine co-melting long-range situation intelligent perception system |
CN110174136A (en) * | 2019-05-07 | 2019-08-27 | 武汉大学 | A kind of underground piping intelligent measurement robot and intelligent detecting method |
CN110261870A (en) * | 2019-04-15 | 2019-09-20 | 浙江工业大学 | It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method |
CN110321000A (en) * | 2019-04-25 | 2019-10-11 | 南开大学 | A kind of dummy emulation system towards intelligence system complex task |
US20190339081A1 (en) * | 2018-05-03 | 2019-11-07 | Orby, Inc. | Unmanned aerial vehicle with enclosed propulsion system for 3-d data gathering and processing |
CN110427022A (en) * | 2019-07-08 | 2019-11-08 | 武汉科技大学 | A kind of hidden fire-fighting danger detection robot and detection method based on deep learning |
- 2020-02-26: Application CN202010120330.XA filed in China; published as CN111652261A (status: pending).
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170371329A1 (en) * | 2014-12-19 | 2017-12-28 | United Technologies Corporation | Multi-modal sensor data fusion for perception systems |
CN108700939A (en) * | 2016-02-05 | 2018-10-23 | 奇跃公司 | System and method for augmented reality |
CN107085422A (en) * | 2017-01-04 | 2017-08-22 | 北京航空航天大学 | A remote control system for a multifunctional hexapod robot based on Xtion equipment |
CN107036594A (en) * | 2017-05-07 | 2017-08-11 | 郑州大学 | The positioning of intelligent Power Station inspection intelligent body and many granularity environment perception technologies |
CN107390703A (en) * | 2017-09-12 | 2017-11-24 | 北京创享高科科技有限公司 | A kind of intelligent blind-guidance robot and its blind-guiding method |
US20190339081A1 (en) * | 2018-05-03 | 2019-11-07 | Orby, Inc. | Unmanned aerial vehicle with enclosed propulsion system for 3-d data gathering and processing |
CN108846867A (en) * | 2018-08-29 | 2018-11-20 | 安徽云能天智能科技有限责任公司 | A kind of SLAM system based on more mesh panorama inertial navigations |
CN109828658A (en) * | 2018-12-17 | 2019-05-31 | 彭晓东 | A kind of man-machine co-melting long-range situation intelligent perception system |
CN110261870A (en) * | 2019-04-15 | 2019-09-20 | 浙江工业大学 | It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method |
CN110321000A (en) * | 2019-04-25 | 2019-10-11 | 南开大学 | A kind of dummy emulation system towards intelligence system complex task |
CN110174136A (en) * | 2019-05-07 | 2019-08-27 | 武汉大学 | A kind of underground piping intelligent measurement robot and intelligent detecting method |
CN110427022A (en) * | 2019-07-08 | 2019-11-08 | 武汉科技大学 | A kind of hidden fire-fighting danger detection robot and detection method based on deep learning |
Non-Patent Citations (2)
Title |
---|
He Shouyin, "Research on Autonomous Obstacle Avoidance of UAVs Based on Multi-Sensor Fusion", China Doctoral Dissertations Full-text Database, Engineering Science & Technology II *
Chen Mengxiao, "Mobile Robot Localization and Mapping Based on Multi-Sensor Data", China Master's Theses Full-text Database, Information Science & Technology *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113327289A (en) * | 2021-05-18 | 2021-08-31 | 中山方显科技有限公司 | Method for simultaneously calibrating internal and external parameters of multi-source heterogeneous sensor |
CN117451030A (en) * | 2023-10-25 | 2024-01-26 | 哈尔滨工业大学 | Multi-mode fusion SLAM method based on scene self-adaption |
CN117451030B (en) * | 2023-10-25 | 2024-06-14 | 哈尔滨工业大学 | Multi-mode fusion SLAM method based on scene self-adaption |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113111887B (en) | Semantic segmentation method and system based on information fusion of camera and laser radar | |
CN112894832B (en) | Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium | |
CN111258313B (en) | Multi-sensor fusion SLAM system and robot | |
US12094226B2 (en) | Simultaneous localization and mapping method, device, system and storage medium | |
CN110070615B (en) | Multi-camera cooperation-based panoramic vision SLAM method | |
JP6745328B2 (en) | Method and apparatus for recovering point cloud data | |
US20200302241A1 (en) | Techniques for training machine learning | |
Moghadam et al. | Line-based extrinsic calibration of range and image sensors | |
CN113570715B (en) | Rotating laser real-time positioning modeling system and method based on sensor fusion | |
CN110097553A (en) | The semanteme for building figure and three-dimensional semantic segmentation based on instant positioning builds drawing system | |
US20210383096A1 (en) | Techniques for training machine learning | |
CN108803591B (en) | A map generation method and robot | |
CN114913290A (en) | Multi-view-angle fusion scene reconstruction method, perception network training method and device | |
US12194634B2 (en) | Error detection method and robot system based on a plurality of pose identifications | |
CN113720324A (en) | Octree map construction method and system | |
CN110675436A (en) | Laser radar and stereoscopic vision registration method based on 3D feature points | |
GB2572025A (en) | Urban environment labelling | |
CN115272452A (en) | A target detection and positioning method, device, unmanned aerial vehicle and storage medium | |
WO2023056789A1 (en) | Obstacle identification method and system for automatic driving of agricultural machine, device, and storage medium | |
CN111652261A (en) | A Multimodal Perception Fusion System | |
CN117213515A (en) | Visual SLAM path planning method and device, electronic equipment and storage medium | |
US20230219220A1 (en) | Error detection method and robot system based on association identification | |
CN118031976B (en) | A human-machine collaborative system for exploring unknown environments | |
Muharom et al. | Real-Time 3D Modeling and Visualization Based on RGB-D Camera using RTAB-Map through Loop Closure | |
CN116883652A (en) | A drivable area segmentation method, device, readable storage medium and robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 2020-09-11 |