CN113408625B - Multi-source heterogeneous data single-frame fusion and consistent characterization method applied to unmanned system
- Publication number: CN113408625B (application CN202110690846.2A)
- Authority: CN (China)
- Prior art keywords: data, fusion, source, dimension, heterogeneous
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/10—Pre-processing; Data cleansing
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
Technical Field
The present invention relates to the technical field of sensor data fusion, and in particular to a single-frame fusion and consistent representation method for multi-source heterogeneous data applied to unmanned systems.
Background
An unmanned system uses the heterogeneous data produced by many different sensors, including but not limited to 2D image data, 3D point cloud data, inertial navigation data, astronomical data, temperature data, and mechanical data, to perform data fusion and build a consistent representation of the environment; this makes maximum use of the data generated by the different sensors and enables comprehensive perception and description of the environment. Traditional methods rely mainly on feature-level and decision-level fusion, and few methods perform data-level fusion across large amounts of heterogeneous data, especially single-frame data-level fusion, which can provide unmanned systems with more comprehensive and higher-quality perceptual information.
Summary of the Invention
The purpose of the present invention is to provide a single-frame fusion and consistent representation method for multi-source heterogeneous data applied to unmanned systems, which fuses the heterogeneous data of different sensors, produces a consistent representation of the environment, and solves the problem that multi-source heterogeneous data are difficult to fuse at the data level.
To achieve the above objective, the technical solution of the present invention is as follows:
A single-frame fusion and consistent representation method for multi-source heterogeneous data applied to an unmanned system, the method comprising the following steps:
Step 1: Data acquisition and preprocessing.
Collect data at the same instant with M multi-source sensors and preprocess the resulting M data blocks so that every data block carries a description of its three-dimensional spatial position attribute, and all of these position attributes share the same three-dimensional coordinate system.
Step 2: Fusion of data of the same type.
Classify the M preprocessed data blocks by sensor type, group data blocks whose sensors share the same operating principle or characterize the same physical property as data of the same type, and perform data-level fusion within each type to obtain N fused data groups, where N is the number of heterogeneous data types.
Step 3: Dimension alignment of the fused data groups.
Perform dimension statistics on the N fused data groups and construct an X-dimensional data expression model, where X is the smallest value that covers all the distinct dimensions of the fused data groups; map the N fused data groups into this data expression model and output N dimension-aligned data groups.
Step 4: Heterogeneous fusion and consistent representation.
Analyze the data expression model to obtain the redundancy relationships and dimension transformation relationships among the X dimensions, select the final data form to be output according to the actual task, build a consistent-representation data space with the corresponding dimensions, transform the N dimension-aligned data groups into this space according to the dimension transformation relationships, and merge them through confidence discrimination into consistent representation data for output.
Further, the M multi-source sensors in Step 1 are M sensors that provide data sources, and the multi-source heterogeneous data types they collect include 2D image data, 3D point cloud data, inertial navigation data, astronomical data, temperature data, and mechanical data.
Further, the preprocessing in Step 1 includes any one or more of smoothing and denoising, missing-value handling, data normalization, and color correction; for data sources that do not include three-dimensional position information, the three-dimensional spatial position of the collected data is additionally estimated from the sensor mounting position and the sensor parameter model and output as a fixed attribute of the data.
Further, the data-level fusion in Step 2 is the merging of the preprocessed data, including redundancy elimination and data normalization.
Further, the data expression model in Step 3 is a data expression space that contains the attribute dimensions of all the data to be fused, i.e., the union of the original attribute dimensions of all the data to be fused.
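As a toy sketch of this union of attribute dimensions, assuming illustrative dimension names taken from the embodiment described further below:

```python
def build_expression_model(groups):
    """Union of the attribute dimensions of all fused data groups (Step 3).

    groups: dict mapping group name -> list of attribute-dimension names.
    Returns the ordered list of X dimension names covering every group.
    """
    model = []
    for dims in groups.values():
        for d in dims:
            if d not in model:          # X is the smallest covering set
                model.append(d)
    return model

groups = {"point cloud": ["3dx", "3dy", "3dz"],
          "image": ["2dx", "2dy", "R", "G", "B"],
          "thermal": ["2dx", "2dy", "I"]}
model = build_expression_model(groups)
print(len(model), model)   # X = 9 dimensions for this toy example
```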
Further, the dimension transformation relationship in Step 4 is the data conversion method between the original data set and the new data set when the dimensions of the original data are changed; the conversion may change the form and description of the original data but should preserve the data information as completely as possible, and it includes any one of grid generation, principal component analysis, factor analysis, linear combination, and clustering.
Further, the confidence discrimination in Step 4 is the strategy for handling conflicts that arise when multi-source heterogeneous data, after being converted to the same data description, collide at the same point of the data space; the confidence parameters include the prior priority of each data source, the error statistics of the original data collected during preprocessing, the data density, and the data coherence, and the conflicting values are finally merged by a weighted computation over these confidence parameters, as sketched below.
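The exact weighting formula is not fixed by the description above, so the following is only a minimal sketch of one possible instantiation: each source gets a scalar confidence built from hypothetical priority, error, density, and coherence parameters, and the conflicting values are merged by a normalized weighted average.

```python
import numpy as np

def merge_conflicting_values(values, priority, error_std, density, coherence):
    """Merge conflicting observations of one data-space point from K sources.

    values    : (K,) candidate values reported for the same point
    priority  : (K,) prior priority of each data source (larger = more trusted)
    error_std : (K,) error statistics of each source from preprocessing
    density   : (K,) local data density of each source around the point
    coherence : (K,) coherence of each source with its own neighbourhood
    """
    values = np.asarray(values, dtype=float)
    # One possible confidence score: high priority, density and coherence
    # raise the weight, large measurement error lowers it.
    weights = priority * density * coherence / (error_std ** 2 + 1e-9)
    weights = weights / weights.sum()            # normalize to a convex combination
    return float(np.dot(weights, values))

# Example: three sensors disagree about the temperature attribute of one voxel.
merged = merge_conflicting_values(
    values=[21.5, 23.0, 22.1],
    priority=np.array([1.0, 0.5, 0.8]),
    error_std=np.array([0.2, 1.0, 0.5]),
    density=np.array([0.9, 0.6, 0.8]),
    coherence=np.array([0.95, 0.7, 0.9]),
)
print(round(merged, 2))   # the low-error, high-priority source dominates
```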
The beneficial effects of the present invention are as follows:
Through data acquisition and preprocessing, fusion of data of the same type, dimension alignment of the fused data groups, and heterogeneous fusion with consistent representation, the present invention overcomes the fusion difficulties caused by the differing meanings and dimensions of multi-source heterogeneous data, and achieves data-level fusion and consistent representation among single-frame heterogeneous data.
Brief Description of the Drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of the single-frame fusion and consistent representation method for multi-source heterogeneous data provided by the present invention.
Fig. 2 is a block flow diagram of a single-frame fusion and consistent representation method for multi-source heterogeneous data according to an exemplary embodiment.
Detailed Description of the Embodiments
Embodiments of the present invention are described in detail below with reference to the accompanying drawings, in which examples of the embodiments are shown. The embodiments described with reference to the drawings are exemplary and intended to explain the present invention; they should not be construed as limiting it. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative effort fall within the scope of protection of this application.
As shown in Fig. 1, the single-frame fusion and consistent representation method for multi-source heterogeneous data applied to an unmanned system of the present invention is implemented by the following steps.
Step 1: Data acquisition and preprocessing. Collect data at the same instant with M multi-source sensors and preprocess the resulting M data blocks so that every data block carries a description of its three-dimensional spatial position attribute, and all of these position attributes share the same three-dimensional coordinate system.
Further, with reference to the embodiment shown in Fig. 2, this is achieved through the following sub-steps.
S101: The M multi-source sensors are fixedly mounted on the unmanned system and share the same three-dimensional coordinate system. Data acquisition from the M sensors combines hardware triggering with software synchronization, so that the time consistency of all data reaches the millisecond level.
Specifically, the M multi-source sensors are M sensors that provide data sources; their sensor types may be the same or different, and the multi-source heterogeneous data they collect include but are not limited to 2D image data, 3D point cloud data, inertial navigation data, astronomical data, temperature data, and mechanical data. A lower-level MCU synchronously generates multiple PWM pulse signals to trigger the corresponding sensors, and each sensor starts acquiring data on the rising or falling edge of its PWM signal. At the same time, software compensation corrects the time delay of the sensor data in the signal transmission link, so that the host computer obtains one frame of multi-source heterogeneous data captured at the same instant and carrying the same timestamp.
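A minimal host-side sketch of the software-synchronization part, assuming hypothetical per-sensor link delays measured offline: each arrival timestamp is shifted back by its link delay, and samples whose compensated timestamps fall inside the same millisecond-level window are grouped into one frame.

```python
from collections import defaultdict

# Assumed per-sensor transmission-link delays in seconds (illustrative values).
LINK_DELAY = {"lidar": 0.004, "camera": 0.012, "imu": 0.001}

def assemble_frames(samples, window=0.001):
    """Group sensor samples into single frames after delay compensation.

    samples : iterable of (sensor_name, arrival_time_s, payload)
    window  : frame tolerance in seconds (millisecond-level consistency)
    Returns a dict {frame_timestamp: {sensor_name: payload}}.
    """
    frames = defaultdict(dict)
    for sensor, arrival_time, payload in samples:
        # Compensate the link delay so the timestamp refers to the actual
        # acquisition instant (the PWM trigger edge).
        acquisition_time = arrival_time - LINK_DELAY.get(sensor, 0.0)
        # Quantize to the frame window so simultaneous samples share one key.
        frame_key = round(acquisition_time / window) * window
        frames[frame_key][sensor] = payload
    return dict(frames)

# Example: three samples triggered at t = 10.000 s arrive with different delays
# but end up under the same frame timestamp.
print(assemble_frames([
    ("lidar", 10.004, "point cloud block"),
    ("camera", 10.012, "image block"),
    ("imu", 10.001, "inertial block"),
]))
```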
S102: Complete the preprocessing of the multi-source heterogeneous data.
Specifically, the preprocessing includes but is not limited to smoothing and denoising, missing-value handling, data normalization, and color correction of the collected multi-source heterogeneous data. In addition, for data sources that do not themselves contain three-dimensional position information, the three-dimensional spatial position of the collected data is estimated from the sensor mounting position and the sensor parameter model, and this position information is output as a fixed attribute of the data.
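As a sketch of this preprocessing, assuming placeholder mounting extrinsics and a scalar sensor with no position information of its own: the mounting rotation and translation give the fixed 3-D position attribute in the shared frame, and a moving average stands in for smoothing and denoising.

```python
import numpy as np

def moving_average(signal, k=3):
    """Simple smoothing / denoising of a 1-D measurement series."""
    return np.convolve(signal, np.ones(k) / k, mode="same")

def sensor_point_to_body_frame(p_sensor, R_mount, t_mount):
    """Map a point from a sensor's own frame into the shared body frame
    using the known mounting rotation and translation."""
    return R_mount @ np.asarray(p_sensor, dtype=float) + t_mount

# Placeholder extrinsics: the sensor sits 0.5 m forward and 0.2 m above the
# body-frame origin, with no rotation relative to the body frame.
R_mount, t_mount = np.eye(3), np.array([0.5, 0.0, 0.2])

# A thermometer measures at its own origin, so its fixed 3-D position
# attribute in the shared frame is the mounting position itself.
position_attribute = sensor_point_to_body_frame([0.0, 0.0, 0.0], R_mount, t_mount)

raw_temps = np.array([21.0, 21.4, 35.0, 21.2, 21.1, 20.9])   # one obvious spike
data_block = {"position": position_attribute,
              "temperature": moving_average(raw_temps)}
print(data_block)
```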
Step 2: Fusion of data of the same type.
Classify the M preprocessed data blocks by sensor type, group data blocks whose sensors share the same operating principle or characterize the same physical property as data of the same type, and perform data-level fusion within each type to obtain N fused data groups, where N is the number of heterogeneous data types.
Further, with reference to the embodiment shown in Fig. 2, this is achieved through the following sub-steps.
For the 2D images from different sensors, align the multiple images and fuse them at the pixel level through feature extraction, feature description, feature matching, association computation, and affine transformation, remove the redundant image regions, and finally output the result in the form of a 2D image.
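A minimal sketch of this 2D branch; the use of OpenCV and ORB features is an assumption for illustration, since the description only names the generic steps. Two same-size color images are assumed: ORB stands in for feature extraction and description, brute-force matching for the association computation, and a robustly estimated affine transform aligns one image onto the other before a simple pixel-level blend.

```python
import cv2
import numpy as np

def fuse_two_images(img_a, img_b):
    """Align img_b onto img_a with an affine transform estimated from
    feature matches, then fuse the overlapping pixels."""
    orb = cv2.ORB_create(1000)                        # feature extraction + description
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:200]

    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # The default robust (RANSAC) estimator rejects wrong associations.
    affine, _inliers = cv2.estimateAffinePartial2D(src, dst)

    h, w = img_a.shape[:2]
    warped_b = cv2.warpAffine(img_b, affine, (w, h))  # bring img_b into img_a's frame

    # Pixel-level fusion: average where both images contribute, keep img_a elsewhere.
    overlap = warped_b.sum(axis=-1, keepdims=True) > 0
    fused = np.where(overlap, (img_a.astype(np.float32) + warped_b) / 2.0, img_a)
    return fused.astype(np.uint8)
```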
For the 3D point cloud data from different sensors, complete the data-level fusion of the single-frame point clouds of the multiple sensors through point cloud feature extraction, point cloud registration, and point cloud fusion, remove the redundant information, and finally output the result in the form of a 3D point cloud.
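The 3D branch can be sketched in the same spirit; using Open3D and point-to-point ICP is an assumption for illustration, and voxel downsampling stands in for the redundancy removal.

```python
import numpy as np
import open3d as o3d

def fuse_point_clouds(points_a, points_b, voxel=0.02, max_dist=0.05):
    """Register points_b onto points_a and fuse them into one cloud.

    points_a, points_b : (N, 3) arrays already expressed in roughly the same frame
    voxel              : voxel size used to remove duplicated (redundant) points
    max_dist           : maximum correspondence distance for ICP
    """
    pcd_a = o3d.geometry.PointCloud()
    pcd_a.points = o3d.utility.Vector3dVector(np.asarray(points_a, dtype=np.float64))
    pcd_b = o3d.geometry.PointCloud()
    pcd_b.points = o3d.utility.Vector3dVector(np.asarray(points_b, dtype=np.float64))

    # Point cloud registration: refine the relative pose with ICP, starting from
    # the identity because both clouds are already in the shared vehicle frame.
    result = o3d.pipelines.registration.registration_icp(pcd_b, pcd_a, max_dist)
    pcd_b.transform(result.transformation)

    # Point cloud fusion + redundancy removal: concatenate both clouds, then
    # voxel-downsample so that overlapping regions are represented only once.
    fused = o3d.geometry.PointCloud()
    fused.points = o3d.utility.Vector3dVector(
        np.vstack([np.asarray(pcd_a.points), np.asarray(pcd_b.points)]))
    fused = fused.voxel_down_sample(voxel_size=voxel)
    return np.asarray(fused.points)
```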
For the perception data produced by the other sensors, such as inertial navigation data, astronomical data, temperature data, and mechanical data, perform mutual verification, merging and fusion, and redundancy elimination, and finally output the result in numerical form.
Step 3: Dimension alignment of the fused data groups.
Based on the dimension statistics of the N fused data groups, construct an X-dimensional data expression model, where X is the smallest value that covers all the distinct dimensions of the fused data groups; map the N fused data groups into this data expression model and output N dimension-aligned data groups.
Further, with reference to the embodiment shown in Fig. 2, this is achieved through the following sub-steps.
Step S301: In this embodiment the 3D point cloud data are three-dimensional (xyz); the 2D image data are five-dimensional (pixel coordinates xy plus color information RGB); the thermal infrared data are three-dimensional (pixel coordinates xy plus temperature I in degrees Celsius); the mechanical data are four-dimensional (sensor coordinates xyz plus force N in newtons); the astronomical data are two-dimensional angle values (degrees); and the inertial navigation data are six-dimensional (angular rate and acceleration in three directions). This gives 18 data dimensions in total:
[3dx, 3dy, 3dz, 2dx, 2dy, R, G, B, I, N, d1, d2, gx, gy, gz, ax, ay, az].
Step S302: The data of one multi-sensor frame are assembled into an 18-dimensional vector in which the positions for which a sensor has no valid data are left empty, so that every frame of data becomes a vector of the same length.
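A minimal sketch of step S302, with NaN used as the "empty" marker and the field order taken from the 18-dimension layout of step S301.

```python
import numpy as np

# Field order of the 18-dimensional expression model from step S301.
FIELDS = ["3dx", "3dy", "3dz", "2dx", "2dy", "R", "G", "B", "I", "N",
          "d1", "d2", "gx", "gy", "gz", "ax", "ay", "az"]

def to_frame_vector(measurements):
    """Map one frame of heterogeneous measurements onto the 18-D model.

    measurements: dict of field name -> value; fields for which the
    corresponding sensor produced no valid data are simply omitted and
    remain NaN ("empty") in the aligned vector.
    """
    vec = np.full(len(FIELDS), np.nan)
    for name, value in measurements.items():
        vec[FIELDS.index(name)] = value
    return vec

# Example: a frame that contains only image, thermal and inertial data.
frame = to_frame_vector({"2dx": 320, "2dy": 240, "R": 128, "G": 64, "B": 200,
                         "I": 36.5, "gx": 0.01, "gy": -0.02, "gz": 0.0,
                         "ax": 0.1, "ay": 0.0, "az": 9.8})
print(frame)   # 18 entries, NaN wherever no sensor provided valid data
```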
Step 4: Heterogeneous fusion and consistent representation.
Analyze the data expression model to obtain the redundancy relationships and dimension transformation relationships among the X dimensions, select the final data form to be output according to the actual task, build a consistent-representation data space with the corresponding dimensions, transform the N dimension-aligned data groups into this space according to the dimension transformation relationships, and merge them through confidence discrimination into consistent representation data.
Further, with reference to the embodiment shown in Fig. 2, this is achieved through the following sub-steps.
Step S401: Apply dimension reduction, such as principal component analysis, to the data vectors of step S302, and combine the result with external parameter information such as the mounting positions of the sensors, to obtain the redundancy relationships and dimension transformation relationships among the 18 data dimensions of step S301.
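The principal-component step might look like the following sketch, which uses scikit-learn as an assumption; empty slots are filled with column means purely so the decomposition can run, and the retained components provide one possible dimension-transformation relationship.

```python
import numpy as np
from sklearn.decomposition import PCA

def redundancy_analysis(frame_vectors, variance_to_keep=0.95):
    """Estimate redundancy among the 18 aligned dimensions with PCA.

    frame_vectors : (num_frames, 18) array of aligned vectors, NaN = empty slot
                    (each column is assumed to have at least one observation).
    Returns the fitted PCA model and the reduced-dimension scores.
    """
    X = np.asarray(frame_vectors, dtype=float)
    col_mean = np.nanmean(X, axis=0)
    X = np.where(np.isnan(X), col_mean, X)     # fill empty slots before decomposing

    pca = PCA(n_components=variance_to_keep)   # keep enough components for 95% variance
    scores = pca.fit_transform(X)
    print("retained dimensions:", pca.n_components_)
    return pca, scores

# Example with random placeholder frames just to show the call.
_pca, _scores = redundancy_analysis(np.random.rand(200, 18))
```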
Step S402: According to the output requirements of the visualization task, the 2D image data can be output directly; or the 3D point cloud data output by the preceding steps can be used as the base, subjected to mesh reconstruction, mesh correction, and similar steps, and the output 2D image data can then be textured onto the 3D point cloud to complete the output of the 3D data; or the other data can be overlaid on the above 3D data as separate rendering layers, producing the output of the multi-source heterogeneous fused data and forming a consistent representation result with the corresponding dimensions.
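The full mesh reconstruction and texturing step is beyond a short sketch, but the simpler part, attaching the fused 2D image to the 3D points as a color layer, can be illustrated as below; the pinhole intrinsics and extrinsics are placeholders, not values from the description.

```python
import numpy as np

def colorize_point_cloud(points, image, K, R, t):
    """Attach RGB from the fused 2-D image to each 3-D point by projecting the
    point into the image with a pinhole camera model.

    points : (N, 3) points in the shared vehicle frame
    image  : (H, W, 3) fused 2-D image
    K      : (3, 3) camera intrinsic matrix
    R, t   : extrinsics mapping the vehicle frame into the camera frame
    """
    cam = (R @ points.T).T + t                   # vehicle frame -> camera frame
    in_front = cam[:, 2] > 0
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                  # perspective division
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)

    h, w = image.shape[:2]
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((points.shape[0], 3))
    colors[valid] = image[v[valid], u[valid]]    # sample the image at each projection
    return np.hstack([points, colors])           # xyz + layered RGB attribute

# Placeholder camera parameters for illustration only.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])
cloud = colorize_point_cloud(np.array([[0.0, 0.0, 2.0]]),
                             np.full((480, 640, 3), 128, dtype=np.uint8), K, R, t)
print(cloud)   # the single point picks up the gray color at the image centre
```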
The embodiment described above in conjunction with the drawings is merely a preferred embodiment of the present invention and of course cannot be used to limit the scope of rights of the present invention. Those of ordinary skill in the art will understand that all or part of the processes for realizing the above example, and equivalent changes made according to the claims of the present invention, still fall within the scope covered by the present invention.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110690846.2A (CN113408625B) | 2021-06-22 | 2021-06-22 | Multi-source heterogeneous data single-frame fusion and consistent characterization method applied to unmanned system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113408625A CN113408625A (en) | 2021-09-17 |
CN113408625B true CN113408625B (en) | 2022-08-09 |
Family
ID=77682411
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113408625B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230123736A1 (en) * | 2021-10-14 | 2023-04-20 | Redzone Robotics, Inc. | Data translation and interoperability |
CN114827209B (en) * | 2022-05-07 | 2024-08-13 | 南京四维智联科技有限公司 | Data acquisition method and device, electronic equipment and storage medium |
CN116701962B (en) * | 2023-08-07 | 2023-10-27 | 北京电科智芯科技有限公司 | Edge data processing method, device, computing equipment and storage medium |
CN117171534B (en) * | 2023-11-03 | 2024-03-19 | 济南二机床集团有限公司 | Multi-source heterogeneous data acquisition method, system, device and medium for numerical control machine tool |
CN117216722B (en) * | 2023-11-09 | 2024-02-27 | 山东农业大学 | Sensor time sequence data-based multi-source heterogeneous data fusion system |
CN117760460B (en) * | 2023-12-13 | 2025-02-18 | 华中光电技术研究所(中国船舶集团有限公司第七一七研究所) | A method for selecting source of multi-type position information of laser inertial navigation display and control device |
CN118656596B (en) * | 2024-08-16 | 2024-12-10 | 北京迅奥科技有限公司 | A scene data fusion analysis management system and method based on multi-source heterogeneity |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8290741B2 (en) * | 2010-01-13 | 2012-10-16 | Raytheon Company | Fusing multi-sensor data sets according to relative geometrical relationships |
CN104010370B (en) * | 2014-04-28 | 2019-07-09 | 北京邮电大学 | Heterogeneous system fused controlling method and device |
US11874676B2 (en) * | 2019-11-22 | 2024-01-16 | JAR Scientific, LLC | Cooperative unmanned autonomous aerial vehicles for power grid inspection and management |
- 2021-06-22: Application CN202110690846.2A filed in China; subsequently granted as CN113408625B (status: active)
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102945585A (en) * | 2012-11-21 | 2013-02-27 | 苏州两江科技有限公司 | Method for raising fire alarm through multi-sensor data fusion |
CN105893612A (en) * | 2016-04-26 | 2016-08-24 | 中国科学院信息工程研究所 | Consistency expression method for multi-source heterogeneous big data |
CN109166149A (en) * | 2018-08-13 | 2019-01-08 | 武汉大学 | A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU |
CN110873879A (en) * | 2018-08-30 | 2020-03-10 | 沈阳航空航天大学 | Device and method for deep fusion of characteristics of multi-source heterogeneous sensor |
CN111046245A (en) * | 2019-12-11 | 2020-04-21 | 杭州趣链科技有限公司 | Multi-source heterogeneous data source fusion calculation method, system, equipment and storage medium |
CN111753024A (en) * | 2020-06-24 | 2020-10-09 | 河北工程大学 | A multi-source heterogeneous data entity alignment method for public security |
CN111950627A (en) * | 2020-08-11 | 2020-11-17 | 重庆大学 | A multi-source information fusion method and its application |
CN112634451A (en) * | 2021-01-11 | 2021-04-09 | 福州大学 | Outdoor large-scene three-dimensional mapping method integrating multiple sensors |
Non-Patent Citations (2)
Title |
---|
L. Zhang et al., "Multi-source heterogeneous data fusion," 2018 International Conference on Artificial Intelligence and Big Data, 2018-06-28, pp. 47-51. * |
Duan Qingling et al., "Data fusion method for livestock and poultry breeding Internet of Things based on an improved support degree function," Transactions of the Chinese Society of Agricultural Engineering, vol. 33, 2017-02-28, pp. 239-245. * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |