
CN110807782A - A map representation system for a visual robot and its construction method - Google Patents

A map representation system for a visual robot and its construction method

Info

Publication number
CN110807782A
CN110807782A (application CN201911023177.2A)
Authority
CN
China
Prior art keywords
information
map
semantic
voxel
topology
Prior art date
Legal status
Granted
Application number
CN201911023177.2A
Other languages
Chinese (zh)
Other versions
CN110807782B (en)
Inventor
檀祖冰
张彧
陈龙
Current Assignee
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date
Filing date
Publication date
Application filed by Sun Yat Sen University
Priority to CN201911023177.2A
Publication of CN110807782A
Application granted
Publication of CN110807782B
Legal status: Active

Classifications

    • G06T 7/80 (Image analysis - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration)
    • G06F 18/23 (Pattern recognition; Analysing - Clustering techniques)
    • G06T 11/60 (2D image generation - Editing figures and text; Combining figures or text)
    • G06T 7/11 (Image analysis; Segmentation - Region-based segmentation)
    • G06T 7/50 (Image analysis - Depth or shape recovery)
    • G06T 7/70 (Image analysis - Determining position or orientation of objects or cameras)
    • G06V 20/10 (Scenes; Scene-specific elements - Terrestrial scenes)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of mobile robot environment representation, planning and positioning, and more particularly relates to a map representation system for a visual robot and a construction method thereof. The map representation system is composed of the data structures of a multi-information voxel layer, a map element layer and a topology layer, which respectively cover three aspects of the environment: spatial information, scene instances and connectivity. The map is constructed jointly by six modules: a semantic information extraction module, a geometric information extraction module, a scene semantic extraction module, a multi-information voxel integration module, a spatial topology extraction module and a topology integration module. The invention relies on visual sensors alone and has the advantages of a clear construction pipeline, comprehensive information, tightly coupled layers and easy visualization, and is suitable for planning, positioning and navigation of mobile robots in indoor and outdoor scenes.

Description

A map representation system for a visual robot and its construction method

Technical Field

The invention belongs to the field of mobile robot environment representation, planning and positioning, and more particularly relates to a map representation system for a visual robot and a construction method thereof.

Background Art

In the field of robotics, how to design a map that represents the environment is an important and critical problem. The planning module needs the map to support fast collision detection and obstacle detection; the positioning module needs it to provide high-quality environment modeling; and for the humans operating the robot, the map needs to offer a friendly, intuitive visual representation. Today, vision sensors such as RGB-D cameras and binocular stereo cameras, as well as ranging sensors such as lidar, are widely used in robotics for map construction. In general, vision sensors are low-cost, easy to install and deliver high-frequency data, whereas ranging sensors, although inherently capable of high-precision depth measurement, are expensive.

At present, during map construction, different environments (such as ruins, shopping malls and open spaces) affect sensor performance, so the acquired data contain noise that degrades the quality of the map and, in turn, the results of planning and positioning. In recent studies, researchers have additionally introduced environmental semantic information into the map, effectively reducing the influence of the environment on the algorithms; incremental map update and patch optimization have also become current research focuses. However, there is as yet no map representation method for robots that tightly combines topological, geometric and semantic information while supporting rapid construction, incremental update, efficient data storage and retrieval, and intuitive visualization.

Summary of the Invention

To overcome the above defects of the prior art, the present invention provides a map representation system for a visual robot and a construction method thereof, which tightly combines topological, geometric and semantic information, supports rapid construction, incremental update, and efficient data storage and retrieval, provides support for fast collision detection and obstacle detection, and provides high-quality environment modeling.

To solve the above technical problems, the present invention adopts the following technical solution: a map representation system for a visual robot, comprising a semantic information extraction module, a geometric information extraction module, a scene semantic extraction module, a multi-information voxel integration module, a spatial topology extraction module, a topology integration module, a map element layer, a multi-information voxel layer and a topology layer; wherein,

the semantic information extraction module is used to collect images with a visual sensor in the environment for which a map is to be constructed and to extract image semantic information, yielding image segmentation results; further specific semantics are then extracted to obtain specific semantic information, whose results are ultimately stored in the map element layer;

the geometric information extraction module is used to collect images with a visual sensor in the environment for which a map is to be constructed, compute a depth map, a vertex set and the normal vector of each vertex, and then segment the depth map using several geometric features based on distances and normals;

the scene semantic extraction module is used to collect images with a visual sensor in the environment for which a map is to be constructed and to extract scene semantics, yielding the classification information of the scene; the scene classification is a human-defined scene label with a certain degree of distinctiveness;

the multi-information voxel integration module is used to fuse the image segmentation results from the semantic information extraction module with the depth segmentation results from the geometric information extraction module, yielding three-dimensional semantic components; each component describes an object in the scene that humans have predefined as distinctive and meaningful for planning and positioning. Then, combined with the camera pose obtained by a visual SLAM method, the multi-information voxels computed from the three-dimensional semantic components and the vertex set from the geometric information extraction module are updated into the multi-information voxel layer of the map;

the spatial topology extraction module is used to extract, from the multi-information voxel layer produced by the multi-information voxel integration module, spatial convex hulls carrying scene semantics and the topology information expressing their connectivity;

the topology integration module is used to associate, based on spatial adjacency, the three-dimensional semantic components from the multi-information voxel integration module with the three-dimensional spatial convex hulls from the spatial topology extraction module, compute the topology information from space nodes to component nodes, and merge this information with the space-node-to-space-node topology to obtain the complete topology layer;

the map element layer, the multi-information voxel layer and the topology layer together constitute the map, and the topology graph links the spatial convex hulls and the three-dimensional component information in the map element layer.

Further, the multi-information voxel layer uses cubes carrying spatial and semantic information; such a cube does not physically exist in the objective world but is a data abstraction that approximately describes a cubic region of it. Starting from the requirements of robot planning and positioning, the layer also defines several predefined environment representation objects, generally composed of fields describing their position, semantics and geometry; for example, an Object representing an obstacle consists of its center position, the category "obstacle" and the convex hull enclosing it.
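To make the voxel record concrete, the following is a minimal Python sketch of one multi-information voxel as just described; all field names are illustrative assumptions, not definitions taken from the patent.

    from dataclasses import dataclass

    # Minimal sketch of one multi-information voxel; field names are
    # illustrative assumptions, not the patent's own definitions.
    @dataclass
    class MultiInfoVoxel:
        position: tuple            # (x, y, z) center of the cube in map coordinates
        semantic_label: int = 0    # encoded class of the observed instance
        instance_id: int = -1      # owning 3D semantic component, -1 if none
        occupied: bool = False     # whether the cube approximates occupied space
        distance: float = 0.0      # mixed truncated/Euclidean distance value
        weight: float = 0.0        # accumulated fusion weight for TSDF updates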

The present invention further provides a map construction method for a visual robot, comprising the following steps:

S1. Use the semantic information extraction module: collect images with a visual sensor in the environment for which a map is to be constructed and extract component semantic information, obtaining image segmentation results (including but not limited to doors, windows, tables and chairs); then extract further specific semantics, obtaining specific semantic information (including but not limited to drivable areas); the results of the specific semantic information are ultimately stored in the map element layer;

S2. Use the geometric information extraction module: collect images with a visual sensor in the environment for which a map is to be constructed, obtaining a depth map, a vertex set and the normal of each vertex; then compute the depth segmentation result from the depth map;

S3. Use the scene semantic extraction module: collect images with a visual sensor in the environment for which a map is to be constructed and extract scene semantics, obtaining the classification information of the scene;

S4. Use the multi-information voxel integration module: fuse the image segmentation results extracted in step S1 with the depth segmentation results extracted in step S2 to obtain three-dimensional semantic components; then, based on the visual sensor pose obtained by a visual SLAM method, compute multi-information voxels from the three-dimensional semantic components and the vertex set extracted in step S2, forming the multi-information voxel layer;

S5. Use the spatial topology extraction module: based on the multi-information voxel layer from step S4, extract spatial convex hulls carrying scene semantics and the topology information expressing their connectivity;

S6. Use the topology integration module: based on spatial adjacency, associate the three-dimensional semantic components from step S4 with the three-dimensional spatial convex hulls from step S5, obtaining the topology information from space nodes to component nodes, and merge this information with the space-node-to-space-node topology, obtaining the complete topology graph that forms the topology layer;

S7. The map element layer from step S1, the multi-information voxel layer from step S4 and the topology layer from step S6 together constitute the map of the invention.

Further, step S1 specifically comprises:

S11. Obtain an RGB image from the visual sensor;

S12. Based on step S11, infer for every pixel of the RGB image the category of the object it belongs to (including but not limited to doors, windows, tables and chairs); the process is carried out by a deep neural network and finally yields, for every instance of every category, a set of binary mask images;

S13. The remaining specific semantic extraction is determined by the actual application: based on the RGB image from step S11, a feature detection and inference module extracts features from the RGB image and then performs specific semantic extraction to obtain specific semantics, including but not limited to drivable areas.
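As a concrete illustration of step S12, the sketch below runs an off-the-shelf instance segmentation network and thresholds its soft masks into the per-instance binary mask set described above. Mask R-CNN from torchvision is an assumed stand-in (the patent does not name the network used here), and the 0.5 score threshold is likewise an assumption.

    import torch
    import torchvision

    # Assumed stand-in for the deep network of step S12; any network that
    # yields per-instance binary masks would serve the same role.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    def instance_masks(rgb, score_thresh=0.5):
        """Return (class_id, binary_mask) pairs for one RGB frame.

        rgb is a float tensor of shape (3, H, W) with values in [0, 1].
        """
        with torch.no_grad():
            out = model([rgb])[0]
        keep = out["scores"] > score_thresh   # drop low-confidence detections
        masks = out["masks"][keep, 0] > 0.5   # threshold soft masks to binary
        labels = out["labels"][keep]
        return list(zip(labels.tolist(), masks))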

Further, step S2 specifically comprises:

S21. Obtain a depth map from the visual sensor;

S22. From the depth map of step S21, compute for each pixel the surface normal from the pixel and its neighbors; then compute the normal-angle difference between adjacent pixels; finally compute the depth difference between adjacent pixels, yielding three features in total;

S23. Based on the three features from step S22, cluster and cut the depth map, dividing it into multiple distinct depth segmentation regions, called the depth segmentation result.
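The three features of step S22 can be computed directly once the depth map is back-projected through the pinhole model. The NumPy sketch below is one way to do it; fx, fy, cx, cy are the calibrated intrinsics, and the border pixels (where np.roll wraps around) would need masking in a real implementation.

    import numpy as np

    def depth_features(depth, fx, fy, cx, cy):
        """Back-project a depth map and compute the three features of S22:
        per-pixel surface normals, normal-angle differences and depth
        differences between horizontally adjacent pixels."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        # Vertex map: one 3D point per pixel (pinhole back-projection)
        verts = np.stack(((u - cx) * depth / fx,
                          (v - cy) * depth / fy,
                          depth), axis=-1)
        du = np.roll(verts, -1, axis=1) - verts   # tangent to the right neighbor
        dv = np.roll(verts, -1, axis=0) - verts   # tangent to the lower neighbor
        normals = np.cross(du, dv)                # normal from the two tangents
        normals /= np.linalg.norm(normals, axis=-1, keepdims=True) + 1e-9
        # Feature 2: angle between normals of horizontally adjacent pixels
        cos = (normals * np.roll(normals, -1, axis=1)).sum(axis=-1)
        angle = np.arccos(np.clip(cos, -1.0, 1.0))
        # Feature 3: depth difference between horizontally adjacent pixels
        ddiff = np.abs(np.roll(depth, -1, axis=1) - depth)
        return verts, normals, angle, ddiff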

Further, step S3 specifically comprises:

S31. Obtain an RGB image from the visual sensor;

S32. From the pixels of the RGB image and the relationships between them, compute high-dimensional features and infer from them the classification information of the scene; the process is implemented with a deep neural network, including but not limited to a CNN, and the scenes include but are not limited to shopping malls, kitchens, warehouses and corridors.

Further, step S4 specifically comprises:

S41. Denote the depth segmentation result extracted in step S2 as the set S, where si ∈ S is a geometric component segment consisting of a certain number of pixels and their depth information; denote the image segmentation result extracted in step S1 as the set R, where rj ∈ R is a region of the image consisting of a certain number of pixels and their classification information. For each si, find the rj with the highest overlap, computed as follows:

rj* = argmax over rj ∈ R of |si ∩ rj|

where |si ∩ rj| counts the pixels shared by si and rj;

S42. For each si from step S41, assign the category of the rj with the highest overlap, determining the best classification of each geometric component segment, and fuse adjacent geometric segments that share the same instance object and category into three-dimensional semantic components;
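Steps S41 and S42 amount to a best-overlap vote between the two segmentations. A sketch, assuming both segmentations are given as boolean masks over the same image:

    import numpy as np

    def fuse_segmentations(depth_segments, mask_regions):
        """Assign each geometric segment s_i the class of the region r_j that
        overlaps it most (steps S41-S42); depth_segments is a list of boolean
        HxW masks, mask_regions a list of (label, boolean HxW mask) pairs."""
        fused = []
        for seg in depth_segments:
            best_label, best_overlap = None, 0
            for label, region in mask_regions:
                overlap = np.logical_and(seg, region).sum()  # shared pixel count
                if overlap > best_overlap:
                    best_label, best_overlap = label, overlap
            fused.append((seg, best_label))   # best_label may stay None
        return fused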

S43. Compare each three-dimensional semantic component one by one with the three-dimensional semantic components already in the map; if the same instance, i.e. the same object of the objective environment, already exists in the map at the current moment, perform instance tracking; if it does not exist in the map, add the new three-dimensional semantic component to the map and then perform instance tracking. Instance tracking is the method that ensures that, while the map is being built, the detection result of every instance object remains consistent across multiple image frames;

S44. Fuse the three-dimensional semantic components maintained in step S43 with the vertex set extracted in step S2: first divide space into voxels and consider only voxels within the truncation distance t. Let x be the center of the current voxel, p the three-dimensional position of a vertex, and s the origin of the sensor; then:

d(x,p,s) = ||p - x|| sign((p - x)·(p - s))

D_{i+1}(x,p,s) = (W_i(x) D_i(x) + w(x,p) d(x,p,s)) / (W_i(x) + w(x,p))

ε = 4v

w(x,p) = 1/z²

W_{i+1}(x,p,s) = min(W_i(x) + w(x,p), W_max)

where v is the voxel size, z is the depth from s to p, and W_max limits the maximum update weight. From the above formulas, the TSDF value D_{i+1}(x,p,s) and the weight W_{i+1}(x,p,s) of the (i+1)-th update of voxel x can be computed; initially, the TSDF value D(x,p,s) and the weight W(x,p,s) are both initialized to 0;
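The update rules above condense into a few lines per voxel. A sketch, assuming the quadratic depth weighting w(x,p) = 1/z² shown above and NumPy arrays for the 3D points:

    import numpy as np

    def update_voxel(D, W, x, p, s, v, W_max):
        """One TSDF update of voxel center x from surface point p seen from
        sensor origin s; D and W are the stored distance and weight, v is
        the voxel size, so the truncation band is eps = 4 * v."""
        d = np.linalg.norm(p - x) * np.sign(np.dot(p - x, p - s))  # signed distance
        z = np.linalg.norm(p - s)          # depth from the sensor to the point
        eps = 4.0 * v
        d = np.clip(d, -eps, eps)          # stay inside the truncation band
        w = 1.0 / (z * z)                  # assumed quadratic depth weighting
        D_new = (W * D + w * d) / (W + w)  # weighted running average
        W_new = min(W + w, W_max)
        return D_new, W_new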

S45. Based on the TSDF values from step S44, compute the ESDF values. Specifically: starting from the TSDF surface and searching the 26-neighborhood, voxels inside the configured truncation band take the TSDF value directly as their ESDF value, while values outside the truncation band are computed with the water-ripple propagation algorithm, finally yielding the ESDF;

Here the water-ripple algorithm proceeds as follows: S451. A wave starts from a voxel, denoted v, and propagates to its 26-neighborhood; voxels whose ESDF distance has not yet been updated are set to the ESDF value of voxel v plus the unit distance, and the newly updated voxels are placed into the ripple propagation queue. Step S451 is then executed recursively for every voxel in the ripple propagation queue until all voxels have an updated ESDF distance;
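The recursion of step S451 is most naturally written iteratively with a queue, the usual breadth-first form of a wavefront pass. A sketch over a sparse voxel set; the domain argument, which bounds the propagation, is an assumption added so the loop terminates:

    from collections import deque
    import itertools

    # The 26 neighbor offsets of a voxel
    NEIGHBORS_26 = [d for d in itertools.product((-1, 0, 1), repeat=3)
                    if d != (0, 0, 0)]

    def propagate_esdf(esdf, seeds, domain, unit=1.0):
        """Water-ripple ESDF propagation (step S451), iterative form.
        esdf maps integer voxel coords to distances and is pre-filled for
        the voxels inside the truncation band; seeds are those voxels;
        domain is the set of all voxel coords that may be updated."""
        queue = deque(seeds)
        while queue:
            v = queue.popleft()
            for dx, dy, dz in NEIGHBORS_26:
                n = (v[0] + dx, v[1] + dy, v[2] + dz)
                if n in domain and n not in esdf:  # only voxels not yet updated
                    esdf[n] = esdf[v] + unit       # parent distance plus one step
                    queue.append(n)
        return esdf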

S46. Combine the three-dimensional semantic component information from steps S42 and S43 with the ESDF from step S45 into multi-information voxels;

S47. Obtain the current sensor pose with the visual SLAM method, transform the multi-information voxels from the sensor coordinate system into the map coordinate system, and update them into the information voxel layer of the map. The information voxel layer stores the multi-information voxels in a hash table; each voxel consists of three-dimensional position information, semantic encoding information, occupancy information and a mixed truncated/Euclidean distance field.
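The hash storage of step S47 can be as simple as a dictionary keyed by integer grid coordinates; a sketch of the idea (the class and method names are illustrative):

    class VoxelHashMap:
        """Sparse voxel storage keyed by integer grid coordinates (step S47).
        A hash table gives O(1) insert and lookup and lets the map grow in
        any direction without preallocating a dense grid."""

        def __init__(self, voxel_size):
            self.voxel_size = voxel_size
            self.voxels = {}             # (i, j, k) -> voxel record

        def key(self, point):
            # Integer grid cell containing a metric 3D point
            return tuple(int(c // self.voxel_size) for c in point)

        def insert(self, point, voxel):
            self.voxels[self.key(point)] = voxel

        def lookup(self, point):
            return self.voxels.get(self.key(point))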

Further, step S5 specifically comprises:

S51. Randomly sample convex hull growth points along the data collection route;

S52. At each convex hull growth point from step S51, expand outward continuously under height and volume limits to form a spatial convex hull; based on the coordinate positions and the scene semantic information given by step S3, obtain a set of spatial convex hulls with semantics;

S53. Based on the semantic spatial convex hull set from step S52, use the spatial adjacency between the hulls to obtain an undirected graph representing the spatial connectivity;

S54. Set a threshold that tolerates the non-convex parts introduced by obstacles and merge the convex hulls, so that the resulting hulls better match the original spatial shape of the environment as humans intuitively perceive it, yielding a set of larger, roughly ellipsoidal spatial convex hulls with semantic information; hulls whose semantics conflict are not merged.
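Steps S51-S54 grow free-space regions outward from seed points under height and volume limits. The patent does not fix the growth primitive, so the sketch below uses an axis-aligned box, the simplest convex shape, as a stand-in:

    import itertools

    def grow_box(seed, is_free, max_height, max_volume):
        """Grow an axis-aligned free-space box outward from a seed voxel
        (a stand-in for the convex hull growth of step S52); is_free(i, j, k)
        queries the voxel map and axis 2 carries the height limit."""
        lo, hi = list(seed), list(seed)

        def slab_free(axis, value):
            # Every voxel of the one-voxel-thick slab this expansion would add
            ranges = [range(lo[a], hi[a] + 1) for a in range(3)]
            ranges[axis] = [value]
            return all(is_free(*c) for c in itertools.product(*ranges))

        grew = True
        while grew:
            grew = False
            for axis, (bound, step) in itertools.product(
                    range(3), ((lo, -1), (hi, 1))):
                dims = [hi[a] - lo[a] + 1 for a in range(3)]
                dims[axis] += 1              # box size after this expansion
                if dims[2] > max_height or dims[0] * dims[1] * dims[2] > max_volume:
                    continue
                new = bound[axis] + step
                if slab_free(axis, new):
                    bound[axis] = new
                    grew = True
        return tuple(lo), tuple(hi)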

Further, step S6 specifically comprises:

S61. For each three-dimensional semantic component obtained in step S4, find, according to its own position, the semantically labeled, roughly ellipsoidal spatial convex hull from step S54 it belongs to (for example, the three-dimensional semantic component "cup" is matched onto the spatial convex hull "room"), obtaining the connection from a space node to a component node;

S62. Integrate all space-node-to-component-node connections obtained in step S61 into the topology graph obtained in step S5, obtaining the complete topology graph.
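Steps S61 and S62 reduce to a containment test followed by an edge merge. A sketch, where the hulls are given as containment predicates and the space-to-space graph as adjacency sets (all names are illustrative):

    def integrate_topology(space_graph, hulls, components):
        """Attach each 3D semantic component to the space hull containing
        its center (step S61), then merge those edges into the space-to-space
        graph from step S5 (step S62); space_graph: hull_id -> set of entries,
        hulls: hull_id -> contains(point) predicate,
        components: component_id -> (x, y, z) center."""
        for comp_id, center in components.items():
            for hull_id, contains in hulls.items():
                if contains(center):             # the component lies in this space
                    space_graph.setdefault(hull_id, set()).add(("part", comp_id))
                    break                        # one owning space suffices
        return space_graph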

Compared with the prior art, the beneficial effects are:

1. The invention completes the construction of the robot map using only visual sensors, without needing data from a ranging scanner, which reduces the hardware cost;

2. Structurally, the map contains geometric, semantic and topological information and thus carries more environmental information; the construction exploits the characteristic relationships between the layers, giving a clear mapping pipeline and improved mapping efficiency. The tight coupling between layers and the separation of data from topology make subsequent incremental updates and patch optimization easy to carry out;

3. The invention stores and manages the geometric data of the environment in the form of a hash table, reducing the overhead of data lookup and access, supporting dynamic, irregularly shaped map expansion and improving the speed of data insertion and update; the non-grid structure lets the constructed map make maximal use of storage space, favoring maximal coverage of the environment;

4. The robot map of the invention uses a multi-information voxel structure combining geometric and semantic information. This structure helps the planning module perform obstacle and collision detection and thus optimizes local planning efficiency, helps the positioning module exploit the rich geometric and semantic information to improve registration accuracy and efficiency, and also provides intuitive, vivid visualization results that make it easy for humans to issue intuitive navigation instructions.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the overall framework of the map representation system of the invention.

Figure 2 is a schematic diagram of the relationship between the spatial representation and the data representation in the multi-information voxel layer of the invention.

Figure 3 is a schematic diagram of the topology layer structure of the invention.

Figure 4 is the overall flow chart of the map construction method of the invention.

Detailed Description of the Embodiments

The drawings are for illustration only and should not be construed as limiting the invention. To better illustrate the embodiment, some parts in the drawings may be omitted, enlarged or reduced and do not represent the size of the actual product. For those skilled in the art, it is understandable that certain well-known structures and their descriptions may be omitted from the drawings. The positional relationships depicted in the drawings are for illustration only and should not be construed as limiting the invention.

Embodiment 1:

As shown in Figure 1, the invention provides a map representation system for a visual robot, comprising a semantic information extraction module, a geometric information extraction module, a scene semantic extraction module, a multi-information voxel integration module, a spatial topology extraction module, a topology integration module, a map element layer, a multi-information voxel layer and a topology layer. First, the semantic information extraction, geometric information extraction and scene semantic extraction modules extract, respectively, image segmentation information, depth segmentation information together with vertices and normals, and scene semantic information from the visual sensor. Next, the image segmentation information and the depth segmentation information are fused into temporally consistent three-dimensional component information; the multi-information voxel integration module builds multi-information voxels from the three-dimensional component information and the vertex information, and the voxels are updated into the multi-information voxel layer of the map using the camera pose obtained by a visual SLAM method. Once the multi-information voxel layer is available, the spatial topology extraction module, together with the scene semantic information, divides and merges the environment space into several semantically labeled three-dimensional convex hull sets, and the space-node-to-space-node topology is derived from their spatial adjacency. At this point, part of the three-dimensional component information, the semantic information from the semantic information extraction module and the semantically labeled three-dimensional convex hull sets have been integrated into the map element layer. Furthermore, based on the spatial positions of the three-dimensional components, the topology integration module first associates the three-dimensional components with the three-dimensional convex hulls to obtain the space-node-to-component-node topology, and then merges this information with the space-node-to-space-node topology to obtain the complete topology graph. Finally, the multi-information voxel layer, the map element set and the topology graph together constitute the map of the invention.

The application scenarios of the system are indoor or outdoor areas where various robots operate. The visual sensor is required to have sufficient resolution and a suitable focal length so that it can clearly capture scenes within 0-20 meters, and it must be calibrated with accurate estimates of its intrinsic and extrinsic parameters. The system only needs one RGB-D camera or one binocular camera placed at the front of the robot.

To better explain the specific implementation of the invention, the map construction method provided by the invention is described in detail below with reference to Figures 1-4 and the specific embodiment.

As shown in Figure 4, a map construction method for a visual robot comprises the following steps:

Step 1: Connect the visual sensor and calibrate the RGB image and depth map with the previously calibrated intrinsic and extrinsic parameters, obtaining the calibrated RGB image and depth map. If the visual sensor is an RGB-D camera, the depth map comes directly from the sensor; if it is a binocular stereo camera, the depth of each pixel is computed by the parallax method to obtain the depth map.
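For the stereo case, the parallax method recovers the depth of each pixel from its matched disparity: with focal length f (in pixels), baseline B and disparity d, the depth is Z = f·B/d, so larger disparities correspond to closer points. (f, B and d are the standard stereo notation, used here as an assumption; the patent does not spell the formula out.)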

Step 2: Use the Mask R-CNN method for image segmentation, obtaining the segmented image regions and the corresponding labels.

Step 3: Use a DCNN to classify the environment based on the RGB image, obtaining the scene category information.

Step 4: Compute the surface normal of each pixel from the pixel and its neighbors, then compute the normal-angle difference between adjacent pixels, and finally compute the depth difference between adjacent pixels, obtaining three features in total.

Step 5: Based on the three features obtained in step 4, cluster and cut the depth map, dividing it into multiple distinct depth segmentation regions.

Step 6: Denote the depth segmentation result extracted in step 5 as the set S, where si ∈ S is a geometric component segment consisting of a certain number of pixels and their depth information; denote the image segmentation result extracted in step 2 as the set R, where rj ∈ R is a region of the image consisting of a certain number of pixels and their classification information. For each si, find the rj with the highest overlap:

rj* = argmax over rj ∈ R of |si ∩ rj|

where |si ∩ rj| counts the pixels shared by si and rj.

Step 7: For each si from step 6, assign the category of the rj with the highest overlap, determining the best classification of each geometric component segment; fuse adjacent geometric segments that share the same instance object and category into three-dimensional semantic components.

Step 8: Compare each three-dimensional semantic component one by one with the three-dimensional semantic components already in the map; if the same instance (an object of the same objective environment) already exists in the map at the current moment, perform instance tracking; if it does not exist in the map, add the new three-dimensional semantic component to the map and then perform instance tracking. Instance tracking is the method that ensures that, while the map is being built, the detection result of every instance object remains consistent across multiple image frames.

Step 9: Fuse the three-dimensional semantic components obtained in step 7 and maintained in step 8 with the vertex set extracted from the depth map. First divide space into voxels and consider only voxels within the truncation distance. Let x be the center of the current voxel, p the three-dimensional position of a vertex, and s the origin of the sensor; then:

d(x,p,s) = ||p - x|| sign((p - x)·(p - s))

D_{i+1}(x,p,s) = (W_i(x) D_i(x) + w(x,p) d(x,p,s)) / (W_i(x) + w(x,p))

ε = 4v

w(x,p) = 1/z²

W_{i+1}(x,p,s) = min(W_i(x) + w(x,p), W_max)

where v is the voxel size, z is the depth from s to p, and W_max limits the maximum update weight. From the above formulas, the TSDF value D_{i+1}(x,p,s) and the weight W_{i+1}(x,p,s) of the (i+1)-th update of voxel x can be computed; initially, the TSDF value D(x,p,s) and the weight W(x,p,s) are both initialized to 0.

Step 10: Based on the TSDF values from step 9, compute the ESDF values. Specifically: starting from the TSDF surface and searching the 26-neighborhood, voxels inside the configured truncation band take the TSDF value directly as their ESDF value, while values outside the truncation band are computed with the water-ripple propagation algorithm, finally yielding the ESDF.

The water-ripple algorithm proceeds as follows:

Step 10.1: A wave starts from a voxel (denoted v) and propagates to its 26-neighborhood; voxels whose ESDF distance has not yet been updated are set to the ESDF value of voxel v plus the unit distance, and the newly updated voxels are placed into the ripple propagation queue. Step 10.1 is executed recursively for every voxel in the ripple propagation queue until all voxels have an updated ESDF distance.

Step 11: The three-dimensional semantic component information obtained in step 7 and maintained in step 8 and the ESDF from step 10 are combined into multi-information voxels through coordinate association.

Step 12: Obtain the current sensor pose with the visual SLAM method, transform the multi-information voxels from the sensor coordinate system into the map coordinate system, and update them into the information voxel layer of the map. The information voxel layer stores the multi-information voxels in a hash table; each voxel consists of three-dimensional position information, semantic encoding information, occupancy information and a mixed truncated/Euclidean distance field. Figure 2 shows the storage form, spatial representation and data structure of the multi-information voxels.

Step 13: Randomly set convex hull growth points along the data collection route; at each point, expand outward continuously under height and volume limits to form a spatial convex hull; based on the coordinate positions and the scene semantic information given by step 3, obtain a set of spatial convex hulls with semantics.

Step 14: Based on the semantic spatial convex hull set from step 13, use the spatial adjacency between the hulls to obtain an undirected graph representing the spatial connectivity.

Step 15: Set a threshold that tolerates the non-convex parts introduced by obstacles and merge the convex hulls, so that the resulting hulls better match the original spatial shape of the environment as humans intuitively perceive it, yielding a set of larger, roughly ellipsoidal spatial convex hulls with semantic information. Hulls whose semantics conflict are not merged.

Step 16: For each three-dimensional semantic component obtained in step 7 and maintained in step 8, find, according to its own position, the semantically labeled, roughly ellipsoidal spatial convex hull from step 13 it belongs to (for example, the three-dimensional semantic component "cup" is matched onto the spatial convex hull "room"), obtaining the connection from a space node to a component node.

Step 17: Integrate all space-node-to-component-node connections obtained in step 16 into the undirected graph obtained in step 14, obtaining the complete topology graph.

Step 18: The multi-information voxel layer from step 12, the specific semantic information from step 2 together with the three-dimensional semantic components from step 8, and the topology graph from step 17 together constitute the map of the invention.

Obviously, the above embodiment is merely an example given to illustrate the invention clearly and is not a limitation on the embodiments of the invention. Those of ordinary skill in the art can make changes or modifications in other different forms on the basis of the above description. It is neither necessary nor possible to enumerate all implementations here. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall fall within the protection scope of the claims of the invention.

Claims (10)

1.一种视觉机器人的地图表示系统,其特征在于,包括语义信息提取模块、几何信息提取模块、场景语义提取模块、多信息体素整合模块、提取空间拓扑模块、拓扑整合模块、以及地图元素层、多信息体素层和拓扑图层;其中,1. a map representation system of a visual robot, is characterized in that, comprises semantic information extraction module, geometric information extraction module, scene semantic extraction module, multi-information voxel integration module, extraction space topology module, topology integration module and map element layers, informative voxel layers, and topology layers; where, 所述的语义信息提取模块用于利用视觉传感器在需要构造地图的环境中采集图像,并进行图像语义信息提取,得到图像分割结果;接着进行其余特定语义提取,得到特定的语义信息,特定的语义信息部分的结果最终会存储在地图元素层中;The semantic information extraction module is used to collect images in an environment where a map needs to be constructed by using a visual sensor, and extract semantic information of images to obtain image segmentation results; then perform other specific semantic extraction to obtain specific semantic information, specific semantic information The result of the information part will eventually be stored in the map element layer; 所述的几何信息提取模块用于使用视觉传感器在需要构造地图的环境中采集图像,经过计算得到深度图、顶点集合和顶点对应的法向量,接着基于深度图由几种关于距离与法线的几何特征分割深度图;The geometric information extraction module is used to collect images in an environment where a map needs to be constructed by using a visual sensor, obtain a depth map, a set of vertices and a normal vector corresponding to the vertices through calculation, and then calculate several distances and normals based on the depth map. Geometric feature segmentation depth map; 所述的场景语义提取模块用于利用视觉传感器在需要构造地图的环境中采集图像,并进行场景语义提取,得到场景的分类信息,场景的分类信息是人为定义的、具有一定的标识度的场景称号;The scene semantic extraction module is used to collect images in an environment where a map needs to be constructed by using a visual sensor, and perform scene semantic extraction to obtain scene classification information, and the scene classification information is an artificially defined scene with a certain degree of identification. title; 所述的多信息体素整合模块,用于将语义信息提取模块得到的图像分割结果和几何信息提取模块得到的深度分割结果进行融合,得到三维语义部件,该部件描述了场景中某一个由人类预定的有一定标识度和对规划、定位来说意义的物体;接着,结合视觉SLAM方法得到的相机位姿,将使用三维语义部件与几何信息提取模块得到的顶点集合计算得到的多信息体素,更新到地图的多信息体素层中;The multi-information voxel integration module is used to fuse the image segmentation results obtained by the semantic information extraction module and the depth segmentation results obtained by the geometric information extraction module to obtain a three-dimensional semantic component, which describes a certain scene in the scene by a human. 
Predetermined objects with a certain degree of identity and significance for planning and positioning; then, combined with the camera pose obtained by the visual SLAM method, the multi-information voxels calculated by using the 3D semantic components and the vertex set obtained by the geometric information extraction module , updated to the multi-information voxel layer of the map; 所述的提取空间拓扑模块用于基于所述的多信息体素整合模块得到的多信息体素层,提取出包含场景语义的空间凸包及表达其连接关系的拓扑信息;The extraction spatial topology module is used for extracting the spatial convex hull containing scene semantics and the topology information expressing its connection relationship based on the multi-information voxel layer obtained by the multi-information voxel integration module; 所述拓扑整合模块用于基于空间的相邻关系,关联所述的多信息体素整合模块中三维空间部件和所述的提取空间拓扑模块得到的三维空间凸包,计算空间节点到部件节点的拓扑信息,并将此信息与空间节点到空间节点的拓扑信息合并,得到完整的拓扑图层;The topology integration module is used to associate the three-dimensional space components in the multi-information voxel integration module and the three-dimensional space convex hull obtained by the extraction space topology module based on the adjacent relationship in space, and calculate the relationship between the space nodes and the component nodes. topology information, and combine this information with the topology information from space nodes to space nodes to obtain a complete topology layer; 所述的地图元素层、多信息体素层和拓扑图层共同构成地图,拓扑图将地图元素层中的空间凸包和三维部件信息联系起来。The map element layer, the multi-information voxel layer and the topology layer together constitute a map, and the topology map connects the spatial convex hull in the map element layer with the three-dimensional component information. 2.根据权利要求1所述的视觉机器人的地图表示系统,其特征在于,所述的多信息体素层是使用一个附带有空间信息、语义信息的立方体,其不真实存在于客观世界中,而是用于近似表述客观世界的一个立方区域的数据抽象对象;所述的多信息体素层是从机器人规划和定位的需求出发,预定义的多种环境表示对象,由描述其位置、语义、几何字段构成。2. The map representation system of a visual robot according to claim 1, wherein the multi-information voxel layer uses a cube with attached spatial information and semantic information, which does not actually exist in the objective world, It is a data abstract object used to approximate a cubic area of the objective world; the multi-information voxel layer is based on the needs of robot planning and positioning, and a variety of predefined environments represent objects, which are described by their location, semantics , Geometry field composition. 3.一种视觉机器人的地图构建方法,其特征在于,包括以下步骤:3. a map construction method of visual robot, is characterized in that, comprises the following steps: S1.使用语义信息提取模块,利用视觉传感器在需要构造地图的环境中采集图像,并进行部件语义信息提取,得到图像分割结果,进行其余特定语义提取,得到特定的语义信息;特定的语义信息部分的结果最终会存储在地图元素层中;S1. Use the semantic information extraction module, use the visual sensor to collect images in the environment where the map needs to be constructed, and extract the semantic information of the components to obtain the image segmentation result, and perform the remaining specific semantic extraction to obtain the specific semantic information; the specific semantic information part The result will eventually be stored in the map element layer; S2.使用几何信息提取模块,使用视觉传感器在需要构造地图的环境中采集图像,得到深度图、顶点集合和顶点对应的法线,接着从深度图中计算得到深度分割结果;S2. Use the geometric information extraction module, use the visual sensor to collect images in the environment where the map needs to be constructed, obtain the depth map, the vertex set and the normal corresponding to the vertex, and then calculate the depth segmentation result from the depth map; S3.使用场景语义提取模块,利用视觉传感器在需要构造地图的环境中采集图像,并进行场景语义提取,得到场景的分类信息;S3. 
Using the scene semantic extraction module, the visual sensor is used to collect images in the environment where the map needs to be constructed, and the scene semantic extraction is performed to obtain the classification information of the scene; S4.使用多信息体素整合模块,将步骤S1中提取的图像分割结果和步骤S2中提取的深度分割结果进行融合,得到三维语义部件;接着,基于视觉SLAM方法得到的视觉传感器姿态,将三维语义部件与步骤S2中提取的顶点集合计算得到多信息体素,构成多信息体素层;S4. Use the multi-information voxel integration module to fuse the image segmentation results extracted in step S1 and the depth segmentation results extracted in step S2 to obtain three-dimensional semantic components; then, based on the visual sensor posture obtained by the visual SLAM method, the three-dimensional The semantic component and the vertex set extracted in step S2 are calculated to obtain multi-information voxels to form a multi-information voxel layer; S5.使用提取空间拓扑模块,基于步骤S4中的多信息体素层,提取出包含场景语义的空间凸包及表达其连接关系的拓扑信息;S5. Using the extraction spatial topology module, based on the multi-information voxel layer in step S4, extract the spatial convex hull containing scene semantics and the topology information expressing its connection relationship; S6.使用拓扑整合模块,基于空间的相邻关系,关联步骤S4中三维空间部件和步骤S5中的三维空间凸包,得到空间节点到部件节点的拓扑信息,并将此信息与空间节点到空间节点的拓扑信息合并,得到完整的拓扑图,构成拓扑图层;S6. Using the topology integration module, based on the adjacent relationship of the space, associate the three-dimensional space component in step S4 with the three-dimensional space convex hull in step S5, obtain the topology information from the space node to the component node, and associate this information with the space node to the space The topology information of nodes is merged to obtain a complete topology map, which constitutes a topology layer; S7.通过步骤S1中的地图元素层、步骤S4中的多信息体素层、步骤S6中的拓扑图层共同构成本发明的地图。S7. The map of the present invention is formed by the map element layer in step S1, the multi-information voxel layer in step S4, and the topology layer in step S6. 4.根据权利要求3所述的视觉机器人的地图构建方法,其特征在于,所述的步骤S1具体包括:4. the map construction method of visual robot according to claim 3, is characterized in that, described step S1 specifically comprises: S11.从视觉传感器中得到RGB图像;S11. Obtain an RGB image from a vision sensor; S12.基于步骤S11对RGB图像中的每一个像素进行推断,判断其从属对象的类别,过程基于深度神经网络完成,最终得到每一个类别的每一个实例的二值掩码图像集合;S12. Infer each pixel in the RGB image based on step S11, determine the category of its subordinate object, the process is completed based on the deep neural network, and finally obtain the binary mask image set of each instance of each category; S13.其余特定语义提取由实际的应用决定,基于步骤S11对RGB图像,使用特征检测和推理模块,将RGB图像进行特征提取,接着进行特定语义提取,得到特定语义,特定语义包括但不仅限于可行使区域。S13. The rest of the specific semantic extraction is determined by the actual application. Based on step S11, the RGB image is extracted using the feature detection and inference module, and then the specific semantic extraction is performed to obtain the specific semantic. The specific semantics include but are not limited to available exercise area. 5.根据权利要求3所述的视觉机器人的地图构建方法,其特征在于,所述的步骤S2具体包括:5. The map construction method of visual robot according to claim 3, is characterized in that, described step S2 specifically comprises: S21.从视觉传感器中得到深度图;S21. Obtain a depth map from a vision sensor; S22.基于步骤S21从深度图,基于一个像素及其邻点计算此像素对应的曲面法线,接着计算相邻像素的法线角度差距,最后计算相邻像素的深度差距,共得到三种特征;S22. 
Based on step S21, from the depth map, calculate the surface normal corresponding to this pixel based on a pixel and its adjacent points, then calculate the normal angle difference between adjacent pixels, and finally calculate the depth difference between adjacent pixels to obtain three kinds of features. ; S23.基于所述步骤S22得到的三种特征,将深度图上进行聚类和切割,划分得到多个不同的深度分割区域,称之为深度分割结果。S23. Based on the three features obtained in the step S22, cluster and cut the depth map to obtain a plurality of different depth segmentation regions, which are called depth segmentation results. 6.根据权利要求3所述的视觉机器人的地图构建方法,其特征在于,所述的步骤S3具体包括:6. The map construction method of visual robot according to claim 3, is characterized in that, described step S3 specifically comprises: S31.从视觉传感器中得到RGB图像;S31. Obtain an RGB image from a vision sensor; S32.由RGB图像的像素及像素之间的联系,计算出高维度的特征,接着以此推断出场景的分类信息;过程基于深度神经网络实现,包括但不仅限于CNN网络,场景包括但不仅限于商场、厨房、仓库、走廊。S32. Calculate the high-dimensional features from the pixels of the RGB image and the relationship between the pixels, and then infer the classification information of the scene; the process is implemented based on deep neural networks, including but not limited to CNN networks, and scenes include but not limited to Shopping malls, kitchens, warehouses, corridors. 7.根据权利要求3所述的视觉机器人的地图构建方法,其特征在于,所述的步骤S4具体包括:7. The map construction method of visual robot according to claim 3, is characterized in that, described step S4 specifically comprises: S41.记所述步骤S2中提取的深度分割结果为集合S,si∈S为一个几何部件切分块,由一定数量的像素及其深度信息组成,记所述步骤S1中提取的图像分割结果为集合R,rj∈R为一个图像上的区域,由一定数量的像素及其分类信息组成;对每一个si,计算与之重叠区域最高的rj,计算公式如下:S41. Denote the depth segmentation result extracted in the step S2 as a set S, where s i ∈ S is a geometric component segmentation block, consisting of a certain number of pixels and their depth information, denote the image segmentation extracted in the step S1 The result is a set R, where r j ∈ R is an area on an image, which consists of a certain number of pixels and their classification information; for each s i , calculate the r j with the highest overlapping area, and the calculation formula is as follows:
Figure FDA0002247867690000031
Figure FDA0002247867690000031
S42.对所述步骤S41中的每一个si,赋予与之重叠区域最高的rj的类别,确定每一个几何部件切分块的最佳分类,将相邻的具有共同实例对象和类别的几何切分块,融合为三维语义部件;S42. For each s i in the step S41, assign the category of the highest r j to the overlapping area, determine the best category for each geometric component segment, and assign adjacent objects with common instance objects and categories to Geometric segmentation and fusion into 3D semantic components; S43.将三维语义部件与地图中存在的三维语义部件进行一一比较,若当前时刻与地图中存在相同的实例即同一个客观环境中的物体时,进行实例追踪,若地图中不存在时,将新的三维语义部件添加到地图中,再进行实例追踪;所述的实例追踪是保证在构建地图时,每一个实例对象的检测结果,在多帧图像中保持时间不相干性的方法;S43. Compare the three-dimensional semantic components and the three-dimensional semantic components existing in the map one by one. If the same instance exists in the current moment and the map, that is, an object in the same objective environment, perform instance tracking. If it does not exist in the map, Add a new three-dimensional semantic component to the map, and then perform instance tracking; the instance tracking is a method to ensure that the detection result of each instance object is kept temporally incoherent in multiple frames of images when the map is constructed; S44.将所述步骤S43维护的三维语义部件,与所述步骤S2中提取的顶点集合进行融合;先将空间进行体素划分,只考虑截断距离t内的体素;假设x为当前体素的中心,p为一个顶点的三维位置,s为传感器的原点,此时有:S44. fuse the three-dimensional semantic component maintained in the step S43 with the vertex set extracted in the step S2; first divide the space into voxels, and only consider the voxels within the truncation distance t; suppose x is the current voxel The center of , p is the three-dimensional position of a vertex, s is the origin of the sensor, at this time there are: d(x,p,s)=||p-x||sign((p-x)·(p-s))d(x, p, s)=||p-x||sign((p-x)·(p-s))
Figure FDA0002247867690000041
Figure FDA0002247867690000041
∈=4v∈=4v Wi+1(x,p,s)=min(Wi(x)+w(x,p),Wmax)W i+1 (x, p, s)=min(W i (x)+w(x, p), W max ) 式中,v为体素的大小,z为从s到p处的深度,Wmax限制了更新的最大权值;由上式,可以计算得到第i+1次更新体素x时的TSDF值Di+1(x,p,s)和权值Wi+1(x,p,s);初始时TSDF值D(x,p,s)和权值W(x,p,s)都被初始化为0;In the formula, v is the size of the voxel, z is the depth from s to p, and W max limits the maximum weight of the update; from the above formula, the TSDF value of the i+1th update voxel x can be calculated. D i+1 (x, p, s) and weight Wi +1 (x, p, s); initial TSDF value D (x, p, s) and weight W (x, p, s) are both is initialized to 0; S45.基于步骤S44中的TSDF值,计算ESDF的值;所述过程具体包括:以TSDF表面出发,通过26-邻域搜索的方式,将设置的截断区域内的区域,直接将TSDF的值作为ESDF的值,在截断区域外的值则通过水波纹传播算法计算得到,最后得到ESDF;S45. Calculate the value of ESDF based on the TSDF value in step S44; the process specifically includes: starting from the TSDF surface, by means of 26-neighbor search, the area in the set truncation area is directly set as the value of TSDF as The value of ESDF, the value outside the truncation area is calculated by the water ripple propagation algorithm, and finally the ESDF is obtained; S46.基于步骤S42与S43得到的三维语义部件信息与步骤S45中的ESDF,组成多信息体素;S46. Based on the three-dimensional semantic component information obtained in steps S42 and S43 and the ESDF in step S45, form a multi-information voxel; S47.通过视觉SLAM方法获取到当前的传感器位姿,将传感器坐标系下的多信息体素转换为地图坐标系下的表示,接着更新到地图的信息体素层中;信息体素层通过哈希的方式存储了多个多信息体素,每一体素包含了三维位置信息、语义编码信息、是否被占据信息以及混合的截距/欧式距离场构成。S47. Obtain the current sensor pose through the visual SLAM method, convert the multi-information voxels in the sensor coordinate system into the representation in the map coordinate system, and then update them to the information voxel layer of the map; The method stores multiple multi-information voxels, and each voxel contains three-dimensional position information, semantic encoding information, whether it is occupied or not, and a mixed intercept/Euclidean distance field composition.
8. The map construction method for a visual robot according to claim 7, characterized in that the water-ripple algorithm proceeds as follows: a wave starts from a voxel, denoted v, and propagates to its 26-neighborhood; voxels whose ESDF distance has not yet been updated have their ESDF distance set to the ESDF value of voxel v plus the unit distance, and the newly updated voxels are placed in the ripple-propagation queue; for each voxel in the ripple-propagation queue, the above steps are executed recursively in turn, until all voxels have had their ESDF distance updated.

9. The map construction method for a visual robot according to claim 3, characterized in that step S5 specifically comprises:

S51. randomly sampling convex-hull growing points along the data-collection route;

S52. expanding outward from each convex-hull growing point of step S51, subject to height and volume limits, to form spatial convex hulls; and, based on the coordinate positions and the scene semantic information given by step S3, obtaining a set of spatial convex hulls carrying semantics;

S53. based on the set of semantic spatial convex hulls from step S52, using their mutual spatial adjacency to obtain an undirected graph representing the spatial connection relations;

S54. setting a threshold on how much of an obstacle's non-convex part a hull may absorb, and merging convex hulls so that the resulting hulls better match the intuitive spatial shape of the environment, yielding a set of larger, ellipse-like spatial convex hulls with semantic information; during this process, hulls are not merged when their semantics conflict.

10. The map construction method for a visual robot according to claim 3, characterized in that step S6 specifically comprises:

S61. for each three-dimensional semantic component obtained in step S4, finding, according to its own position, the semantic ellipse-like spatial convex hull from step S54 to which it belongs, thereby obtaining the connection relation from space node to component node;

S62. integrating all the space-node-to-component-node connection relations obtained in step S61 into the topology graph obtained in step S5, yielding the complete topology graph.
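The water-ripple propagation of claim 8 is, in effect, a breadth-first wavefront over the 26-connected voxel grid (the claim's "recursive" wording maps naturally onto an explicit queue). A sketch under those assumptions; the `allocated` set, which bounds the wave to voxels the map actually holds, is an illustrative addition and not a term from the patent:

```python
from collections import deque
from itertools import product

# All 26 neighbor offsets of a voxel (the zero offset is excluded).
NEIGHBORS_26 = [o for o in product((-1, 0, 1), repeat=3) if o != (0, 0, 0)]

def propagate_esdf(esdf, seeds, allocated, unit_distance):
    """Breadth-first 'water ripple' wavefront (claim 8): each propagation step
    assigns neighbor distance = source distance + unit_distance to voxels not
    yet updated, then continues the wave from the newly updated voxels."""
    queue = deque(seeds)
    while queue:
        v = queue.popleft()
        for off in NEIGHBORS_26:
            n = (v[0] + off[0], v[1] + off[1], v[2] + off[2])
            if n in allocated and n not in esdf:   # not yet updated
                esdf[n] = esdf[v] + unit_distance
                queue.append(n)                    # ripple continues from n

# Toy usage: a 4x4x4 block of allocated voxels, one surface seed at the origin.
allocated = {(i, j, k) for i in range(4) for j in range(4) for k in range(4)}
esdf = {(0, 0, 0): 0.0}
propagate_esdf(esdf, seeds=[(0, 0, 0)], allocated=allocated, unit_distance=0.05)
```

Read literally, the claim adds the same unit distance for axial and diagonal neighbors, so the propagated field approximates rather than exactly reproduces the Euclidean distance.

Claims 9 and 10 then turn the semantic convex hulls into a topology graph: hull adjacency yields space-node edges (S53), and each three-dimensional semantic component is attached to the hull containing it (S61, S62). A sketch with plain dictionaries; the predicates `hulls_adjacent` and `hull_contains` stand in for geometric tests the claims leave to the implementation:

```python
def build_topology(hulls, components, hulls_adjacent, hull_contains):
    """hulls: {hull_id: semantic_label}; components: {component_id: position}.
    Returns an undirected graph as an adjacency dict over space nodes (hulls)
    and component nodes."""
    graph = {h: set() for h in hulls}
    # S53: connect spatially adjacent hulls (space node <-> space node).
    ids = list(hulls)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if hulls_adjacent(a, b):
                graph[a].add(b)
                graph[b].add(a)
    # S61/S62: attach each semantic component to the hull it lies in.
    for c, pos in components.items():
        graph[c] = set()
        for h in hulls:
            if hull_contains(h, pos):
                graph[c].add(h)
                graph[h].add(c)
                break
    return graph
```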
CN201911023177.2A 2019-10-25 2019-10-25 A map representation system for visual robot and its construction method Active CN110807782B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911023177.2A CN110807782B (en) 2019-10-25 2019-10-25 A map representation system for visual robot and its construction method

Publications (2)

Publication Number Publication Date
CN110807782A true CN110807782A (en) 2020-02-18
CN110807782B CN110807782B (en) 2021-08-20

Family

ID=69489109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911023177.2A Active CN110807782B (en) 2019-10-25 2019-10-25 A map representation system for visual robot and its construction method

Country Status (1)

Country Link
CN (1) CN110807782B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107066507A (en) * 2017-01-10 2017-08-18 National University of Defense Technology A semantic map construction method based on a cloud-robot hybrid cloud architecture
CN108596974A (en) * 2018-04-04 2018-09-28 清华大学 Dynamic scene robot localization builds drawing system and method
CN109407115A (en) * 2018-12-25 2019-03-01 Sun Yat Sen University A road surface extraction system based on lidar and its extraction method
CN109917419A (en) * 2019-04-12 2019-06-21 中山大学 A Dense System and Method for Depth Filling Based on LiDAR and Image
CN110243370A (en) * 2019-05-16 2019-09-17 西安理工大学 A 3D Semantic Map Construction Method for Indoor Environment Based on Deep Learning
CN110363816A (en) * 2019-06-25 2019-10-22 广东工业大学 A deep learning-based approach to semantic mapping of mobile robot environments

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI WEI et al.: "Research on a bionics-based indoor map construction method for robots", Journal of Northeast Normal University (Natural Science Edition) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112703368A (en) * 2020-04-16 2021-04-23 华为技术有限公司 Vehicle positioning method and device and positioning layer generation method and device
CN112146660A (en) * 2020-09-25 2020-12-29 电子科技大学 An indoor map localization method based on dynamic word vector
CN112837372A (en) * 2021-03-02 2021-05-25 浙江商汤科技开发有限公司 Data generation method and device, electronic equipment and storage medium
CN114493152A (en) * 2021-12-30 2022-05-13 上海赛可出行科技服务有限公司 Method for automatically driving taxi position water wave scheduling
CN114493152B (en) * 2021-12-30 2025-05-23 上海赛可出行科技服务有限公司 Automatic taxi position water wave scheduling method
CN115097857A (en) * 2022-07-18 2022-09-23 浙江大学 Real-time trajectory planning method considering the shape of rotor UAV in complex environment
CN115097857B (en) * 2022-07-18 2024-04-30 浙江大学 Real-time trajectory planning method considering the shape of rotary-wing UAV in complex environment
CN115454055A (en) * 2022-08-22 2022-12-09 中国电子科技南湖研究院 Multilayer fusion map representation method for indoor autonomous navigation and operation
CN115454055B (en) * 2022-08-22 2023-09-19 中国电子科技南湖研究院 Multi-layer fusion map representation method for indoor autonomous navigation and operation

Also Published As

Publication number Publication date
CN110807782B (en) 2021-08-20

Similar Documents

Publication Publication Date Title
CN110807782B (en) A map representation system for visual robot and its construction method
CN111461245B (en) Wheeled robot semantic mapping method and system fusing point cloud and image
US11127189B2 (en) 3D skeleton reconstruction from images using volumic probability data
CN112784873B (en) Semantic map construction method and device
CN108564616B (en) Fast robust RGB-D indoor three-dimensional scene reconstruction method
CN103712617B (en) A creation method for a multi-layer semantic map based on visual content
CN105096386B (en) Automatic generation method for geometric maps of large-scale complex urban environments
CN111340939B (en) Indoor three-dimensional semantic map construction method
CN111563442A (en) Slam method and system for fusing point cloud and camera image data based on laser radar
CN110969648B (en) A 3D target tracking method and system based on point cloud sequence data
CN106940186A (en) A robot autonomous localization and navigation method and system
CN118521653B (en) Positioning and mapping method and system based on fusion of LiDAR and inertial measurement in complex scenes
CN113408584A (en) RGB-D multi-modal feature fusion 3D target detection method
CN112507056A (en) Map construction method based on visual semantic information
CN113885510B (en) Quadruped robot obstacle avoidance and leader-following method and system
CN114299386A (en) Laser SLAM method integrating laser odometry and loop-closure detection
CN108537214A (en) An automatic construction method of indoor semantic map
CN112991534A (en) Indoor semantic map construction method and system based on multi-granularity object model
CN116007607A (en) An Indoor Dynamic SLAM Method Based on Multi-source Semantic Awareness
CN118329053A (en) Static map construction method based on 3D laser radar in dynamic scene
CN117470218A (en) A positioning and mapping method that effectively combines environmental plane information
CN117671175A (en) Space-time multi-dimension-based digital twin system for forest complex environment and construction method thereof
CN115200601A (en) Navigation method, device, wheeled robot and storage medium
CN118687554A (en) A point cloud map management method for laser radar simultaneous positioning and mapping
CN118521702A (en) Point cloud rendering method and system based on nerve radiation field

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant