
CN118612555A - Environmental perception method of self-mobile device, self-mobile device, medium, device

Info

Publication number
CN118612555A
Authority
CN
China
Prior art keywords
point cloud
frame
image data
depth image
data
Prior art date
Legal status
Pending
Application number
CN202410304322.9A
Other languages
Chinese (zh)
Inventor
孙张阳
曹丽娜
陈佳搏
Current Assignee
Beijing Stone Innovation Technology Co ltd
Original Assignee
Beijing Stone Innovation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Stone Innovation Technology Co ltd
Priority to CN202410304322.9A
Publication of CN118612555A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N 23/70 Circuitry for compensating brightness variation in the scene

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure relates to the field of artificial intelligence technology, and provides an environment perception method for a self-mobile device, a self-mobile device, a computer storage medium, and an electronic device, wherein the self-mobile device is provided with a main depth camera. The method includes: during the movement of the self-mobile device, collecting each frame of depth image data through the main depth camera, the main depth camera being configured to alternately collect, in each data acquisition cycle, M frames of depth image data according to M preset exposure parameters; decoding each frame of depth image data to obtain the point cloud corresponding to each frame; and fusing the M frames of point clouds corresponding to each data acquisition cycle to obtain a first fused point cloud, which is then taken as the point cloud of each frame, so that environment perception is performed based on the per-frame point cloud. The present disclosure can improve the environmental perception accuracy of the self-mobile device and enhance the accuracy of its navigation, obstacle avoidance, and other functions.

Description

Environmental perception method of self-mobile device, self-mobile device, medium, device

Technical Field

The present disclosure relates to the field of artificial intelligence technology, and in particular to an environment perception method for a self-mobile device, a self-mobile device, a computer storage medium, and an electronic device.

Background Art

With the development of science and technology, self-mobile devices such as sweeping robots are used more and more widely. Providing a depth camera on a self-mobile device helps the device perceive its environment better.

However, when multiple objects at different distances are present in the field of view, a depth camera in the related art cannot see nearby objects and distant objects at the same time, which degrades the environmental perception accuracy of the self-mobile device.

In view of this, there is an urgent need in the art to develop a new environment perception method and device for self-mobile devices.

It should be noted that the information disclosed in the Background Art section above is provided only to enhance understanding of the background of the present disclosure.

Summary of the Invention

An object of the present disclosure is to provide an environment perception method for a self-mobile device, a self-mobile device, a computer storage medium, and an electronic device, so as to overcome, at least to a certain extent, the technical problem of insufficient environmental perception accuracy of self-mobile devices caused by the limitations of the related art.

Other features and advantages of the present disclosure will become apparent from the following detailed description, or may be learned in part through practice of the present disclosure.

According to a first aspect of the present disclosure, there is provided an environment perception method for a self-mobile device, wherein the self-mobile device is provided with a main depth camera, and the method comprises:

during the movement of the self-mobile device, collecting each frame of depth image data through the main depth camera, wherein the main depth camera is configured to alternately collect, in each data acquisition cycle, M frames of depth image data according to M preset exposure parameters, the M exposure parameters follow a monotonically increasing or monotonically decreasing trend, and M is an integer greater than 1;

decoding each frame of depth image data to obtain the point cloud corresponding to each frame of depth image data;

fusing the M frames of point clouds corresponding to each data acquisition cycle to obtain a first fused point cloud, and taking the first fused point cloud as the point cloud of each frame, so as to perform environment perception based on the point cloud of each frame.
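By way of illustration only, the three steps of the first aspect can be sketched as the minimal Python pseudo-implementation below; the camera object and the fuse_point_clouds helper are hypothetical placeholders, since the disclosure does not prescribe any particular API.

```python
# Minimal sketch of the first-aspect flow (hypothetical camera API).
def perceive_one_cycle(camera, exposures, fuse_point_clouds):
    """Collect M frames with the preset (monotone) exposure schedule,
    decode each raw frame into a point cloud, and fuse the M clouds."""
    clouds = []
    for expo in exposures:                            # M frames per cycle
        raw = camera.capture_frame(exposure=expo)     # raw depth image data
        clouds.append(camera.decode_to_point_cloud(raw))
    return fuse_point_clouds(clouds)                  # first fused point cloud
```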

In an exemplary embodiment of the present disclosure, before collecting each frame of depth image data through the main depth camera, the method further comprises:

collecting preview depth image data through the main depth camera according to default exposure parameters;

when the preview depth image data has been decoded and a preview point cloud is obtained, collecting each frame of depth image data through the main depth camera.

In an exemplary embodiment of the present disclosure, collecting each frame of depth image data through the main depth camera comprises:

collecting, through the main depth camera, the first frame of depth image data in each data acquisition cycle according to the first of the M exposure parameters;

when the first frame of depth image data has been decoded and the first frame of point cloud is obtained, collecting, through the main depth camera, the second frame of depth image data in each data acquisition cycle according to the second of the M exposure parameters;

and so on, until the (M-1)-th frame of depth image data has been decoded and the (M-1)-th frame of point cloud is obtained, whereupon the M-th frame of depth image data in each data acquisition cycle is collected through the main depth camera according to the M-th of the M exposure parameters.

In an exemplary embodiment of the present disclosure, fusing the M frames of point clouds corresponding to each data acquisition cycle to obtain the first fused point cloud comprises:

determining, from the M frames of point clouds, a first point cloud and a second point cloud to be fused;

determining a first time range according to a first timestamp corresponding to the first point cloud and a second timestamp corresponding to the second point cloud, and acquiring motion data of the self-mobile device within the first time range, the motion data comprising inertial measurement unit data and wheel odometer data;

fusing the first point cloud and the second point cloud based on the motion data within the first time range to obtain the first fused point cloud.

In an exemplary embodiment of the present disclosure, determining the first point cloud and the second point cloud to be fused from the M frames of point clouds comprises:

when each current frame of point cloud among the M frames of point clouds is acquired, determining the type of the current frame of point cloud according to its type label, wherein the type is determined according to the exposure parameter used when the corresponding current frame of depth image data was collected; the types include a first type and a second type, and the exposure parameter associated with the first type is smaller than the exposure parameter associated with the second type;

if the current frame of point cloud belongs to the first type, temporarily storing it in a data queue;

if the current frame of point cloud belongs to the second type, determining it as the first point cloud;

filtering the second point cloud out of the data queue according to the first timestamp corresponding to the first point cloud.

In an exemplary embodiment of the present disclosure, filtering the second point cloud out of the data queue according to the first timestamp corresponding to the first point cloud comprises:

removing invalid point clouds temporarily stored in the data queue whose timestamps are identical to the first timestamp;

selecting, from the remaining point clouds in the data queue, the target remaining point clouds whose time interval from the first timestamp is smaller than a preset interval threshold, as the second point cloud.

In an exemplary embodiment of the present disclosure, fusing the first point cloud and the second point cloud based on the motion data within the first time range to obtain the first fused point cloud comprises:

determining, by using a differential model based on the motion data within the first time range, pose transformation data of the self-mobile device within the first time range;

performing pose transformation on the second point cloud according to the pose transformation data within the first time range to obtain a first transformed point cloud;

merging the first transformed point cloud with the first point cloud to obtain the first fused point cloud.
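A minimal sketch of this transform-and-merge step follows, assuming the pose transformation data has already been expressed as a 4x4 homogeneous transform T; the matrix form is our assumption, as the disclosure only specifies that the pose change over the first time range is applied to the second point cloud before merging.

```python
import numpy as np

def merge_with_pose(pts_l, pts_s, T):
    """Bring the second point cloud pts_s (N, 3) into the frame of the
    first point cloud pts_l (N, 3) using the 4x4 transform T estimated
    over [ts, tl], then concatenate the two clouds."""
    homo = np.hstack([pts_s, np.ones((len(pts_s), 1))])  # (N, 4) homogeneous
    transformed = (T @ homo.T).T[:, :3]                  # first transformed point cloud
    return np.vstack([pts_l, transformed])               # first fused point cloud
```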

In an exemplary embodiment of the present disclosure, the main depth camera collects each frame of depth image data in a first working mode, and the first working mode comprises a dot matrix mode.

In an exemplary embodiment of the present disclosure, the self-mobile device is further provided with an auxiliary depth camera; the main depth camera and the auxiliary depth camera are arranged at different positions on the self-mobile device, and the fields of view of the auxiliary depth camera and the main depth camera intersect without fully overlapping.

In an exemplary embodiment of the present disclosure, the auxiliary depth camera has the first working mode and a second working mode, and the second working mode comprises an area array mode;

after the main depth camera has collected each frame of depth image data, the method further comprises:

collecting each frame of target depth image data through the auxiliary depth camera in the first working mode, wherein the auxiliary depth camera is configured to alternately collect, in each data acquisition cycle, N frames of target depth image data according to N preset exposure parameters, the N exposure parameters follow a monotonically increasing or monotonically decreasing trend, and N is an integer greater than 1;

decoding each frame of target depth image data to obtain the target point cloud corresponding to each frame of target depth image data;

fusing the N frames of target point clouds corresponding to each data acquisition cycle to obtain a second fused point cloud;

re-fusing the first fused point cloud and the second fused point cloud to obtain a third fused point cloud, and taking the third fused point cloud as the final point cloud of each frame, so as to perform environment perception based on the final point cloud of each frame.

In an exemplary embodiment of the present disclosure, collecting each frame of target depth image data through the auxiliary depth camera in the first working mode comprises:

setting, through the main depth camera, the trigger signal of the auxiliary depth camera to a valid level, so as to collect each frame of target depth image data through the auxiliary depth camera in the first working mode.

In an exemplary embodiment of the present disclosure, re-fusing the first fused point cloud and the second fused point cloud to obtain the third fused point cloud comprises:

acquiring a third timestamp corresponding to the first fused point cloud, and acquiring a fourth timestamp corresponding to the second fused point cloud;

determining a second time range according to the third timestamp and the fourth timestamp, and acquiring motion data of the self-mobile device within the second time range;

re-fusing the first fused point cloud and the second fused point cloud based on the motion data within the second time range to obtain the third fused point cloud.

In an exemplary embodiment of the present disclosure, re-fusing the first fused point cloud and the second fused point cloud based on the motion data within the second time range to obtain the third fused point cloud comprises:

determining, by using the differential model based on the motion data within the second time range, pose transformation data of the self-mobile device within the second time range;

performing pose transformation on the first fused point cloud according to the pose transformation data within the second time range to obtain a second transformed point cloud;

merging the second transformed point cloud with the second fused point cloud to obtain the third fused point cloud.

In an exemplary embodiment of the present disclosure, after each frame of target depth image data is collected through the auxiliary depth camera in the first working mode, the method further comprises:

after a specified time interval, collecting each frame of reference depth image data through the auxiliary depth camera in the second working mode according to automatic exposure parameters;

decoding each frame of reference depth image data to obtain the reference point cloud corresponding to each frame of reference depth image data, so as to perform environment perception based on each frame of reference point cloud, wherein the automatic exposure parameters depend on the ambient brightness;

wherein the specified time interval has a preset relationship with the frame rate of the auxiliary depth camera.

In an exemplary embodiment of the present disclosure, the frame rate of the auxiliary depth camera is greater than the frame rate of the main depth camera, and the two frame rates satisfy a preset numerical relationship.

In an exemplary embodiment of the present disclosure, the self-mobile device is further provided with an RGB camera; the RGB camera is arranged in combination with the auxiliary depth camera, and the frame rate of the RGB camera is the same as that of the main depth camera;

after the auxiliary depth camera collects each frame of target depth image data in the first working mode, the method further comprises:

collecting, through the RGB camera, each frame of image data while the self-mobile device is traveling, so as to perform environment perception based on each frame of image data.

In an exemplary embodiment of the present disclosure, collecting each frame of image data of the traveling self-mobile device through the RGB camera comprises:

setting, through the main depth camera, the trigger signal of the RGB camera to a valid level, so as to collect, through the RGB camera, each frame of image data while the self-mobile device is traveling.

According to a second aspect of the present disclosure, there is provided a self-mobile device, wherein the self-mobile device is provided with a main depth camera, and the self-mobile device comprises:

an acquisition module configured to collect, during the movement of the self-mobile device, each frame of depth image data through the main depth camera, wherein the main depth camera is configured to alternately collect, in each data acquisition cycle, M frames of depth image data according to M preset exposure parameters, the M exposure parameters follow a monotonically increasing or monotonically decreasing trend, and M is an integer greater than 1;

a decoding module configured to decode each frame of depth image data to obtain the point cloud corresponding to each frame of depth image data;

an environment perception module configured to fuse the M frames of point clouds corresponding to each data acquisition cycle to obtain a first fused point cloud, and to take the first fused point cloud as the point cloud of each frame, so as to perform environment perception based on the point cloud of each frame.

According to a third aspect of the present disclosure, there is provided a computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the environment perception method for a self-mobile device described in the first aspect.

According to a fourth aspect of the present disclosure, there is provided an electronic device comprising a processor and a memory for storing executable instructions of the processor, wherein the processor is configured to perform, by executing the executable instructions, the environment perception method for a self-mobile device described in the first aspect.

It can be seen from the above technical solutions that the environment perception method for a self-mobile device, the self-mobile device, the computer storage medium, and the electronic device in the exemplary embodiments of the present disclosure have at least the following advantages and positive effects:

In the technical solutions provided by some embodiments of the present disclosure, on the one hand, by acquiring and processing each frame of depth image data in real time during the movement of the self-mobile device, real-time environment perception can be achieved, providing a reliability guarantee for functions of the self-mobile device such as navigation, obstacle avoidance, and positioning. Further, by configuring the main depth camera to alternately collect, in each data acquisition cycle, M frames of depth image data according to M preset exposure parameters that follow a monotonically increasing or monotonically decreasing trend, the self-mobile device can see nearby objects clearly when the exposure parameter is small and distant objects clearly when the exposure parameter is large. On the other hand, by fusing the M frames of point clouds corresponding to each data acquisition cycle into a first fused point cloud and taking the first fused point cloud as the point cloud of each frame, the point clouds that resolve nearby objects and those that resolve distant objects are merged. This solves the problem in the related art that, when multiple objects at different distances are present in the field of view, the self-mobile device cannot see distant and nearby objects at the same time, improving the environmental perception accuracy of the self-mobile device and enhancing the accuracy of its navigation, obstacle avoidance, positioning, and other functions.

It is to be understood that the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the present disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure. Obviously, the drawings described below show only some embodiments of the present disclosure, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a schematic flowchart of an environment perception method for a self-mobile device in an embodiment of the present disclosure;

FIG. 2 is a schematic flowchart of how the main depth camera collects each of the M frames of depth image data in each data acquisition cycle in an embodiment of the present disclosure;

FIG. 3 is a schematic flowchart of how the M frames of point clouds corresponding to each data acquisition cycle are fused to obtain the first fused point cloud in an embodiment of the present disclosure;

FIG. 4 is a schematic flowchart of how the first point cloud and the second point cloud to be fused are determined from the M frames of point clouds in an embodiment of the present disclosure;

FIG. 5 is a schematic flowchart of how the first point cloud and the second point cloud are fused based on the motion data within the first time range to obtain the first fused point cloud in an embodiment of the present disclosure;

FIG. 6 is a schematic flowchart of how the auxiliary depth camera collects each frame of target depth image data in an embodiment of the present disclosure;

FIG. 7 is a schematic flowchart of how the first fused point cloud and the second fused point cloud are re-fused to obtain the third fused point cloud in an embodiment of the present disclosure;

FIG. 8 is a timing diagram of image output of the main depth camera, the auxiliary depth camera, and the RGB camera in an embodiment of the present disclosure;

FIG. 9 is a timing diagram of how exposure parameters are set for the main depth camera and the auxiliary depth camera in an embodiment of the present disclosure;

FIG. 10 is a schematic flowchart of how each frame of point cloud corresponding to a depth camera is obtained in an embodiment of the present disclosure;

FIG. 11 is a schematic flowchart of how each frame of fused point cloud is obtained for a single ToF camera in an embodiment of the present disclosure;

FIG. 12 is a schematic diagram of an environment perception method for a self-mobile device in an embodiment of the present disclosure;

FIG. 13 is a schematic structural diagram of a self-mobile device in an exemplary embodiment of the present disclosure;

FIG. 14 is a schematic structural diagram of an electronic device in an exemplary embodiment of the present disclosure.

DETAILED DESCRIPTION

Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments can, however, be implemented in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concepts of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in one or more embodiments in any suitable manner. In the following description, numerous specific details are provided to give a full understanding of the embodiments of the present disclosure. Those skilled in the art will recognize, however, that the technical solutions of the present disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so on. In other cases, well-known technical solutions are not shown or described in detail to avoid obscuring aspects of the present disclosure.

The terms "a", "an", "the", and "said" used in this specification indicate the presence of one or more elements, components, and the like; the terms "comprising" and "having" are open-ended and mean that additional elements, components, and the like may be present besides those listed; and the terms "first", "second", and the like are used only as labels, not as limits on the number of their objects.

In addition, the accompanying drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and repeated descriptions of them are omitted. Some of the block diagrams shown in the drawings are functional entities that do not necessarily correspond to physically or logically independent entities.

In the related art, when objects at different distances are present in the field of view, a single-exposure depth camera cannot see nearby and distant objects at the same time. Specifically, when the exposure parameter is large, nearby objects are overexposed and cannot be seen; when the exposure parameter is small, the light returned from distant objects is too weak for their distance to be measured.

The embodiments of the present disclosure first provide an environment perception method for a self-mobile device, which at least to a certain extent overcomes the defect of insufficient environmental perception accuracy of self-mobile devices in the related art.

FIG. 1 is a schematic flowchart of an environment perception method for a self-mobile device in an embodiment of the present disclosure; the method may be executed by the self-mobile device itself.

Referring to FIG. 1, an environment perception method for a self-mobile device according to an embodiment of the present disclosure includes the following steps:

Step S110: during the movement of the self-mobile device, collect each frame of depth image data through the main depth camera, the main depth camera being configured to alternately collect, in each data acquisition cycle, M frames of depth image data according to M preset exposure parameters, where the M exposure parameters follow a monotonically increasing or monotonically decreasing trend and M is an integer greater than 1;

Step S120: decode each frame of depth image data to obtain the point cloud corresponding to each frame of depth image data;

Step S130: fuse the M frames of point clouds corresponding to each data acquisition cycle to obtain a first fused point cloud, and take the first fused point cloud as the point cloud of each frame, so as to perform environment perception based on the point cloud of each frame.

In the technical solution provided by the embodiment shown in FIG. 1, on the one hand, real-time environment perception is achieved by acquiring and processing each frame of depth image data in real time during the movement of the self-mobile device, providing a reliability guarantee for its navigation, obstacle avoidance, positioning, and other functions; further, configuring the main depth camera to alternately collect M frames of depth image data per data acquisition cycle according to M preset, monotonically increasing or decreasing exposure parameters lets the self-mobile device see nearby objects clearly at small exposure parameters and distant objects clearly at large ones. On the other hand, fusing the M frames of point clouds of each cycle into a first fused point cloud that serves as the per-frame point cloud merges the point clouds that resolve nearby objects with those that resolve distant objects, which solves the related-art problem that the self-mobile device cannot see distant and nearby objects at the same time when multiple objects at different distances are present in the field of view, improving its environmental perception accuracy and the accuracy of its navigation, obstacle avoidance, positioning, and other functions.

The specific implementation of each step in FIG. 1 is described in detail below:

First, it should be noted that the depth cameras in the present disclosure (the main depth camera and the auxiliary depth camera) may be ToF cameras.

A ToF (Time of Flight) camera is a depth imaging camera. It works by emitting light toward a target object and measuring the time the light needs to travel back and forth between the lens and the object; this time difference is used to compute the distance to the object. In this way, a depth camera captures depth information and creates a 3D (three-dimensional) depth map. The technology provides accurate distance measurement in many scenarios, such as obstacle avoidance for autonomous vehicles and mobile AR (Augmented Reality)/VR (Virtual Reality) applications.
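The timing principle above reduces to one line of arithmetic: the measured interval covers the path to the object and back, so the distance is half the product with the speed of light. A tiny sketch (the 20 ns example value is ours):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance from a time-of-flight measurement: halve the round trip."""
    return C * round_trip_time_s / 2.0

print(tof_distance(20e-9))  # a 20 ns round trip is roughly 3 m
```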

In step S110, during the movement of the self-mobile device, each frame of depth image data is collected through the main depth camera.

In this step, the main depth camera may be a depth camera arranged on the left side of the self-mobile device, denoted ToF1 in the following embodiments. The frame rate of the main depth camera may be denoted fs1; a camera's frame rate, also called FPS (frames per second), is the number of images the camera can capture and output in one second.

The main depth camera may collect each frame of depth image data in the first working mode, which includes a dot matrix mode. The interval between two successive frames of depth image data collected by the main depth camera in the dot matrix mode may be 1000/fs1 milliseconds.

A dot matrix refers to the individual light-spot emitters used in a ToF sensor, usually arranged as a matrix. Each emitter can be controlled independently to emit light toward the target object and measure the return time, from which the object's distance and position are computed. The advantage of a dot matrix is its high accuracy: it can resolve smaller objects and finer details.

The main depth camera in the present disclosure is configured to alternately collect, in each data acquisition cycle, M frames of depth image data according to M preset exposure parameters (which may be exposure durations; the exposure duration, also called exposure time, is the time during which light reaches the film or sensor between the opening and closing of the camera shutter). The M exposure parameters may follow a monotonically increasing or monotonically decreasing trend, and M is an integer greater than 1.

For illustration, an exposure parameter in a first parameter interval (for example, the interval 0 to t11) may be called a short exposure parameter; an exposure parameter in a second parameter interval (whose two endpoint values are larger than those of the first parameter interval) may be called a medium exposure parameter; and an exposure parameter in a third parameter interval (for example, the interval t22 to t33, whose two endpoint values are larger than those of the second parameter interval) may be called a long exposure parameter. Assuming M is 2, the M (two) exposure parameters may be set as "short exposure parameter - long exposure parameter" or as "long exposure parameter - short exposure parameter"; assuming M is 3, the M (three) exposure parameters may be set as "short - medium - long" or as "long - medium - short". They can be set according to the actual situation, and the present disclosure places no particular limitation on this.
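For concreteness, a hypothetical M = 3 schedule might be configured as follows; the microsecond values and the monotonicity check are illustrative only, since the disclosure fixes neither the units nor the interval bounds t11, t22, t33.

```python
# Illustrative exposure schedules (values in microseconds, chosen arbitrarily).
SHORT, MEDIUM, LONG = 100, 400, 1600

SCHEDULE_M2 = [SHORT, LONG]            # "short - long", M = 2
SCHEDULE_M3 = [SHORT, MEDIUM, LONG]    # "short - medium - long", M = 3

# The schedule must be monotone, here monotonically increasing.
assert all(a < b for a, b in zip(SCHEDULE_M3, SCHEDULE_M3[1:]))
```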

By setting the M exposure parameters to follow a monotonically increasing or monotonically decreasing trend, the self-mobile device can, with the M differently valued exposure parameters, see nearby objects clearly when the exposure parameter is small and distant objects clearly when the exposure parameter is large.

Preferably, the difference between the first and the M-th of the M exposure parameters may be larger than a preset parameter difference threshold, which ensures that the first and the M-th exposure parameters belong to different types; for example, the first is a short exposure parameter and the M-th is a long exposure parameter, or vice versa. By making the M exposure parameters include both short and long exposure parameters, the numerical gap between different exposure parameters is enlarged, so that even when objects with a large distance gap are present in the field of view (for example, one very near object and one very far object), different exposure parameters can capture and clearly render all of them.

Preferably, since setting the M exposure parameters to a monotonically decreasing trend may cause a larger compensation error, the present disclosure may set the M exposure parameters to a monotonically increasing trend; that is, the M exposure parameters may be configured following the "short - long" or "short - medium - long" trends described above.

It should be noted that, during the movement of the self-mobile device, the main depth camera may first collect preview depth image data according to default exposure parameters. After the preview depth image data is obtained, the self-mobile device may decode it, and once the decoded preview point cloud is obtained, each frame of depth image data may be collected through the main depth camera.

Here, the preview depth image data refers to the initial depth image data collected by the main depth camera before the exposure parameters are configured, for example the depth image data first collected when the camera starts working.

It should be noted that the depth image data collected by the camera is the light signal from the lens converted into digitized raw image data, i.e., raw-format data. Raw is an unprocessed and uncompressed format that can be figuratively called a "digital negative"; it therefore needs to be decoded.
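The disclosure does not spell out the decoding itself; one common way to turn a decoded depth map into a point cloud is pinhole back-projection, sketched below under the assumption that the camera intrinsics fx, fy, cx, cy are known and invalid pixels carry depth 0.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map in meters into an (N, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                       # drop invalid pixels
```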

The following describes how the main depth camera collects the M frames of depth image data in each data acquisition cycle:

Referring to FIG. 2, FIG. 2 is a schematic flowchart of how the main depth camera collects each of the M frames of depth image data in each data acquisition cycle in an embodiment of the present disclosure, including steps S201 to S203:

In step S201, the first frame of depth image data in each data acquisition cycle is collected through the main depth camera according to the first of the M exposure parameters.

In this step, the case where M is 2 and the M exposure parameters are set as "short exposure parameter - long exposure parameter" is taken as an example. If the short exposure parameter is denoted expo1_l and the long exposure parameter is denoted expo1_h, the M exposure parameters form the sequence expo1_l - expo1_h, where expo1_l is the first of the M exposure parameters and expo1_h is the second, which is also the M-th.

Thus, in each data acquisition cycle, the first frame of depth image data may first be collected through the main depth camera according to the first of the M exposure parameters, i.e., according to expo1_l.

In step S202, when the first frame of depth image data has been decoded and the first frame of point cloud is obtained, the second frame of depth image data in each data acquisition cycle is collected through the main depth camera according to the second of the M exposure parameters.

In this step, after the first frame of depth image data is obtained, it may be decoded to obtain the first frame of point cloud; once its decoding is complete, the second of the M exposure parameters, expo1_h, may be used, and the second frame of depth image data is collected through the main depth camera according to expo1_h.

In step S203, this continues until, when the (M-1)-th frame of depth image data has been decoded and the (M-1)-th frame of point cloud is obtained, the main depth camera is controlled to collect the M-th frame of depth image data in each data acquisition cycle according to the M-th of the M exposure parameters.

In this step, since the (M-1)-th frame is the first frame when M is 2, this step corresponds to step S202 above: when the decoding of the first frame of depth image data is complete, the second of the M exposure parameters, expo1_h, is obtained, and the second frame of depth image data is collected through the main depth camera according to expo1_h.

When M is greater than 2, assuming M is 3, this step corresponds to collecting, through the main depth camera according to the third of the M exposure parameters, the third frame of depth image data in each data acquisition cycle once the second frame of depth image data has been decoded and the second frame of point cloud is obtained.

By cyclically collecting M frames of depth image data according to the M exposure parameters in each data acquisition cycle, M frames of depth image data alternately collected along the lines of "short - long" or "short - medium - long" are obtained for each data acquisition cycle.
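The decode-then-trigger pipelining of steps S201 to S203 can be sketched as follows; the camera methods are the same hypothetical placeholders used earlier.

```python
def acquire_cycle(camera, exposures):
    """One acquisition cycle: the (k+1)-th capture is triggered only after
    the k-th frame has been decoded into a point cloud (steps S201-S203)."""
    clouds = []
    raw = camera.capture_frame(exposure=exposures[0])      # first frame
    for next_expo in exposures[1:]:
        clouds.append(camera.decode_to_point_cloud(raw))   # decode frame k
        raw = camera.capture_frame(exposure=next_expo)     # capture frame k+1
    clouds.append(camera.decode_to_point_cloud(raw))       # decode the M-th frame
    return clouds
```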

Next, referring to FIG. 1, in step S120, each frame of depth image data is decoded to obtain the point cloud corresponding to each frame of depth image data.

In this step, after each frame of depth image data is obtained, it may be decoded to obtain the corresponding point cloud for that frame.

A point cloud is a data structure used to represent the shape of objects in three-dimensional space; it dynamically stores a collection of data points usually produced by a lidar system or other measuring instruments (such as a coordinate measuring machine, a 3D laser scanner, or a photogrammetric scanner). Each point has coordinate values describing the object's shape at that location in 3D space, and the points can carry rich information such as three-dimensional coordinates (X, Y, Z), color, classification values, intensity values, and time. A point cloud is a massive set of points expressing the spatial distribution and surface characteristics of a target in a common spatial reference system; with sufficiently precise point clouds, the real world can be reconstructed.
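To make the listed attributes concrete, a per-point record could look like the following; the field set is illustrative, not mandated by the text.

```python
from dataclasses import dataclass

@dataclass
class CloudPoint:
    x: float           # three-dimensional coordinates
    y: float
    z: float
    intensity: float   # return-signal strength
    timestamp: float   # acquisition time, used below for fusion pairing
```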

It should be noted that when each frame of depth image data has been decoded and that frame's point cloud is obtained, the timestamp at which the point cloud was obtained may be recorded.

In step S130, the M frames of point clouds corresponding to each data acquisition cycle are fused to obtain the first fused point cloud, and the first fused point cloud is taken as the point cloud of each frame, so as to perform environment perception based on the point cloud of each frame.

In this step, the M frames of point clouds corresponding to each data acquisition cycle may be fused into the first fused point cloud, which then serves as the point cloud of each frame, so that the self-mobile device can perceive the environment based on the per-frame point cloud.

Referring to FIG. 3, FIG. 3 is a schematic flowchart of how the M frames of point clouds corresponding to each data acquisition cycle are fused to obtain the first fused point cloud in an embodiment of the present disclosure, including steps S301 to S303:

In step S301, the first point cloud and the second point cloud to be fused are determined from the M frames of point clouds.

In this step, since the main depth camera collects depth image data frame by frame, the M frames of point clouds may be fused according to the type of the current frame of point cloud obtained each time.

Specifically, referring to FIG. 4, FIG. 4 is a schematic flowchart of how the first point cloud and the second point cloud to be fused are determined from the M frames of point clouds in an embodiment of the present disclosure, including steps S401 to S404:

In step S401, when each current frame of point cloud among the M frames of point clouds is acquired, the type of the current frame of point cloud is determined according to its type label.

In this step, when each current frame of point cloud is acquired, its type can be determined. Illustratively, the type of the current frame of point cloud is determined according to the exposure parameter used when the corresponding current frame of depth image data was collected. The types may include a first type and a second type, with the exposure parameter associated with the first type smaller than that associated with the second type; illustratively, the first type may be the point cloud corresponding to depth image data collected with the short exposure parameter (a short-exposure point cloud), and the second type may be the point cloud corresponding to depth image data collected with the long exposure parameter (a long-exposure point cloud).

In step S402, if the current frame of point cloud belongs to the first type, it is temporarily stored in a data queue.

In this step, if the current frame of point cloud belongs to the first type, i.e., it is a short-exposure point cloud, it may be temporarily stored in the data queue.

In step S403, if the current frame of point cloud belongs to the second type, it is determined as the first point cloud.

In this step, if the current frame of point cloud belongs to the second type, i.e., it is a long-exposure point cloud, it may be determined as the first point cloud.

In step S404, the second point cloud is filtered out of the data queue according to the first timestamp corresponding to the first point cloud.

In this step, after the first point cloud is determined, the second point cloud may be filtered out of the data queue according to the first timestamp corresponding to the first point cloud. Specifically, to avoid the situation in which the same position has both a frame collected with the short exposure parameter and a frame collected with the long exposure parameter, the invalid point clouds temporarily stored in the data queue whose timestamps are identical to the first timestamp may first be removed; then, from the remaining point clouds in the data queue (the queue temporarily stores the first-type, short-exposure point clouds), the target remaining point clouds whose time interval from the first timestamp is smaller than the preset interval threshold are selected as the second point cloud.
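A minimal sketch of the routing and filtering of steps S401 to S404 follows; the cloud objects with type_label and timestamp attributes, and the threshold value, are our assumptions.

```python
from collections import deque

INTERVAL_THRESHOLD = 0.05   # seconds; illustrative value only
short_queue = deque()       # buffers first-type (short-exposure) clouds

def on_decoded_cloud(cloud):
    """Buffer short-exposure clouds; on a long-exposure cloud, pick its
    fusion partner(s) from the queue by timestamp."""
    if cloud.type_label == "short":          # first type: store and wait
        short_queue.append(cloud)
        return None
    first = cloud                            # second type: the first point cloud
    # Remove invalid clouds whose timestamp equals the first timestamp,
    # then keep those close enough in time to serve as the second point cloud.
    second = [c for c in short_queue
              if c.timestamp != first.timestamp
              and abs(first.timestamp - c.timestamp) < INTERVAL_THRESHOLD]
    short_queue.clear()
    return first, second
```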

After the first point cloud and the second point cloud to be fused are determined, reference may be made again to FIG. 3. In step S302, a first time range is determined according to the first timestamp corresponding to the first point cloud and the second timestamp corresponding to the second point cloud, and motion data of the self-mobile device within the first time range is acquired.

In this step, exemplarily, the first point cloud may be denoted Pts_l, and the first timestamp corresponding to the first point cloud may be tl; the second point cloud may be denoted Pts_s, and the second timestamp corresponding to the second point cloud may be ts.

Thus, the motion data of the self-mobile device within the first time range (from time ts to time tl) can be acquired. The motion data may include inertial measurement unit data (i.e., imu data) and wheel odometer data (i.e., odom data; odom data is an important component of the navigation and positioning of the self-mobile device, and helps the self-mobile device move autonomously and recognize its position in indoor or outdoor environments).

Here, imu is short for inertial measurement unit. It is a sensor mainly used to detect and measure acceleration and rotational motion (for example, speed, tilt, rotation and even angle). An imu is usually composed of multiple sensors, including but not limited to inclinometers, accelerometers, gyroscopes, magnetometers and barometers. These sensors work together, and by processing their data, information such as the motion, heading and attitude (roll angle, pitch angle and yaw angle) of an object can be obtained. Specifically, the inclinometer can be used to measure the vertical tilt of the object relative to the ground, while the accelerometer measures inertia-related acceleration, including rotation, gravity and other forms of linear acceleration. The gyroscope can measure the angular velocity about three axes, namely pitch, roll and yaw. The magnetometer measures the magnetic field and can be used to correct the pose. In addition, the imu may also include a barometer to correct altitude information. Through the combination of these sensors, accurate measurement and positioning of the motion of an object can be achieved.

In step S303, based on the motion data within the first time range, the first point cloud and the second point cloud are fused to obtain the first fused point cloud.

In this step, reference may be made to FIG. 5, which is a schematic flowchart of how the first point cloud and the second point cloud are fused to obtain the first fused point cloud based on the motion data within the first time range in an embodiment of the present disclosure, including steps S501 to S503:

In step S501, a differential-drive model is used to determine pose transformation data of the self-mobile device within the first time range based on the motion data within the first time range.

In this step, the differential-drive model is used to estimate the movement of the robot in a given direction, based on the changes in the robot's speed and heading and on the distance the wheels have travelled during this period. Using the imu data and the odom data, the change in the robot's pose (position and attitude) from time ts to time tl can be estimated.

Exemplarily, the differential-drive model can derive, based on the above imu data and odom data, the pose transformation data of the self-mobile device from time ts to time tl; the pose transformation data may be a pose transformation matrix, denoted Ts_l.
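As a minimal sketch of how such a differential-drive model could integrate the motion data into Ts_l, assuming planar motion with linear speed v from the wheel odometer and yaw rate w from the imu gyroscope (the sample layout and names are assumptions):

```python
import numpy as np

def integrate_pose(samples):
    """samples: iterable of (dt, v, w) tuples covering the interval [ts, tl].

    Returns a 3x3 homogeneous matrix Ts_l describing the planar pose change."""
    x = y = theta = 0.0
    for dt, v, w in samples:
        x += v * np.cos(theta) * dt   # advance along the current heading
        y += v * np.sin(theta) * dt
        theta += w * dt               # accumulate the yaw from the gyroscope
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])
```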

In step S502, pose transformation is performed on the second point cloud according to the pose transformation data within the first time range, to obtain a first transformed point cloud.

In this step, the second point cloud Pts_s can be pose-transformed based on the pose transformation data within the first time range, so as to convert it into the first transformed point cloud Pts_s' corresponding to time tl. Specifically, Pts_s' = (Ts_l)^(-1) × Pts_s, where (Ts_l)^(-1) denotes the inverse of the pose transformation matrix Ts_l.

In step S503, the first transformed point cloud is merged with the first point cloud to obtain the first fused point cloud.

In this step, the first transformed point cloud Pts_s' can be merged with the first point cloud Pts_l to obtain the first fused point cloud pts_1.
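Steps S502 and S503 can be sketched together, assuming planar point clouds stored as (N, 2) NumPy arrays and the Ts_l matrix produced by integrate_pose() above:

```python
import numpy as np

def fuse_clouds(pts_s: np.ndarray, pts_l: np.ndarray, Ts_l: np.ndarray) -> np.ndarray:
    """Transform the short-exposure cloud to time tl and merge it with the long-exposure cloud."""
    ones = np.ones((pts_s.shape[0], 1))
    homo = np.hstack([pts_s, ones])                       # lift to homogeneous coordinates
    # Pts_s' = (Ts_l)^(-1) x Pts_s: express the earlier cloud in the frame at time tl.
    pts_s_prime = (np.linalg.inv(Ts_l) @ homo.T).T[:, :2]
    # Merge the transformed cloud with the long-exposure cloud (step S503).
    return np.vstack([pts_s_prime, pts_l])
```

Real devices work with 3D points and 4x4 transforms; the planar form is kept here only to stay consistent with the differential-drive sketch above.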

Similarly, for the M frames of depth image data contained in each data acquisition cycle, the corresponding first fused point cloud can be obtained based on the above process. After each first fused point cloud is obtained, the first fused point cloud can be used anew as the point cloud of each frame, so that the self-mobile device can perform environment perception based on the point cloud of each frame.

In the present disclosure, motion compensation based on the imu and odom data is performed when the first fused point cloud is generated, which ensures that the obtained first fused point cloud is free of data distortion. By using the first fused point cloud anew as the point cloud of each frame, the self-mobile device can clearly perceive objects at different distances within its field of view based on the point cloud of each frame, improving the environment perception accuracy of the self-mobile device.

In an optional implementation, the self-mobile device in the present disclosure may further be provided with an auxiliary depth camera (denoted ToF2). The frame rate of the auxiliary depth camera may be fs2, where fs2 = 2fs1 + 1.

The main depth camera and the auxiliary depth camera are arranged at different positions on the self-mobile device. Exemplarily, the auxiliary depth camera may be a depth camera arranged at the front of the self-mobile device, denoted ToF2 in the following embodiments. Thus, the field-of-view intervals of the auxiliary depth camera and the main depth camera intersect without coinciding.

The field of view of a camera refers to the range the camera can observe; the larger the field of view, the larger the observation range. By arranging the auxiliary depth camera and the main depth camera so that their field-of-view intervals intersect without coinciding, the main depth camera and the auxiliary depth camera can collect depth image data of the environment simultaneously.

It should be noted that the auxiliary depth camera has two working modes, namely the above first working mode and a second working mode, where the second working mode includes an area-array mode.

Here, an area array refers to a relatively large planar emitter used in a ToF camera, usually covering the entire sensor field of view. The area-array emitter can emit a whole plane of light toward the target object and use the entire plane as the optical path to measure the distance and position information of the object. The advantages of the area array are fast processing and the ability to measure the distance and position information of multiple target objects simultaneously. An area-array camera is an imaging tool that captures images in units of "planes"; it can be exposed within a short time and acquire a complete target image at once, has the advantage of intuitive image measurement, and performs pixel-matrix imaging.

The auxiliary depth camera is controlled by the main depth camera, works in slave mode, and cooperates with the main depth camera. The process by which the auxiliary depth camera collects data based on the dot-matrix mode is described below. Referring to FIG. 6, FIG. 6 is a schematic flowchart of how the auxiliary depth camera collects each frame of target depth image data in an embodiment of the present disclosure, including steps S601 to S604:

In step S601, after the main depth camera collects each frame of depth image data, each frame of target depth image data is collected by the auxiliary depth camera according to the first working mode.

In this step, after the main depth camera collects each frame of depth image data, the main depth camera ToF1 can set the trigger signal of the auxiliary depth camera to an active level (for example, a high level), so that each frame of target depth image data is collected by the auxiliary depth camera in the dot-matrix mode. The interval between two frames of target depth image data collected by the auxiliary depth camera in the dot-matrix mode may be 1000/fs1.

Here, when the auxiliary depth camera collects each frame of target depth image data in the dot-matrix mode, it adopts the same mechanism as the main depth camera does when collecting each frame of depth image data. Specifically, the auxiliary depth camera is configured to collect, within each data acquisition cycle, N frames of target depth image data alternately according to N preset exposure parameters (N is an integer greater than 1; N may or may not be equal to M and can be set according to actual conditions, which is not specially limited in the present disclosure). The N exposure parameters here also exhibit a monotonically increasing or monotonically decreasing trend, and their specific settings are similar to those of the above M exposure parameters, which will not be repeated here.

Preferably, referring to the relevant explanation of step S110 above, in order to avoid the relatively large compensation error that may arise when the N exposure parameters are set to a monotonically decreasing trend, the present disclosure may set the above N exposure parameters to a monotonically increasing trend.

Preferably, referring to the relevant explanation of step S110 above, the parameter difference between the first and the N-th of the N exposure parameters may also be set to be greater than a preset parameter difference threshold.

Exemplarily, taking N as 2 and the N exposure parameters set as "short exposure parameter - long exposure parameter" as an example: if the short exposure parameter is denoted expo2_l and the long exposure parameter is denoted expo2_h, the sequence formed by the N exposure parameters can be expressed as expo2_l - expo2_h, where expo2_l is the first of the N exposure parameters and expo2_h is the second, which is also the N-th.

Thus, after the main depth camera ToF1 sets the trigger signal of the auxiliary depth camera to the active level (high level), the auxiliary depth camera ToF2 can first collect preview depth image data according to a default exposure parameter; once the preview depth image data is decoded and the preview point cloud is obtained, the auxiliary depth camera can collect each frame of target depth image data based on the dot-matrix mode.

Specifically, in the case where N is 2, the auxiliary depth camera ToF2 can first collect the first frame of target depth image data in each data acquisition cycle according to the first of the N exposure parameters (i.e., expo2_l); once the first frame of target depth image data is decoded and the first frame of point cloud is obtained, the auxiliary depth camera ToF2 can collect the second frame of target depth image data in each data acquisition cycle according to the second of the N exposure parameters (i.e., expo2_h), so as to obtain the two frames of target depth image data in each data acquisition cycle. For the specific process by which the auxiliary depth camera ToF2 collects each frame of target depth image data, reference may be made to the relevant explanation of step S110 above, which will not be repeated here.
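The alternating acquisition with N = 2 can be sketched as a simple loop; the camera interface (capture_raw, decode) is a hypothetical stand-in, since the disclosure does not specify a concrete API:

```python
EXPOSURES = ["expo2_l", "expo2_h"]  # short then long, i.e., monotonically increasing

def acquisition_cycle(camera):
    """Collect the N frames of one data acquisition cycle, decoding between frames."""
    clouds = []
    for expo in EXPOSURES:
        raw = camera.capture_raw(exposure=expo)  # one raw frame per exposure setting
        clouds.append(camera.decode(raw))        # decode completion gates the next exposure
    return clouds
```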

In step S602, each frame of target depth image data is decoded to obtain each frame of target point cloud corresponding to each frame of target depth image data.

In this step, referring to the relevant explanation of step S120 above, each frame of target depth image data can be decoded to obtain the corresponding frame of target point cloud.

In step S603, the N frames of target point clouds corresponding to each data acquisition cycle are fused to obtain a second fused point cloud.

In this step, referring to the relevant explanation of step S130 above, the N frames of target point clouds corresponding to each data acquisition cycle can be fused to obtain the second fused point cloud.

Specifically, with reference to the relevant explanation of FIG. 3 above, the third point cloud (long-exposure point cloud) and the fourth point cloud (short-exposure point cloud) to be fused are first determined from the N frames of target point clouds; then, a third time range is determined according to the fifth timestamp corresponding to the third point cloud and the sixth timestamp corresponding to the fourth point cloud, and the motion data (i.e., imu data and odom data) of the self-mobile device within the third time range is acquired; further, the differential-drive model can be used to determine, based on the motion data within the third time range, the pose transformation data of the self-mobile device within the third time range; then, pose transformation can be performed on the fourth point cloud according to the pose transformation data within the third time range to obtain a third transformed point cloud; finally, the third transformed point cloud can be merged with the third point cloud to obtain the second fused point cloud pts_2.

In step S604, the first fused point cloud and the second fused point cloud are fused again to obtain a third fused point cloud, and the third fused point cloud is used anew as the final point cloud of each frame, so that environment perception is performed based on the final point cloud of each frame.

In this step, after the first fused point cloud pts_1 of each frame corresponding to ToF1 and the second fused point cloud pts_2 of each frame corresponding to the dot-matrix mode of ToF2 are obtained, the first fused point cloud and the second fused point cloud can be fused again to obtain the third fused point cloud.

Referring to FIG. 7, FIG. 7 is a schematic flowchart of how the first fused point cloud and the second fused point cloud are fused again to obtain the third fused point cloud in an embodiment of the present disclosure, including steps S701 to S703:

In step S701, a third timestamp corresponding to the first fused point cloud is acquired, and a fourth timestamp corresponding to the second fused point cloud is acquired.

In this step, the third timestamp t4 corresponding to the first fused point cloud pts_1 can be acquired, and the fourth timestamp t5 corresponding to the second fused point cloud pts_2 can be acquired.

In step S702, a second time range is determined according to the third timestamp and the fourth timestamp, and motion data of the self-mobile device within the second time range is acquired.

In this step, the second time range (i.e., from time t4 to time t5) can be determined according to the third timestamp and the fourth timestamp, so that the motion data (i.e., imu data and odom data) of the self-mobile device from time t4 to time t5 can be acquired.

In step S703, based on the motion data within the second time range, the first fused point cloud and the second fused point cloud are fused again to obtain the third fused point cloud.

In this step, exemplarily, the differential-drive model can be used to determine, based on the motion data within the second time range, the pose transformation data of the self-mobile device within the second time range, for example a pose transformation matrix T4_5. Then, pose transformation is performed on the first fused point cloud pts_1 according to the pose transformation data within the second time range to obtain a second transformed point cloud pts_1'; specifically, pts_1' = (T4_5)^(-1) × pts_1, where (T4_5)^(-1) denotes the inverse of the pose transformation matrix within the second time range. Further, the second transformed point cloud pts_1' can be merged with the second fused point cloud pts_2 to obtain the third fused point cloud.
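Under the same assumptions as the earlier sketches, the re-fusion of step S703 reuses the same two helpers; motion_samples_t4_to_t5 is a hypothetical placeholder for the imu and odom samples between t4 and t5:

```python
T4_5 = integrate_pose(motion_samples_t4_to_t5)  # pose change of the device from t4 to t5
pts_3 = fuse_clouds(pts_1, pts_2, T4_5)         # pts_1' = (T4_5)^(-1) x pts_1, merged with pts_2
```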

After the third fused point cloud is obtained, the third fused point cloud can be used anew as the final point cloud of each frame, so that the self-mobile device can perform environment perception, for example mapping, positioning and obstacle avoidance, based on the final point cloud of each frame.

In the present disclosure, the first fused point cloud obtained from the main depth camera in the dot-matrix mode and the second fused point cloud obtained from the auxiliary depth camera in the dot-matrix mode are fused again to obtain the final point cloud of each frame, which makes the collected data more comprehensive and the motion distortion smaller, thereby improving the environment perception accuracy of the self-mobile device.

In an optional implementation, after the auxiliary depth camera ToF2 collects each frame of target depth image data in the first working mode, and after a specified interval, the auxiliary depth camera can collect each frame of reference depth image data in the area-array mode according to automatic exposure parameters (the interval between two frames of reference depth image data collected by the auxiliary depth camera in the area-array mode may be 1000/fs1). Exemplarily, the automatic exposure parameters may depend on the ambient brightness. There is a preset relationship between the specified interval and the frame rate of the auxiliary depth camera; exemplarily, the interval may be 1000/fs2.

Specifically, after the specified interval (1000/fs2), ToF2 can first collect preview depth image data based on the area-array mode; once the preview depth image data is decoded and the preview point cloud is obtained, an automatic exposure parameter expo3_1 can be acquired and the first frame of reference depth image data collected based on expo3_1; once the first frame of reference depth image data is decoded and the first frame of point cloud is obtained, an automatic exposure parameter expo3_2 can be acquired and the second frame of reference depth image data collected based on expo3_2; in a similar manner, multiple frames of reference depth image data are collected based on the area-array mode. Thus, the self-mobile device can perform environment perception, for example obstacle-avoidance processing, based on each frame of reference point cloud corresponding to each frame of reference depth image data.
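A minimal sketch of this area-array auto-exposure loop follows; compute_auto_exposure() is a hypothetical stand-in for however the device derives the next exposure from ambient brightness, which the disclosure leaves open:

```python
def area_array_loop(camera, n_frames):
    """Collect reference frames in area-array mode, updating the exposure after each decode."""
    raw = camera.capture_raw(mode="area", exposure="default")  # preview frame
    cloud = camera.decode(raw)
    expo = compute_auto_exposure(cloud)  # expo3_1, derived from ambient brightness
    clouds = []
    for _ in range(n_frames):
        raw = camera.capture_raw(mode="area", exposure=expo)
        cloud = camera.decode(raw)
        clouds.append(cloud)                 # reference point cloud for this frame
        expo = compute_auto_exposure(cloud)  # expo3_2, expo3_3, ...
    return clouds
```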

By configuring ToF2 with two working modes that combine the dot-matrix and area-array modes, the present disclosure allows the target depth image data collected in the dot-matrix mode to be fused with the depth image data collected by the main depth camera, making the observation range more comprehensive; this ensures the ranging accuracy of distant point clouds and reduces the various problems caused by multipath (i.e., the signal being reflected multiple times before reaching the target). Meanwhile, the reference depth image data collected in the area-array mode covers closer distances than the dot-matrix mode, which guarantees the number of nearby point clouds, enables the self-mobile device to perceive and respond to changes in the surrounding environment more accurately, and ensures the obstacle-avoidance effect of the self-mobile device.

In an optional implementation, the self-mobile device in the present disclosure may further be provided with an RGB camera, which may be arranged in combination with the above auxiliary depth camera.

The frame rate of the RGB camera may be fs1, the same as that of the main depth camera, so the interval between two frames of image data collected by the RGB camera may be 1000/fs1. The RGB camera is also controlled by the main depth camera. Specifically, after the auxiliary depth camera ToF2 collects each frame of target depth image data in the dot-matrix mode, the main depth camera can set the trigger signal of the RGB camera to the active level (high level), so that each frame of image data of the self-mobile device during its travel is collected through the RGB camera, and the self-mobile device can perform environment perception based on each frame of image data. In this way, the present disclosure avoids the problem that, because the exposure time of the RGB camera is relatively long, the RGB camera would otherwise capture the light spots produced while ToF2 is collecting depth image data, thereby ensuring that the image data collected by the RGB camera is clearly visible.

Through the combined arrangement of the RGB camera and the ToF cameras in the present disclosure, the RGB camera can capture visible-light images and provide color or grayscale image data, which are very important for recognizing the color, texture and details of objects, while the ToF cameras can provide depth information of the scene. The two therefore provide complementary information, enabling more accurate object recognition, tracking and positioning in a variety of application scenarios.

Referring to FIG. 8, FIG. 8 is a timing diagram of the frame output of the main depth camera, the auxiliary depth camera and the RGB camera in an embodiment of the present disclosure (i.e., at what times the main depth camera, the auxiliary depth camera and the RGB camera respectively start collecting data). As shown in FIG. 8:

the main depth camera is denoted ToF1, and its frame rate is denoted fs1; the auxiliary depth camera is denoted ToF2, and its frame rate is denoted fs2, where fs2 = 2fs1 + 1; the frame rate of the RGB camera is denoted fs1;

ToF1 has only one working mode, namely the dot-matrix mode; ToF1 starts working first, and completes the collection of one frame of depth image data at time t1; the interval between two frames of depth image data output by ToF1 is 1000/fs1;

ToF2 has two working modes, namely the dot-matrix mode and the area-array mode; after ToF1 completes the collection of one frame of depth image data, i.e., at time t1, ToF1 sets the trigger signal of ToF2 to a high level and ToF2 starts working based on the dot-matrix mode, i.e., ToF2 starts collecting each frame of target depth image data based on the dot-matrix mode; ToF2 completes the collection of one frame of target depth image data at time t2; the interval between two frames of target depth image data output by ToF2 is 1000/fs1;

after ToF2 collects each frame of target depth image data based on the dot-matrix mode, and after an interval of 1000/fs2, ToF2 starts working based on the area-array mode, i.e., ToF2 collects each frame of reference depth image data based on the area-array mode;

when ToF2 completes the collection of one frame of target depth image data based on the dot-matrix mode, i.e., at time t2, ToF1 sets the trigger signal of the RGB camera to a high level and the RGB camera starts working, i.e., the RGB camera starts collecting each frame of image data; the interval between two frames of image data output by the RGB camera is 1000/fs1.
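The timing relations of FIG. 8 can be checked numerically with a small sketch; the concrete values of fs1, t1 and t2 below are illustrative assumptions, and only the relations between the streams come from the text:

```python
fs1 = 10              # frame rate of ToF1 and the RGB camera (fps), hypothetical
fs2 = 2 * fs1 + 1     # frame rate of ToF2, per fs2 = 2*fs1 + 1
t1, t2 = 35.0, 70.0   # ms; completion times of the first ToF1 and ToF2 dot-matrix frames

tof1_frames = [t1 + k * 1000 / fs1 for k in range(3)]  # ToF1 frames, spaced 1000/fs1
tof2_dot    = [t2 + k * 1000 / fs1 for k in range(3)]  # ToF2 dot-matrix frames, spaced 1000/fs1
tof2_area   = [t + 1000 / fs2 for t in tof2_dot]       # area-array frame 1000/fs2 after each dot frame
rgb_frames  = [t2 + k * 1000 / fs1 for k in range(3)]  # RGB triggered at t2, spaced 1000/fs1
```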

Referring to FIG. 9, FIG. 9 is a timing diagram of how the exposure parameters are set for the main depth camera (ToF1) and the auxiliary depth camera (ToF2) in an embodiment of the present disclosure. As shown in FIG. 9:

with both M and N taken as 2, the M exposure parameters are two exposure parameters (taking the first as a short exposure parameter and the second as a long exposure parameter as an example), and the N exposure parameters are likewise two exposure parameters (taking the first as a short exposure parameter and the second as a long exposure parameter as an example); the moments at which two consecutive frames of ToF1 finish decoding may be denoted t4_1 and t4_2, the moments at which two consecutive frames of ToF2 in the dot-matrix mode finish decoding may be denoted t5_1 and t5_2, and the moments at which two consecutive frames of ToF2 in the area-array mode finish decoding may be denoted t6_1 and t6_2;

specifically, the short exposure parameter of ToF1 is denoted expo1_l and its long exposure parameter expo1_h; when the preview depth image data of ToF1 finishes decoding at time t4_1, the short exposure parameter expo1_l is acquired and the first frame of depth image data is collected according to expo1_l; when the first frame of depth image data finishes decoding at time t4_2, the long exposure parameter expo1_h is acquired and the second frame of depth image data is collected according to expo1_h; subsequent frames of depth image data are collected following the "short exposure parameter - long exposure parameter" cycle;

the dot-matrix short exposure parameter of ToF2 is denoted expo2_l and its long exposure parameter expo2_h; when the preview depth image data collected by ToF2 based on the dot-matrix mode finishes decoding at time t5_1, the short exposure parameter expo2_l is acquired and the first frame of target depth image data is collected according to expo2_l; when the first frame of target depth image data finishes decoding at time t5_2, the long exposure parameter expo2_h is acquired and the second frame of target depth image data is collected according to expo2_h; subsequent frames of target depth image data are collected following the "short exposure parameter - long exposure parameter" cycle;

in the area-array mode of ToF2, the automatic exposure parameters of two consecutive frames are denoted expo3_1 and expo3_2; when the preview depth image data collected by ToF2 based on the area-array mode finishes decoding at time t6_1, the automatic exposure parameter expo3_1 is acquired and the first frame of reference depth image data is collected according to expo3_1; when the first frame of reference depth image data finishes decoding at time t6_2, the automatic exposure parameter expo3_2 is acquired and the second frame of reference depth image data is collected according to expo3_2; subsequent frames of reference depth image data are collected with the computed automatic exposure parameters set in sequence.

Referring to FIG. 10, FIG. 10 is a schematic flowchart of how each frame of point cloud corresponding to a depth camera (the main depth camera, or the auxiliary depth camera in the dot-matrix mode) is obtained in an embodiment of the present disclosure, including steps S1001 to S1004:

In step S1001, the ToF camera is turned on;

In step S1002, a ToF raw image is acquired;

In step S1003, the ToF raw image is decoded, and each frame of point cloud is output;

In step S1004, the M frames of point clouds are fused, and the fused point cloud is used anew as each frame of point cloud.

Referring to FIG. 11, FIG. 11 is a schematic flowchart of how the fused point cloud of each frame is obtained for a single ToF camera (i.e., step S1004 in FIG. 10 above) in an embodiment of the present disclosure, including steps S1101 to S1108:

In step S1101, the process starts;

In step S1102, the current frame of point cloud is acquired;

In step S1103, it is determined whether the current frame of point cloud is of the first type (short-exposure point cloud) or the second type (long-exposure point cloud);

if it is of the first type, the process proceeds to step S1104, in which the point cloud is stored in the data queue;

if it is of the second type, the process proceeds to step S1105, in which it is determined whether a point cloud is temporarily stored in the data queue whose timestamp differs from the timestamp of the current frame of point cloud by less than the preset interval threshold;

if not, the process proceeds to step S1106, in which the queue is cleared;

if so, the process proceeds to step S1107, in which the point clouds are fused in combination with the imu and wheel-odometer data;

In step S1108, the fused point cloud is used anew as the point cloud of each frame.
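Putting the branches of FIG. 11 together, the per-camera fusion loop might be sketched as below, reusing integrate_pose() and fuse_clouds() from the earlier sketches; cloud objects with .type, .timestamp and .points attributes, and the motion_source callback, are assumptions introduced here:

```python
from collections import deque

INTERVAL_THRESHOLD = 0.1  # seconds, hypothetical preset interval threshold

def fusion_loop(cloud_stream, motion_source):
    """Yield one fused point cloud per short/long exposure pair (steps S1101-S1108)."""
    queue = deque()
    for cloud in cloud_stream:                     # S1102: acquire the current frame
        if cloud.type == "short":                  # S1103/S1104: first type goes to the queue
            queue.append(cloud)
            continue
        # S1105: long-exposure frame; look for a stored partner close enough in time.
        partner = next((c for c in queue
                        if c.timestamp != cloud.timestamp
                        and abs(cloud.timestamp - c.timestamp) < INTERVAL_THRESHOLD), None)
        if partner is None:
            queue.clear()                          # S1106: no usable partner, clear the queue
            continue
        samples = motion_source(partner.timestamp, cloud.timestamp)
        T = integrate_pose(samples)                # S1107: compensate with imu + wheel odometry
        fused = fuse_clouds(partner.points, cloud.points, T)
        queue.clear()
        yield fused                                # S1108: fused cloud becomes the frame's point cloud
```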

Referring to FIG. 12, FIG. 12 is a schematic diagram of the environment perception method of the self-mobile device in an embodiment of the present disclosure. As shown in FIG. 12:

for the main depth camera arranged on the left side (dot-matrix mode), the present disclosure can fuse its short-exposure point cloud with its long-exposure point cloud to obtain the first fused point cloud of each frame;

for the auxiliary depth camera arranged on the front side (dot-matrix mode), the present disclosure can fuse its short-exposure point cloud with its long-exposure point cloud to obtain the second fused point cloud of each frame;

afterwards, the first fused point cloud of each frame (from the main depth camera) and the second fused point cloud of each frame (from the auxiliary depth camera) can be fused again, and the resulting third fused point cloud is used as the final point cloud of each frame; the self-mobile device can thereby perform environment perception based on the final point cloud of each frame.

The present disclosure further provides a self-mobile device provided with a main depth camera. FIG. 13 is a schematic structural diagram of the self-mobile device in an exemplary embodiment of the present disclosure; as shown in FIG. 13, the self-mobile device 1300 may include an acquisition module 1310, a decoding module 1320 and an environment perception module 1330, wherein:

the acquisition module 1310 is configured to collect each frame of depth image data through the main depth camera while the self-mobile device travels; the main depth camera is configured to collect, within each data acquisition cycle, M frames of depth image data alternately according to M preset exposure parameters, the M exposure parameters exhibiting a monotonically increasing or monotonically decreasing trend, M being an integer greater than 1;

the decoding module 1320 is configured to decode each frame of depth image data to obtain each frame of point cloud corresponding to each frame of depth image data;

the environment perception module 1330 is configured to fuse the M frames of point clouds corresponding to each data acquisition cycle to obtain a first fused point cloud, and to use the first fused point cloud anew as the point cloud of each frame, so as to perform environment perception based on the point cloud of each frame.

In an exemplary embodiment of the present disclosure, before each frame of depth image data is collected through the main depth camera, the acquisition module 1310 is configured to:

collect preview depth image data through the main depth camera according to a default exposure parameter;

when the preview depth image data is decoded and the preview point cloud is obtained, collect each frame of depth image data through the main depth camera.

In an exemplary embodiment of the present disclosure, the acquisition module 1310 collecting each frame of depth image data through the main depth camera includes:

collecting, through the main depth camera, the first frame of depth image data in each data acquisition cycle according to the first of the M exposure parameters;

when the first frame of depth image data is decoded and the first frame of point cloud is obtained, collecting, through the main depth camera, the second frame of depth image data in each data acquisition cycle according to the second of the M exposure parameters;

and so on, until, when the (M-1)-th frame of depth image data is decoded and the (M-1)-th frame of point cloud is obtained, the M-th frame of depth image data in each data acquisition cycle is collected through the main depth camera according to the M-th of the M exposure parameters.

In an exemplary embodiment of the present disclosure, the environment perception module 1330 fusing the M frames of point clouds corresponding to each data acquisition cycle to obtain the first fused point cloud includes:

determining a first point cloud and a second point cloud to be fused from the M frames of point clouds;

determining a first time range according to a first timestamp corresponding to the first point cloud and a second timestamp corresponding to the second point cloud, and acquiring motion data of the self-mobile device within the first time range, the motion data including inertial measurement unit data and wheel odometer data;

fusing the first point cloud and the second point cloud based on the motion data within the first time range to obtain the first fused point cloud.

In an exemplary embodiment of the present disclosure, the environment perception module 1330 determining the first point cloud and the second point cloud to be fused from the M frames of point clouds includes:

when each current-frame point cloud among the M frames of point clouds is acquired, determining the type of the current-frame point cloud according to the type label corresponding to the current-frame point cloud; the type of the current-frame point cloud is determined according to the exposure parameter used when the current-frame depth image data corresponding to the current-frame point cloud was collected; the type of the current-frame point cloud includes a first type and a second type, the exposure parameter associated with the first type being smaller than the exposure parameter associated with the second type;

if the current-frame point cloud belongs to the first type, temporarily storing the current-frame point cloud in a data queue;

if the current-frame point cloud belongs to the second type, determining the current-frame point cloud as the first point cloud;

selecting the second point cloud from the data queue according to the first timestamp corresponding to the first point cloud.

In an exemplary embodiment of the present disclosure, the environment perception module 1330 selecting the second point cloud from the data queue according to the first timestamp corresponding to the first point cloud includes:

removing the invalid point clouds temporarily stored in the data queue whose timestamps are identical to the first timestamp;

selecting, from the remaining point clouds in the data queue, a target remaining point cloud whose time interval from the first timestamp is smaller than a preset interval threshold, as the second point cloud.

In an exemplary embodiment of the present disclosure, the environment perception module 1330 fusing the first point cloud and the second point cloud based on the motion data within the first time range to obtain the first fused point cloud includes:

determining, using a differential-drive model and based on the motion data within the first time range, pose transformation data of the self-mobile device within the first time range;

performing pose transformation on the second point cloud according to the pose transformation data within the first time range to obtain a first transformed point cloud;

merging the first transformed point cloud with the first point cloud to obtain the first fused point cloud.

In an exemplary embodiment of the present disclosure, the main depth camera collects each frame of depth image data in a first working mode, the first working mode including a dot-matrix mode.

In an exemplary embodiment of the present disclosure, the self-mobile device is further provided with an auxiliary depth camera; the main depth camera and the auxiliary depth camera are arranged at different positions on the self-mobile device, and the field-of-view intervals of the auxiliary depth camera and the main depth camera intersect without coinciding.

In an exemplary embodiment of the present disclosure, the auxiliary depth camera has the first working mode and a second working mode, the second working mode including an area-array mode;

after the main depth camera collects each frame of depth image data, the environment perception module 1330 is configured to:

collect each frame of target depth image data through the auxiliary depth camera according to the first working mode; the auxiliary depth camera is configured to collect, within each data acquisition cycle, N frames of target depth image data alternately according to N preset exposure parameters, the N exposure parameters exhibiting a monotonically increasing or monotonically decreasing trend, N being an integer greater than 1;

decode each frame of target depth image data to obtain each frame of target point cloud corresponding to each frame of target depth image data;

fuse the N frames of target point clouds corresponding to each data acquisition cycle to obtain a second fused point cloud;

fuse the first fused point cloud and the second fused point cloud again to obtain a third fused point cloud, and use the third fused point cloud anew as the final point cloud of each frame, so as to perform environment perception based on the final point cloud of each frame.

In an exemplary embodiment of the present disclosure, the environment perception module 1330 collecting each frame of target depth image data through the auxiliary depth camera according to the first working mode includes:

setting, through the main depth camera, the trigger signal of the auxiliary depth camera to an active level, so that each frame of target depth image data is collected through the auxiliary depth camera according to the first working mode.

In an exemplary embodiment of the present disclosure, the environment perception module 1330 fusing the first fused point cloud and the second fused point cloud again to obtain the third fused point cloud includes:

acquiring a third timestamp corresponding to the first fused point cloud, and acquiring a fourth timestamp corresponding to the second fused point cloud;

determining a second time range according to the third timestamp and the fourth timestamp, and acquiring motion data of the self-mobile device within the second time range;

fusing the first fused point cloud and the second fused point cloud again based on the motion data within the second time range to obtain the third fused point cloud.

In an exemplary embodiment of the present disclosure, the environment perception module 1330 fusing the first fused point cloud and the second fused point cloud again based on the motion data within the second time range to obtain the third fused point cloud includes:

determining, using the differential-drive model and based on the motion data within the second time range, pose transformation data of the self-mobile device within the second time range;

performing pose transformation on the first fused point cloud according to the pose transformation data within the second time range to obtain a second transformed point cloud;

merging the second transformed point cloud with the second fused point cloud to obtain the third fused point cloud.

In an exemplary embodiment of the present disclosure, after each frame of target depth image data is collected through the auxiliary depth camera according to the first working mode, the environment perception module 1330 is configured to:

after a specified interval, collect each frame of reference depth image data through the auxiliary depth camera according to the second working mode with automatic exposure parameters;

decode each frame of reference depth image data to obtain each frame of reference point cloud corresponding to each frame of reference depth image data, so as to perform environment perception based on each frame of reference point cloud; the automatic exposure parameters depend on the ambient brightness;

wherein there is a preset relationship between the specified interval and the frame rate of the auxiliary depth camera.

在本公开的示例性实施例中,所述辅助深度相机的帧率大于所述主深度相机的帧率,且所述辅助深度相机的帧率与所述主深度相机的帧率满足预设数值关系。In an exemplary embodiment of the present disclosure, the frame rate of the auxiliary depth camera is greater than the frame rate of the main depth camera, and the frame rate of the auxiliary depth camera and the frame rate of the main depth camera satisfy a preset numerical relationship.

在本公开的示例性实施例中,所述自移动设备还设置有RGB相机,所述RGB相机与所述辅助深度相机组合设置,所述RGB相机的帧率与所述主深度相机的帧率相同;In an exemplary embodiment of the present disclosure, the self-mobile device is further provided with an RGB camera, the RGB camera is provided in combination with the auxiliary depth camera, and the frame rate of the RGB camera is the same as the frame rate of the main depth camera;

在所述辅助深度相机按照所述第一工作模式采集每帧目标深度图像数据之后,所述环境感知模块1330,被配置为:After the auxiliary depth camera collects each frame of target depth image data according to the first working mode, the environment perception module 1330 is configured as follows:

通过所述RGB相机采集所述自移动设备在行进过程中的每帧图像数据,以基于所述每帧图像数据进行环境感知。Each frame of image data of the self-mobile device during its movement is collected by the RGB camera, so as to perform environmental perception based on each frame of image data.

在本公开的示例性实施例中,所述环境感知模块1330通过所述RGB相机采集所述自移动设备在行进过程中的每帧图像数据,包括:In an exemplary embodiment of the present disclosure, the environment perception module 1330 collects each frame of image data of the self-mobile device during its movement through the RGB camera, including:

通过所述主深度相机将所述RGB相机的触发信号置为有效电平,以通过所述RGB相机采集所述自移动设备在行进过程中的每帧图像数据。The trigger signal of the RGB camera is set to a valid level through the main depth camera, so as to collect each frame of image data of the self-moving device during the moving process through the RGB camera.

上述自移动设备中各模块的具体细节已经在对应的自移动设备的环境感知方法中进行了详细的描述,因此此处不再赘述。The specific details of each module in the above self-mobile device have been described in detail in the corresponding environment perception method of the self-mobile device, so they will not be repeated here.

应当注意,尽管在上文详细描述中提及了用于动作执行的设备的若干模块或者单元,但是这种划分并非强制性的。实际上,根据本公开的实施方式,上文描述的两个或更多模块或者单元的特征和功能可以在一个模块或者单元中具体化。反之,上文描述的一个模块或者单元的特征和功能可以进一步划分为由多个模块或者单元来具体化。It should be noted that, although several modules or units of the device for action execution are mentioned in the above detailed description, this division is not mandatory. In fact, according to the embodiments of the present disclosure, the features and functions of two or more modules or units described above can be embodied in one module or unit. On the contrary, the features and functions of one module or unit described above can be further divided into multiple modules or units to be embodied.

此外,尽管在附图中以特定顺序描述了本公开中方法的各个步骤,但是,这并非要求或者暗示必须按照该特定顺序来执行这些步骤,或是必须执行全部所示的步骤才能实现期望的结果。附加的或备选的,可以省略某些步骤,将多个步骤合并为一个步骤执行,以及/或者将一个步骤分解为多个步骤执行等。In addition, although the steps of the method in the present disclosure are described in a specific order in the drawings, this does not require or imply that the steps must be performed in this specific order, or that all the steps shown must be performed to achieve the desired results. Additionally or alternatively, some steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps, etc.

通过以上的实施方式的描述,本领域的技术人员易于理解,这里描述的示例实施方式可以通过软件实现,也可以通过软件结合必要的硬件的方式来实现。因此,根据本公开实施方式的技术方案可以以软件产品的形式体现出来,该软件产品可以存储在一个非易失性存储介质(可以是CD-ROM,U盘,移动硬盘等)中或网络上,包括若干指令以使得一台计算设备(可以是个人计算机、服务器、移动终端、或者网络设备等)执行根据本公开实施方式的方法。Through the description of the above implementation methods, it is easy for those skilled in the art to understand that the example implementation methods described here can be implemented by software, or by combining software with necessary hardware. Therefore, the technical solution according to the implementation methods of the present disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a USB flash drive, a mobile hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the implementation methods of the present disclosure.

本申请还提供了一种计算机可读存储介质,该计算机可读存储介质可以是上述实施例中描述的电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。The present application also provides a computer-readable storage medium, which may be included in the electronic device described in the above embodiment; or may exist independently without being assembled into the electronic device.

计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。Computer-readable storage media may be, for example, but not limited to, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, devices or components, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer disks, hard disks, random access memories (RAM), read-only memories (ROM), erasable programmable read-only memories (EPROM or flash memory), optical fibers, portable compact disk read-only memories (CD-ROMs), optical storage devices, magnetic storage devices, or any suitable combination thereof. In the present disclosure, computer-readable storage media may be any tangible medium containing or storing a program that may be used by or in conjunction with an instruction execution system, device, or device.

计算机可读存储介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读存储介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:无线、电线、光缆、RF等等,或者上述的任意合适的组合。Computer-readable storage media can send, propagate or transmit programs for use by or in conjunction with an instruction execution system, apparatus or device. The program code contained on the computer-readable storage medium can be transmitted using any appropriate medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the above.

The computer-readable storage medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.

In addition, an embodiment of the present disclosure further provides an electronic device capable of implementing the above method.

Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, a method, or a program product. Accordingly, the various aspects of the present disclosure may take the following specific forms: an entirely hardware implementation, an entirely software implementation (including firmware, microcode, and the like), or an implementation combining hardware and software aspects, which may collectively be referred to herein as a "circuit", a "module", or a "system".

An electronic device 1400 according to this embodiment of the present disclosure is described below with reference to FIG. 14. The electronic device 1400 shown in FIG. 14 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of the present disclosure.

As shown in FIG. 14, the electronic device 1400 takes the form of a general-purpose computing device. The components of the electronic device 1400 may include, but are not limited to: the at least one processing unit 1410, the at least one storage unit 1420, a bus 1430 connecting the different system components (including the storage unit 1420 and the processing unit 1410), and a display unit 1440.

The storage unit stores program code that can be executed by the processing unit 1410, causing the processing unit 1410 to perform the steps, described in the "Exemplary Method" section of this specification, according to various exemplary embodiments of the present disclosure. For example, the processing unit 1410 may perform the steps shown in FIG. 1: step S110, during the travel of the self-moving device, collecting each frame of depth image data through the main depth camera, the main depth camera being configured to alternately collect M frames of depth image data in each data collection cycle according to M preset exposure parameters, where the M exposure parameters exhibit a monotonically increasing or monotonically decreasing trend and M is an integer greater than 1; step S120, decoding each frame of depth image data to obtain the point cloud corresponding to each frame of depth image data; and step S130, fusing the M frames of point clouds corresponding to each data collection cycle to obtain a first fused point cloud, and taking the first fused point cloud as the per-frame point cloud, so as to perform environmental perception based on the per-frame point cloud.
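For illustration, the following minimal Python sketch shows one way the S110 to S130 loop could be organized in code. It is a sketch under assumptions, not the implementation of the disclosure: camera.capture(exposure) and the fuse callable are hypothetical interfaces, the camera intrinsics in decode_to_point_cloud are placeholder values, and the loop decodes sequentially for clarity, whereas claim 3 interleaves the decoding of frame k with the capture of frame k+1.

```python
import numpy as np

def decode_to_point_cloud(frame):
    """Hypothetical decoder (step S120): back-project a depth image
    (H x W array of depths in metres) into an N x 3 point cloud using
    assumed pinhole intrinsics fx, fy, cx, cy."""
    fx = fy = 500.0                          # placeholder intrinsics
    cx, cy = frame.shape[1] / 2.0, frame.shape[0] / 2.0
    v, u = np.nonzero(frame)                 # keep only pixels with valid depth
    z = frame[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack((x, y, z))

def perception_cycle(camera, exposures, fuse):
    """One data collection cycle: capture M frames with alternating
    exposure parameters (step S110), decode each frame into a point
    cloud (step S120), and fuse the M clouds into the first fused
    point cloud reused as the per-frame point cloud (step S130)."""
    assert len(exposures) > 1, "M must be an integer greater than 1"
    clouds = []
    for exposure in exposures:               # monotonically increasing or decreasing
        frame = camera.capture(exposure)     # hypothetical camera driver call
        clouds.append(decode_to_point_cloud(frame))
    return fuse(clouds)                      # M frames of point clouds -> one cloud
```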

The storage unit 1420 may include a readable medium in the form of a volatile storage unit, such as a random access memory (RAM) 14201 and/or a cache memory 14202, and may further include a read-only memory (ROM) 14203.

The storage unit 1420 may also include a program/utility 14204 having a set of (at least one) program modules 14205, such program modules 14205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.

The bus 1430 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.

The electronic device 1400 may also communicate with one or more external devices 1500 (such as a keyboard, a pointing device, or a Bluetooth device), with one or more devices that enable a user to interact with the electronic device 1400, and/or with any device (such as a router or a modem) that enables the electronic device 1400 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 1450. Furthermore, the electronic device 1400 may communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 1460. As shown, the network adapter 1460 communicates with the other modules of the electronic device 1400 via the bus 1430. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 1400, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.

Those skilled in the art will readily conceive of other embodiments of the present disclosure after considering the specification and practicing the invention disclosed herein. The present application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and that include common knowledge or customary technical means in the art not disclosed herein. The specification and the embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the claims.

Claims (10)

1. An environmental perception method for a self-moving device, wherein the self-moving device is provided with a main depth camera, the method comprising:
during the travel of the self-moving device, collecting each frame of depth image data through the main depth camera, the main depth camera being configured to alternately collect M frames of depth image data in each data collection cycle according to M preset exposure parameters, wherein the M exposure parameters exhibit a monotonically increasing or monotonically decreasing trend, and M is an integer greater than 1;
decoding each frame of depth image data to obtain the point cloud corresponding to each frame of depth image data; and
fusing the M frames of point clouds corresponding to each data collection cycle to obtain a first fused point cloud, and taking the first fused point cloud as the per-frame point cloud, so as to perform environmental perception based on the per-frame point cloud.

2. The method according to claim 1, wherein, before collecting each frame of depth image data through the main depth camera, the method further comprises:
collecting preview depth image data through the main depth camera according to a default exposure parameter; and
upon completing the decoding of the preview depth image data to obtain a preview point cloud, collecting each frame of depth image data through the main depth camera.

3. The method according to claim 2, wherein collecting each frame of depth image data through the main depth camera comprises:
collecting the first frame of depth image data in each data collection cycle through the main depth camera according to the first of the M exposure parameters;
upon completing the decoding of the first frame of depth image data to obtain the first frame of point cloud, collecting the second frame of depth image data in each data collection cycle through the main depth camera according to the second of the M exposure parameters;
and so on, until, upon completing the decoding of the (M-1)-th frame of depth image data to obtain the (M-1)-th frame of point cloud, collecting the M-th frame of depth image data in each data collection cycle through the main depth camera according to the M-th of the M exposure parameters.

4. The method according to claim 3, wherein fusing the M frames of point clouds corresponding to each data collection cycle to obtain the first fused point cloud comprises:
determining, from the M frames of point clouds, a first point cloud and a second point cloud to be fused;
determining a first time range according to a first timestamp corresponding to the first point cloud and a second timestamp corresponding to the second point cloud, and obtaining motion data of the self-moving device within the first time range, the motion data comprising inertial measurement unit data and wheel odometry data; and
fusing the first point cloud and the second point cloud based on the motion data within the first time range to obtain the first fused point cloud.

5. The method according to claim 4, wherein determining the first point cloud and the second point cloud to be fused from the M frames of point clouds comprises:
when each current frame of point cloud among the M frames of point clouds is obtained, determining the type of the current frame of point cloud according to the type label corresponding to the current frame of point cloud, the type being determined according to the exposure parameter used when collecting the current frame of depth image data corresponding to the current frame of point cloud, and the type comprising a first type and a second type, wherein the exposure parameter associated with the first type is smaller than the exposure parameter associated with the second type;
if the current frame of point cloud belongs to the first type, temporarily storing the current frame of point cloud in a data queue;
if the current frame of point cloud belongs to the second type, determining the current frame of point cloud as the first point cloud; and
selecting the second point cloud from the data queue according to the first timestamp corresponding to the first point cloud.

6. The method according to claim 5, wherein selecting the second point cloud from the data queue according to the first timestamp corresponding to the first point cloud comprises:
removing, from the data queue, any temporarily stored invalid point cloud whose timestamp is identical to the first timestamp; and
selecting, from the remaining point clouds in the data queue, a target remaining point cloud whose time interval from the first timestamp is smaller than a preset interval threshold, as the second point cloud.

7. The method according to claim 4, wherein fusing the first point cloud and the second point cloud based on the motion data within the first time range to obtain the first fused point cloud comprises:
determining, using a differential drive model based on the motion data within the first time range, pose transformation data of the self-moving device within the first time range;
performing a pose transformation on the second point cloud according to the pose transformation data within the first time range to obtain a first transformed point cloud; and
merging the first transformed point cloud with the first point cloud to obtain the first fused point cloud.

8. A self-moving device, wherein the self-moving device is provided with a main depth camera, the self-moving device comprising:
an acquisition module configured to collect each frame of depth image data through the main depth camera during the travel of the self-moving device, the main depth camera being configured to alternately collect M frames of depth image data in each data collection cycle according to M preset exposure parameters, wherein the M exposure parameters exhibit a monotonically increasing or monotonically decreasing trend, and M is an integer greater than 1;
a decoding module configured to decode each frame of depth image data to obtain the point cloud corresponding to each frame of depth image data; and
an environmental perception module configured to fuse the M frames of point clouds corresponding to each data collection cycle to obtain a first fused point cloud, and to take the first fused point cloud as the per-frame point cloud, so as to perform environmental perception based on the per-frame point cloud.

9. A computer storage medium having a computer program stored thereon, wherein, when executed by a processor, the computer program implements the environmental perception method for a self-moving device according to any one of claims 1 to 7.

10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor,
wherein the processor is configured to perform, by executing the executable instructions, the environmental perception method for a self-moving device according to any one of claims 1 to 7.
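As a companion illustration for claims 4 to 7, the sketch below pairs a second-type (higher-exposure) cloud with a queued first-type (lower-exposure) cloud by timestamp, integrates motion samples over the intervening interval with a simple planar differential-drive model, transforms the queued cloud, and merges the two. The PointCloud container, the 0.05 s interval threshold, the (v, omega, dt) motion-sample format, and the sign convention of the transform are all assumptions made for the example; the claims do not fix these details.

```python
from collections import deque
from dataclasses import dataclass
import numpy as np

INTERVAL_THRESHOLD = 0.05     # assumed preset interval threshold, in seconds

@dataclass
class PointCloud:             # hypothetical per-frame container
    points: np.ndarray        # N x 3 points in the body frame
    timestamp: float          # capture time, in seconds
    low_exposure: bool        # True: first type; False: second type

low_exposure_queue = deque()  # the "data queue" of claims 5 and 6

def diff_drive_pose_delta(motion_samples):
    """Claim 7 sketch: integrate (linear velocity, angular velocity, dt)
    samples from the wheel odometry / IMU over the first time range with
    a planar differential-drive model; returns a rotation R and offset t."""
    theta = x = y = 0.0
    for v, omega, dt in motion_samples:
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
        theta += omega * dt
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    return R, np.array([x, y, 0.0])

def fuse(first, second, motion_samples):
    """Transform the second point cloud by the estimated pose change and
    merge it with the first point cloud to form the first fused point
    cloud. Whether the delta must be inverted depends on which cloud is
    earlier; this sketch assumes the second cloud precedes the first."""
    R, t = diff_drive_pose_delta(motion_samples)
    transformed = second.points @ R.T + t      # first transformed point cloud
    return PointCloud(np.vstack((first.points, transformed)),
                      first.timestamp, low_exposure=False)

def on_cloud(cloud, motion_samples):
    """Claims 5 and 6: route each incoming cloud by its type label, then
    pair by timestamp and fuse. Returns the fused cloud, or None while
    waiting for a usable pair."""
    if cloud.low_exposure:
        low_exposure_queue.append(cloud)       # first type: stash in the queue
        return None
    first = cloud                              # second type: the "first point cloud"
    # Discard queued clouds whose timestamp equals the first timestamp
    # (invalid per claim 6), then pick one inside the interval threshold.
    candidates = [c for c in low_exposure_queue if c.timestamp != first.timestamp]
    second = next((c for c in candidates
                   if abs(c.timestamp - first.timestamp) < INTERVAL_THRESHOLD), None)
    return fuse(first, second, motion_samples) if second else None
```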
CN202410304322.9A 2024-03-15 2024-03-15 Environmental perception method of self-mobile device, self-mobile device, medium, device Pending CN118612555A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410304322.9A CN118612555A (en) 2024-03-15 2024-03-15 Environmental perception method of self-mobile device, self-mobile device, medium, device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410304322.9A CN118612555A (en) 2024-03-15 2024-03-15 Environmental perception method of self-mobile device, self-mobile device, medium, device

Publications (1)

Publication Number Publication Date
CN118612555A 2024-09-06

Family

ID=92550569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410304322.9A Pending CN118612555A (en) 2024-03-15 2024-03-15 Environmental perception method of self-mobile device, self-mobile device, medium, device

Country Status (1)

Country Link
CN (1) CN118612555A (en)

Similar Documents

Publication Title
CN108335353B (en) Three-dimensional reconstruction method, device and system, server and medium of dynamic scene
JP7236565B2 (en) POSITION AND ATTITUDE DETERMINATION METHOD, APPARATUS, ELECTRONIC DEVICE, STORAGE MEDIUM AND COMPUTER PROGRAM
JP6552729B2 (en) System and method for fusing the outputs of sensors having different resolutions
WO2019242262A1 (en) Augmented reality-based remote guidance method and device, terminal, and storage medium
CN109461208B (en) Three-dimensional map processing method, device, medium and computing equipment
CN109211277B (en) State determination method and device of visual inertial odometer and electronic equipment
CN106898022A (en) A kind of hand-held quick three-dimensional scanning system and method
KR20200110120A (en) A system implementing management solution of road facility based on 3D-VR multi-sensor system and a method thereof
CN113034594A (en) Pose optimization method and device, electronic equipment and storage medium
CN112344855B (en) Obstacle detection method and device, storage medium and drive test equipment
CN112348886A (en) Visual positioning method, terminal and server
CN110276774A (en) Object drawing method, device, terminal and computer-readable storage medium
CN112270702A (en) Volume measurement method and apparatus, computer readable medium and electronic device
CN113378605A (en) Multi-source information fusion method and device, electronic equipment and storage medium
CN118470474A (en) Multi-sensor fusion SLAM method, equipment and medium
CN116228974A (en) Three-dimensional model construction method, device, computer equipment and storage medium
CN113610702B (en) Picture construction method and device, electronic equipment and storage medium
CN114972514A (en) SLAM positioning method, device, electronic equipment and readable storage medium
WO2022246812A1 (en) Positioning method and apparatus, electronic device, and storage medium
CN118612555A (en) Environmental perception method of self-mobile device, self-mobile device, medium, device
CN114820953B (en) Data processing method, device, equipment and storage medium
WO2020107487A1 (en) Image processing method and unmanned aerial vehicle
JP7075090B1 (en) Information processing system and information processing method
TWI793584B (en) Mapping and localization system for automated valet parking and method thereof
WO2023088127A1 (en) Indoor navigation method, server, apparatus and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination