
CN115032650A - High-precision positioning and deviation rectifying method, system and medium integrating vision and laser - Google Patents

High-precision positioning and deviation rectifying method, system and medium integrating vision and laser

Info

Publication number
CN115032650A
CN115032650A (application CN202210563402.7A)
Authority
CN
China
Prior art keywords
cleaning
robot
model
vision
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210563402.7A
Other languages
Chinese (zh)
Inventor
张晨博
郭震
杨俊�
周洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jingwu Trade Technology Development Co Ltd
Original Assignee
Shanghai Jingwu Trade Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jingwu Trade Technology Development Co Ltd filed Critical Shanghai Jingwu Trade Technology Development Co Ltd
Priority to CN202210563402.7A priority Critical patent/CN115032650A/en
Publication of CN115032650A publication Critical patent/CN115032650A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract


Figure 202210563402

The present invention provides a high-precision positioning and deviation rectifying method, system and medium integrating vision and laser, including: step S1: the cleaning robot moves to a designated cleaning point using a laser radar; step S2: the cleaning robot scans the cleaning object with a camera to obtain a 3D model of the object; step S3: the scanned 3D model is matched with a previously established 3D model of the cleaning object that meets preset requirements, and the visual point with the best matching degree is calculated; step S4: the distance and angle of the robot chassis center relative to the cleaning object are calculated using the positional relationship between the vision system and the chassis center, and the robot is controlled to move accordingly; step S5: after the robot has moved, the 3D model of the cleaning object is re-acquired, its matching degree against the reference 3D model is calculated, and it is judged whether the error meets the preset requirement; when it does not, steps S2 to S5 are repeated until the error meets the preset requirement.


Description

High-precision positioning and deviation rectifying method, system and medium integrating vision and laser
Technical Field
The invention relates to the technical field of cleaning-robot software algorithms, and in particular to a high-precision positioning and deviation rectifying method, system and medium integrating vision and laser, more particularly as applied to a hotel cleaning robot.
Background
A hotel cleaning robot is a service robot for cleaning hotel-room toilets that combines autonomous navigation with autonomous cleaning. It offers a cleaning precision that manual cleaning cannot reach and can plan cleaning paths and cleaning steps for different objects or different cleaning areas. Navigation, however, carries a certain arrival error, and this error degrades the overall cleaning precision of the robot to some extent.
Patent document CN114158984A (application number: 202111581831.9) discloses a cleaning robot including: the robot comprises a controller function module, a mechanical arm function module, a navigation system module, a vision system module, a clamping jaw function module, a sweeper function module and a UI (user interface) module; the controller function module comprises a master controller and a plurality of slave controllers, the master controller is connected with the plurality of slave controllers, and the controller is connected with the other function modules; the mechanical arm functional module is used for executing cleaning action; the navigation system module is used for taking charge of the advancing and positioning functions of the robot; the visual system module is used for realizing mode recognition, indoor environment reconstruction and auxiliary positioning; the clamping jaw function module is used for installing and fixing a cleaning tool and is arranged at the tail end of the mechanical arm function module; the sweeper functional module is used for executing sweeping actions; the UI interface module is used for providing a visual operation interface.
Disclosure of Invention
In view of the defects in the prior art, the object of the present invention is to provide a high-precision positioning and deviation rectifying method, system and medium integrating vision and laser.
The invention provides a high-precision positioning and deviation rectifying method based on vision and laser fusion, which comprises the following steps:
step S1: the cleaning robot moves to a designated cleaning point position by using the laser radar;
step S2: the cleaning robot scans a cleaning object by using a camera to obtain a 3D model of the object;
step S3: matching the 3D model obtained by scanning with a previously established 3D model of the cleaning object meeting the preset requirement, and calculating the visual point position with the best matching degree;
step S4: controlling the robot to correspondingly move by using the corresponding position relation between the vision and the center of the chassis of the robot;
step S5: after the robot finishes moving, acquiring the 3D model of the cleaning object again, calculating the matching degree of the 3D model of the current cleaning object and the 3D model of the cleaning object meeting the preset requirement, judging whether the error meets the preset requirement, and when the error does not meet the preset requirement, repeatedly triggering the steps S2 to S5 until the error meets the preset requirement;
the 3D model of the cleaning object meeting the preset requirement is obtained by placing the cleaning robot at a preset position through a cleaning robot camera.
Preferably, step S1 comprises:
step S1.1: establishing, with the laser radar, the 2D grid map used for navigation;
step S1.2: based on the established 2D grid map, positioning with the cleaning robot's laser radar to reach the designated cleaning point.
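By way of illustration only, the 2D grid map of steps S1.1 and S1.2 can be sketched as a minimal occupancy grid. The class name, resolution, and origin parameters below are assumptions for the sketch, not details taken from the patent:

```python
import numpy as np

class GridMap2D:
    """Minimal occupancy grid of the kind a laser-radar mapping stack builds."""

    def __init__(self, width_m, height_m, resolution=0.05, origin=(0.0, 0.0)):
        self.resolution = resolution               # meters per cell
        self.origin = np.asarray(origin, float)    # world position of cell (0, 0)
        rows = int(round(height_m / resolution))
        cols = int(round(width_m / resolution))
        self.grid = np.zeros((rows, cols), dtype=np.int8)  # 0 = free, 1 = occupied

    def world_to_cell(self, x, y):
        """Convert world coordinates (meters) to a (row, col) cell index."""
        col = int(np.floor((x - self.origin[0]) / self.resolution))
        row = int(np.floor((y - self.origin[1]) / self.resolution))
        return row, col

    def mark_occupied(self, x, y):
        r, c = self.world_to_cell(x, y)
        self.grid[r, c] = 1

    def is_free(self, x, y):
        r, c = self.world_to_cell(x, y)
        return self.grid[r, c] == 0
```

Marked cleaning points and room points (step 2 of the embodiment) would then simply be stored as cell indices in this map.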
Preferably, step S3 comprises: iteratively matching the currently scanned 3D model with the previously established 3D model of the cleaning object that meets the preset requirements, based on the ICP registration algorithm, thereby obtaining the visual point with the best matching degree.
Preferably, step S4 comprises: calculating the distance and angle of the robot chassis center relative to the cleaning object using the positional relationship between the vision system and the chassis center, and controlling the robot to move accordingly.
The invention provides a vision and laser fused high-precision positioning and deviation rectifying system, which comprises:
module M1: the cleaning robot moves to a designated cleaning point position by using the laser radar;
module M2: the cleaning robot scans a cleaning object by using a camera to obtain a 3D model of the object;
module M3: matching the 3D model obtained by scanning with a previously established 3D model of the cleaning object meeting the preset requirement, and calculating the visual point position with the best matching degree;
module M4: controlling the robot to move correspondingly by using the corresponding position relation between the vision and the center of the robot chassis;
module M5: after the robot finishes moving, the 3D model of the cleaning object is obtained again, the matching degree of the 3D model of the current cleaning object and the 3D model of the cleaning object meeting the preset requirement is calculated, whether the error meets the preset requirement or not is judged, and when the error does not meet the preset requirement, the module M2 to the module M5 are triggered repeatedly until the error meets the preset requirement;
the 3D model of the cleaning object meeting the preset requirement is obtained by placing the cleaning robot at a preset position through a cleaning robot camera.
Preferably, module M1 comprises:
module M1.1: establishing, with the laser radar, the 2D grid map used for navigation;
module M1.2: based on the established 2D grid map, positioning with the cleaning robot's laser radar to reach the designated cleaning point.
Preferably, module M3 iteratively matches the currently scanned 3D model with the previously established 3D model of the cleaning object that meets the preset requirements, based on the ICP registration algorithm, thereby obtaining the visual point with the best matching degree.
Preferably, module M4 calculates the distance and angle of the robot chassis center relative to the cleaning object using the positional relationship between the vision system and the chassis center, and controls the robot to move accordingly.
According to the present invention, a computer-readable storage medium is provided, in which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method as described above.
The present invention provides a cleaning robot including: a controller;
the controller comprises the computer readable storage medium storing the computer program, or the controller comprises the vision and laser fused high-precision positioning deviation correcting system.
Compared with the prior art, the invention has the following beneficial effects:
1. Visual positioning and deviation rectification avoids collisions caused by spatial obstacles that 2D laser navigation cannot detect;
2. The arrival precision at the cleaning target point is improved, which greatly improves the overall stability and accuracy of the cleaning robot;
3. The rich image features of vision compensate for the scarcity of 2D laser feature points and assist positioning before the robot arrives at a point;
4. The object to be cleaned can be 3D-modeled in advance by vision, which aids navigation to the point, enables quick recognition and matching of the current cleaning object, and realizes the final positioning and deviation correction of the robot.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
fig. 1 is a flow chart of a high-precision positioning and deviation rectifying method based on vision and laser fusion of a hotel cleaning robot.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but do not limit it in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
Example 1
To address the arrival error that exists when a hotel cleaning robot navigates to a point, the invention provides a high-precision positioning and deviation rectifying method integrating vision and laser. The rich image features of vision compensate for the scarcity of 2D laser feature points and assist positioning before the robot arrives at the point.
The travel position of the robot can also be effectively controlled through vision, avoiding unnecessary collisions in the space.
The object to be cleaned can be 3D-modeled in advance by vision, so that after navigating to the point the current cleaning object can be quickly recognized and matched, realizing the final positioning and deviation correction of the robot.
The invention provides a high-precision positioning and deviation rectifying method based on vision and laser fusion, which comprises the following steps:
Step 1: establishing, with a two-dimensional laser radar, the 2D grid map used for navigation;
Step 2: marking each room point and each cleaning point within the room on the map;
Step 3: when a cleaning task is issued, the robot sets off from the linen room, travels to the target guest-room toilet, and arrives at the designated cleaning point;
Step 4: scanning the cleaning object with a camera, matching the scanned 3D model with the previously established 3D model of the cleaning object, and calculating the visual point with the best matching degree; from this, the distance and angle between the robot chassis center and the cleaning object are calculated using the positional relationship between the vision system and the chassis center, and the robot is finally controlled to move accordingly;
Specifically, the visual point with the best matching degree is obtained through the ICP (Iterative Closest Point) registration algorithm. ICP performs point-cloud registration: given two point clouds, a source and a target, it outputs a rotation-translation transformation that makes the two clouds coincide as closely as possible.
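By way of illustration only, a minimal point-to-point ICP of the kind described above can be sketched as follows. It uses brute-force nearest neighbours and an SVD-based rigid fit; a real system would run a registration library (e.g. PCL or Open3D) on the camera's point cloud, so everything here is an illustrative assumption rather than the patent's implementation:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(source, target, iters=50, tol=1e-9):
    """Point-to-point ICP: returns (R, t, err) aligning `source` to `target`."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err, err = np.inf, np.inf
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences (clarity over speed)
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=-1)
        nn = target[d2.argmin(axis=1)]
        R, t = best_rigid_transform(src, nn)
        src = src @ R.T + t                          # apply incremental fit
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.sqrt(((src - nn) ** 2).sum(axis=-1)).mean()
        if abs(prev_err - err) < tol:                # converged
            break
        prev_err = err
    return R_total, t_total, err
```

The returned residual `err` plays the role of the matching degree checked in step 5.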
Specifically, the visual point with the best matching degree is used to calculate the pose deviation between the current camera and the camera pose at which the reference 3D model of the cleaning object was acquired; this deviation is then transformed to the chassis center to obtain the position deviation of the chassis center.
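By way of illustration only, and assuming planar motion, the conversion from the camera-frame deviation to the chassis center can be sketched as a conjugation by the fixed camera-to-chassis extrinsic. The SE(2) representation and the frame names below are assumptions for the sketch, not details from the patent:

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous SE(2) transform."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def decompose(T):
    """Extract (x, y, theta) from an SE(2) matrix."""
    return T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])

def chassis_correction(delta_cam, T_chassis_cam):
    """Express a camera-frame pose deviation in the chassis-center frame.

    delta_cam:      SE(2) deviation of the current camera pose from the
                    reference pose at which the 3D model was acquired
                    (e.g. recovered by ICP registration).
    T_chassis_cam:  fixed extrinsic: the camera pose in the chassis frame.
    """
    return T_chassis_cam @ delta_cam @ np.linalg.inv(T_chassis_cam)
```

Decomposing the result gives exactly the distance and angle by which the chassis center must move.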
Step 5: because of the movement error of the robot chassis, the robot may need several moves before the cleaning requirement is met; the operation of step 4 is therefore repeated after each move, until the error falls within the preset threshold range.
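By way of illustration only, the scan-compare-move loop of steps S2 to S5 can be sketched as follows. The `move` and `scan` callables and the error metric are hypothetical stand-ins for the robot's actual perception and motion interfaces:

```python
import numpy as np

def position_error(pose, reference_pose):
    """Hypothetical matching-error metric: planar distance plus heading gap."""
    dx, dy, dth = np.subtract(pose, reference_pose)
    return np.hypot(dx, dy) + abs(dth)

def correct_until_converged(move, scan, reference_pose,
                            threshold=0.01, max_moves=10):
    """Repeat corrective moves until the residual error is inside the
    preset threshold, mirroring step 5 of the embodiment."""
    pose = scan()                                  # acquire 3D model / pose
    err = position_error(pose, reference_pose)
    for n in range(max_moves):
        if err <= threshold:                       # error meets the requirement
            return pose, err, n
        move(np.subtract(reference_pose, pose))    # command a corrective move
        pose = scan()                              # re-acquire after moving
        err = position_error(pose, reference_pose)
    return pose, err, max_moves
```

Because each commanded move removes only part of the deviation (chassis error), the loop typically converges geometrically in a handful of iterations.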
The invention provides a vision and laser fused high-precision positioning and deviation rectifying system, which comprises:
Module 1: establishing, with a two-dimensional laser radar, the 2D grid map used for navigation;
Module 2: marking each room point and each cleaning point within the room on the map;
Module 3: when a cleaning task is issued, the robot sets off from the linen room, travels to the target guest-room toilet, and arrives at the designated cleaning point;
Module 4: scanning the cleaning object with a camera, matching the scanned 3D model with the previously established 3D model of the cleaning object, and calculating the visual point with the best matching degree; from this, the distance and angle between the robot chassis center and the cleaning object are calculated using the positional relationship between the vision system and the chassis center, and the robot is finally controlled to move accordingly;
Specifically, the visual point with the best matching degree is obtained through the ICP (Iterative Closest Point) registration algorithm. ICP performs point-cloud registration: given two point clouds, a source and a target, it outputs a rotation-translation transformation that makes the two clouds coincide as closely as possible.
Specifically, the visual point with the best matching degree is used to calculate the pose deviation between the current camera and the camera pose at which the reference 3D model of the cleaning object was acquired; this deviation is then transformed to the chassis center to obtain the position deviation of the chassis center.
Module 5: because of the movement error of the robot chassis, the robot may need several moves before the cleaning requirement is met; module 4 is therefore run again after each move, until the error falls within the preset threshold range.
Those skilled in the art will appreciate that, besides implementing the system, apparatus and their modules purely as computer-readable program code, the method steps can be realized entirely in hardware, in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. The system, apparatus and their modules provided by the invention may therefore be regarded as hardware components, and the modules that realize the various functions may be regarded either as structures within the hardware component or as both software programs implementing the method and structures within the hardware component.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A high-precision positioning and deviation rectifying method fusing vision and laser, characterized by comprising:
step S1: the cleaning robot moves to a designated cleaning point using a laser radar;
step S2: the cleaning robot scans the cleaning object with a camera to obtain a 3D model of the object;
step S3: the scanned 3D model is matched with a previously established 3D model of the cleaning object that meets preset requirements, and the visual point with the best matching degree is calculated;
step S4: the robot is controlled to move accordingly using the positional relationship between the vision system and the center of the robot chassis;
step S5: after the robot has moved, the 3D model of the cleaning object is re-acquired, the matching degree between the current 3D model and the reference 3D model is calculated, and it is judged whether the error meets the preset requirement; when it does not, steps S2 to S5 are repeated until the error meets the preset requirement;
wherein the 3D model of the cleaning object that meets the preset requirements is obtained by the cleaning robot's camera with the robot placed at a preset position.
2. The method according to claim 1, characterized in that step S1 comprises:
step S1.1: establishing, with the laser radar, the 2D grid map used for navigation;
step S1.2: based on the established 2D grid map, positioning with the cleaning robot's laser radar to reach the designated cleaning point.
3. The method according to claim 1, characterized in that step S3 comprises: iteratively matching the currently scanned 3D model with the previously established reference 3D model based on the ICP registration algorithm, thereby obtaining the visual point with the best matching degree.
4. The method according to claim 1, characterized in that step S4 comprises: calculating the distance and angle of the robot chassis center relative to the cleaning object using the positional relationship between the vision system and the chassis center, and controlling the robot to move accordingly.
5. A high-precision positioning and deviation rectifying system fusing vision and laser, characterized by comprising:
module M1: the cleaning robot moves to a designated cleaning point using a laser radar;
module M2: the cleaning robot scans the cleaning object with a camera to obtain a 3D model of the object;
module M3: the scanned 3D model is matched with a previously established 3D model of the cleaning object that meets preset requirements, and the visual point with the best matching degree is calculated;
module M4: the robot is controlled to move accordingly using the positional relationship between the vision system and the center of the robot chassis;
module M5: after the robot has moved, the 3D model of the cleaning object is re-acquired, the matching degree between the current 3D model and the reference 3D model is calculated, and it is judged whether the error meets the preset requirement; when it does not, modules M2 to M5 are triggered repeatedly until the error meets the preset requirement;
wherein the 3D model of the cleaning object that meets the preset requirements is obtained by the cleaning robot's camera with the robot placed at a preset position.
6. The system according to claim 5, characterized in that module M1 comprises:
module M1.1: establishing, with the laser radar, the 2D grid map used for navigation;
module M1.2: based on the established 2D grid map, positioning with the cleaning robot's laser radar to reach the designated cleaning point.
7. The system according to claim 5, characterized in that module M3 iteratively matches the currently scanned 3D model with the previously established reference 3D model based on the ICP registration algorithm, thereby obtaining the visual point with the best matching degree.
8. The system according to claim 5, characterized in that module M4 calculates the distance and angle of the robot chassis center relative to the cleaning object using the positional relationship between the vision system and the chassis center, and controls the robot to move accordingly.
9. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 4 are implemented.
10. A cleaning robot, characterized by comprising: a controller; wherein the controller comprises the computer-readable storage medium storing the computer program according to claim 9, or the controller comprises the vision-and-laser-fused high-precision positioning and deviation rectifying system according to any one of claims 5 to 8.
CN202210563402.7A 2022-05-20 2022-05-20 High-precision positioning and deviation rectifying method, system and medium integrating vision and laser Pending CN115032650A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210563402.7A | 2022-05-20 | 2022-05-20 | High-precision positioning and deviation rectifying method, system and medium integrating vision and laser

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202210563402.7A | 2022-05-20 | 2022-05-20 | High-precision positioning and deviation rectifying method, system and medium integrating vision and laser

Publications (1)

Publication Number | Publication Date
CN115032650A | 2022-09-09

Family

ID=83120418

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202210563402.7A | High-precision positioning and deviation rectifying method, system and medium integrating vision and laser | 2022-05-20 | 2022-05-20

Country Status (1)

Country Link
CN (1) CN115032650A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104932534A (en) * 2015-05-22 2015-09-23 广州大学 Method for cloud robot to clean items
US20160147230A1 (en) * 2014-11-26 2016-05-26 Irobot Corporation Systems and Methods for Performing Simultaneous Localization and Mapping using Machine Vision Systems
CN108596084A (en) * 2018-04-23 2018-09-28 宁波Gqy视讯股份有限公司 A kind of charging pile automatic identifying method and device
CN108759844A (en) * 2018-06-07 2018-11-06 科沃斯商用机器人有限公司 Robot relocates and environmental map construction method, robot and storage medium
CN111358362A (en) * 2018-12-26 2020-07-03 珠海市一微半导体有限公司 Cleaning control method, device, chip for visual robot and cleaning robot
CN113108773A (en) * 2021-04-22 2021-07-13 哈尔滨理工大学 Grid map construction method integrating laser and visual sensor
CN113192138A (en) * 2021-04-28 2021-07-30 坎德拉(深圳)科技创新有限公司 Robot autonomous relocation method and device, robot and storage medium
CN113503876A (en) * 2021-07-09 2021-10-15 深圳华芯信息技术股份有限公司 Multi-sensor fusion laser radar positioning method, system and terminal
WO2021246732A1 (en) * 2020-06-01 2021-12-09 삼성전자주식회사 Cleaning robot and method for controlling same
CN114158984A (en) * 2021-12-22 2022-03-11 上海景吾酷租科技发展有限公司 Cleaning robot
WO2022088430A1 (en) * 2020-10-29 2022-05-05 上海高仙自动化科技发展有限公司 Inspection and cleaning method and apparatus of robot, robot, and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG Xuhui et al.: "Research on deviation-correction control of a rapid-excavation robot based on visual measurement", Industry and Mine Automation, vol. 46, no. 9, 10 September 2020 (2020-09-10) *
LI Xiuzhi et al.: "3D environment modeling for robots based on multi-source fusion SLAM", Transactions of Beijing Institute of Technology, no. 03, 15 March 2015 (2015-03-15) *

Similar Documents

Publication Publication Date Title
CN105354875B (en) A kind of indoor environment is two-dimentional with the construction method and system of three-dimensional conjunctive model
CN109186606B (en) Robot composition and navigation method based on SLAM and image information
CN111665826B (en) Depth map acquisition method based on laser radar and monocular camera and sweeping robot
CN111923011B (en) Execution method and device for live work and live work system
CN111240331A (en) Intelligent trolley positioning and navigation method and system based on laser radar and odometer SLAM
CN113888691B (en) Method, device and storage medium for constructing building scene semantic map
CN106444768B (en) Robot welting walking method and system
GB2580690A (en) Mapping an environment using a state of a robotic device
WO2018163450A1 (en) Robot control device and calibration method
CN111679663A (en) Three-dimensional map construction method, cleaning robot and electronic equipment
WO2022142078A1 (en) Method and apparatus for action learning, medium, and electronic device
WO2018209863A1 (en) Intelligent moving method and device, robot and storage medium
US20190160677A1 (en) Method Of Building A Geometric Representation Over A Working Space Of A Robot
CN113778096B (en) Positioning and model building method and system for indoor robot
CN112182122A (en) Method and device for acquiring navigation map of working environment of mobile robot
Steckel et al. Spatial sampling strategy for a 3D sonar sensor supporting BatSLAM
CN119458364A (en) A humanoid robot grasping method based on three-dimensional vision
CN115668293B (en) Carpet detection method, motion control method and mobile machine using the same
CN115032650A (en) High-precision positioning and deviation rectifying method, system and medium integrating vision and laser
CN109960254A (en) Robot and its path planning method
CN113886903B (en) Method, device and storage medium for constructing global SLAM map
CN114754781A (en) Map updating method, device, robot and medium
CN115590407A (en) Cleaning robotic arm planning control system, method and cleaning robot
JP2017094482A (en) Robot control system and robot control method
JP2021196489A (en) Map correction system, and map correction program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20260123