
WO2025067332A1 - Cleaning device control method and apparatus, storage medium, controller, and device - Google Patents


Info

Publication number
WO2025067332A1
WO2025067332A1 (PCT/CN2024/121422)
Authority
WO
WIPO (PCT)
Prior art keywords
obstacle
obstacle avoidance
area
trajectory
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2024/121422
Other languages
French (fr)
Chinese (zh)
Inventor
刘力格
彭吉祥
李鑫
程冉
孙涛
韩冲
王消为
赵大成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Midea Robozone Technology Co Ltd
Original Assignee
Midea Robozone Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202311278568.5A external-priority patent/CN119758985A/en
Priority claimed from CN202410322675.1A external-priority patent/CN118924195A/en
Application filed by Midea Robozone Technology Co Ltd filed Critical Midea Robozone Technology Co Ltd
Publication of WO2025067332A1 publication Critical patent/WO2025067332A1/en


Classifications

    • A — HUMAN NECESSITIES
    • A47 — FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L — DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 — Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40 — Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers

Definitions

  • the present disclosure relates to the technical field of cleaning equipment, and in particular to a control method, device, storage medium, controller, and equipment for cleaning equipment.
  • cleaning equipment on the market, such as sweepers, generally senses its surroundings with laser instruments such as lidar.
  • however, the information obtained by laser instruments is relatively limited. For example, based on this information alone it is impossible to accurately distinguish uneven surfaces from low obstacles, or to determine the type and location of stains, so the cleaning equipment cannot apply more complex cleaning strategies.
  • the present disclosure aims to solve one of the technical problems in the related art at least to a certain extent.
  • the purpose of the present disclosure is to provide a control method, device and storage medium, controller, and device for cleaning equipment to improve the working efficiency and accuracy of the cleaning equipment in complex environments.
  • the first aspect of the present disclosure proposes a control method for a cleaning device, comprising: acquiring an image of an area to be cleaned; performing image segmentation and edge processing on the image of the area to be cleaned; constructing a target map of the area to be cleaned based on the segmentation results and the edge processing results, wherein the target map includes obstacle information; and controlling the cleaning device based on the target map.
  • the control method of the cleaning device of the embodiment of the present disclosure may also have the following additional technical features:
  • a pre-trained segmentation model is used to perform image segmentation on the image of the area to be cleaned, and the training process of the segmentation model includes: acquiring a plurality of work scene images; dividing the plurality of work scene images into a training set and a test set, and respectively annotating the targets to be inspected in the work scene images in the training set and the test set, wherein the targets to be inspected include the background, the ground, and traversable obstacles on the ground; constructing a segmentation model, and pre-training the segmentation model using the training set and its corresponding annotation information, and testing the pre-trained segmentation model using the test set and its corresponding annotation information to obtain a final trained segmentation model.
  • constructing a target map of the area to be cleaned based on the segmentation results and the edge processing results includes: performing erosion processing on the ground area in the segmentation results to obtain a first image; obtaining an edge line where the background and the ground are in contact in the image of the area to be cleaned based on the edge processing results and the first image; and constructing the target map based on the edge line and traversable obstacles on the ground in the segmentation results.
  • the image of the area to be cleaned is acquired by using an image acquisition element installed on the cleaning device, and the edge line where the background and the ground are in contact in the image of the area to be cleaned is obtained based on the edge processing result and the first image, including: removing the edge in the ground area in the edge processing result based on the first image; and determining the edge line in the edge processing result after removing the edge in the ground area according to the line of sight of the image acquisition element from near to far.
  • constructing the target map based on the edge line and the traversable obstacles on the ground in the segmentation result includes: determining a traversable area according to the edge line; obtaining the category and position of the traversable obstacles in the traversable area according to the traversable area and the traversable obstacles on the ground in the segmentation result; and constructing the target map according to the category and position of the traversable obstacles in the traversable area.
  • controlling the cleaning device according to the target map includes: updating a global map according to the target map; determining a target cleaning area and a target cleaning strategy according to the updated global map; and controlling the cleaning device to clean the target cleaning area according to the target cleaning strategy.
  • the second aspect embodiment of the present disclosure proposes a control device for cleaning equipment, including: an acquisition module for acquiring an image of an area to be cleaned; a processing module for performing image segmentation and edge processing on the image of the area to be cleaned; a construction module for constructing a target map of the area to be cleaned based on the edge processing results and the segmentation results; and a control module for controlling the cleaning equipment based on the target map.
  • a third aspect of the present disclosure provides a computer-readable storage medium having a computer program stored thereon.
  • the computer program is executed by a processor, the control method of the cleaning device described in the first aspect of the present disclosure is implemented.
  • the fourth aspect of the present disclosure proposes a controller, including a memory, a processor and a computer program stored in the memory.
  • the computer program is executed by the processor, the control method of the cleaning equipment described in the first aspect of the present disclosure is implemented.
  • a fifth aspect of the present disclosure proposes a cleaning device, including the above-mentioned controller.
  • according to the control method, device, storage medium, controller, and equipment of the disclosed embodiments, an image of the area to be cleaned is first acquired; the image is then segmented and edge-processed; next, a target map containing obstacle information of the area to be cleaned is constructed based on the segmentation result and edge processing result; finally, the cleaning equipment is controlled based on the target map.
  • the target map is constructed by fusing the edge processing result with the image segmentation result to control the cleaning equipment, thereby improving the working efficiency and accuracy of the cleaning equipment in complex environments.
  • FIG1 is a flow chart of a method for controlling a cleaning device according to an embodiment of the present disclosure
  • FIG2 is a flow chart of a method for controlling a cleaning device according to an embodiment of the present disclosure
  • FIG3 is a training flowchart of a segmentation model according to an embodiment of the present disclosure
  • FIG4 is a flowchart of constructing a target map according to a specific embodiment of the present disclosure.
  • FIG5( a ) is an image of an area to be cleaned according to an example of the present disclosure
  • FIG5(b) is a segmentation result diagram of the image in FIG5(a);
  • FIG5(c) is a schematic diagram of edge lines obtained based on FIG5(a);
  • FIG5(d) is the target map constructed based on FIG5(a);
  • FIG6 is a schematic flow chart of a control method for a cleaning device provided in an embodiment of the present disclosure.
  • FIG7 is a schematic diagram of a grid map provided by an embodiment of the present disclosure.
  • FIG8 is a schematic diagram of an updated grid map provided by an embodiment of the present disclosure.
  • FIG9 is a schematic diagram of an obstacle avoidance trajectory of a sweeping robot provided by an embodiment of the present disclosure.
  • FIG10 is a schematic diagram of marking a circling trajectory in an obstacle avoidance trajectory of a sweeping robot provided by an embodiment of the present disclosure
  • FIG11 is a schematic flow chart of another method for controlling a cleaning device provided in an embodiment of the present disclosure.
  • FIG12 is a schematic flow chart of another method for controlling a cleaning device provided in an embodiment of the present disclosure.
  • FIG13 is a schematic diagram of the structure of a control device for a cleaning device according to an embodiment of the present disclosure.
  • FIG14 is a structural block diagram of a controller according to an embodiment of the present disclosure.
  • FIG15 is a structural block diagram of a cleaning device according to an embodiment of the present disclosure.
  • FIG. 1 is a flow chart of a method for controlling a cleaning device according to an embodiment of the present disclosure.
  • the control method of the cleaning device includes the following steps:
  • the data of the area to be cleaned may include: an image of the area to be cleaned, laser radar data or other sensor data.
  • the following is a detailed description taking the cleaning area data as an image of the area to be cleaned as an example.
  • FIG2 is a flow chart of a control method of a cleaning device according to an embodiment of the present disclosure. As shown in FIG2 , the control method of the cleaning device includes:
  • the image of the area to be cleaned may be acquired by using an image acquisition element installed on the cleaning device (such as a monocular camera installed in front of the cleaning device).
  • the image of the area to be cleaned can be collected by the monocular camera, and the collected image of the area to be cleaned can be stored in the memory equipped with the cleaning equipment for easy retrieval, wherein the image of the area to be cleaned can be an RGB image.
  • the monocular camera is installed in front of the cleaning equipment and can effectively observe the ground area within a certain range. Compared with laser instruments such as lidar, the monocular camera obtains richer scene information, which facilitates more accurate judgment of low obstacles and allows the cleaning equipment to adopt a more optimized cleaning plan based on obstacle category information.
  • the monocular camera has a simple structure and low cost, and the obtained image is easy to calibrate and identify.
  • an edge processing algorithm such as the Canny algorithm or the Sobel algorithm may be used to perform edge processing on the image of the area to be cleaned.
  • the Canny algorithm is a classic algorithm for edge processing.
  • the processing process includes: first, applying a Gaussian blur to the image to suppress high-frequency noise; then using the Sobel operator to calculate the gradient magnitude and direction; then performing non-maximum suppression on the pixel gradients to eliminate spurious responses from edge processing.
  • the basic method is to compare the gradient intensity of the current pixel with that of its neighbors along the positive and negative gradient directions: if the current pixel is an extremum, it is retained as an edge point; otherwise it is suppressed and not regarded as an edge point. The image is then double-thresholded with a high threshold and a low threshold.
  • pixels with gradient intensity below the low threshold are suppressed and not regarded as edge points; pixels with gradient intensity above the high threshold are marked as strong edges and retained as edge points; pixels between the two thresholds are marked as weak edges and left for further processing. Finally, isolated weak edges are suppressed: for each weak edge pixel, if any of its eight neighboring pixels is a strong edge, the weak edge is promoted to a strong edge; otherwise it is treated as an isolated point and discarded.
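By way of illustration only (not part of the original disclosure), the Sobel-gradient and double-threshold/hysteresis stages described above can be sketched in NumPy; the function names, thresholds, and the simplified "valid-region" border handling are assumptions, and a production pipeline would typically use an off-the-shelf Canny implementation:

```python
import numpy as np

def sobel_gradients(img):
    """Gradient magnitude with 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

def double_threshold(mag, low, high):
    """Keep strong edges (>= high); keep weak edges (>= low) only if an
    8-neighbour is strong; suppress everything below low."""
    strong = mag >= high
    weak = (mag >= low) & ~strong
    out = strong.copy()
    h, w = mag.shape
    for i in range(h):
        for j in range(w):
            if weak[i, j]:
                i0, i1 = max(i - 1, 0), min(i + 2, h)
                j0, j1 = max(j - 1, 0), min(j + 2, w)
                if strong[i0:i1, j0:j1].any():
                    out[i, j] = True
    return out
```

A vertical intensity step in the input produces a band of high gradient magnitude, which survives the double threshold as a strong edge.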
  • a pre-trained segmentation model is used to perform image segmentation on the image of the area to be cleaned.
  • the training process of the segmentation model includes: acquiring multiple work scene images; dividing the multiple work scene images into a training set and a test set, and annotating the objects to be inspected in the work scene images of each set, wherein the objects to be inspected include the background, the ground, and traversable obstacles on the ground (such as carpets, stains, etc.); constructing a segmentation model, pre-training the segmentation model using the training set and its corresponding annotation information, and testing the pre-trained segmentation model using the test set and its corresponding annotation information to obtain the final trained segmentation model.
  • the DeepLab v3+ segmentation model can be used for image segmentation.
  • the segmentation model training process is shown in Figure 3.
  • image acquisition of work scenes (such as home scenes) is performed, and the acquired images are screened to ensure, to a certain extent, a balanced number of samples per category and a correct division into training and test sets.
  • the screened images are labeled by category (i.e., the objects to be inspected are labeled).
  • the acquired images can cover a variety of house types, a variety of target types, a variety of ambient lighting, a variety of shooting distances, and a variety of shooting angles.
  • the screened images can also be preprocessed, such as image enhancement, cropping, denoising, etc.
  • next, an image segmentation model is constructed, such as the DeepLab v3+ segmentation model mentioned above.
  • build a model training environment and use the training set to train the image segmentation model, and then use the test set to evaluate and optimize the image segmentation model. Repeat the above process to complete the image segmentation model training and obtain the segmentation model required for the cleaning equipment, that is, the final trained segmentation model.
  • DeepLab v3+ is a semantic segmentation algorithm that can assign a category to each pixel in an image, but objects in the same category will not be distinguished.
  • the most distinctive feature of DeepLab v3+ is its use of dilated (atrous) convolution, which extracts more effective image features without losing information, so that each convolution output covers a wider range of the input.
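To make the "wider range of information" point concrete, here is a minimal 1-D sketch of dilated convolution (an illustration of the general technique, not the DeepLab v3+ implementation); the function name and valid-padding choice are assumptions:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """1-D dilated (atrous) convolution with 'valid' padding.
    With dilation d, a kernel of size k covers a receptive field of
    (k - 1) * d + 1 input samples without adding parameters."""
    k = len(kernel)
    span = (k - 1) * dilation + 1
    out = []
    for i in range(len(x) - span + 1):
        taps = x[i:i + span:dilation]       # pick every d-th sample
        out.append(float(np.dot(taps, kernel)))
    return np.array(out)
```

With dilation 2, a 3-tap kernel spans 5 input samples, which is exactly how the receptive field grows while the parameter count stays fixed.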
  • image segmentation and edge processing can be implemented through one integrated processing module or through two separate processing modules; they can be performed simultaneously or sequentially.
  • the target map may include obstacle information.
  • a target map of the area to be cleaned is constructed based on the segmentation results and the edge processing results, including: performing erosion processing on the ground area in the segmentation results to obtain a first image; obtaining an edge line where the background and the ground are in contact in the image of the area to be cleaned based on the edge processing results and the first image; and constructing a target map based on the edge line and traversable obstacles on the ground in the segmentation results.
  • erosion processing is a basic morphological operation that acts on the bright regions of an image: the bright regions of the original image are shrunk, yielding an area smaller than in the original image.
  • the processing process includes: placing the center of a structuring element over each pixel of the original image, and replacing that pixel value with the minimum value of the original-image pixels covered by the structuring element.
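As an illustrative sketch (not taken from the disclosure), the minimum-over-window operation just described can be written in NumPy as follows; the square structuring element and clipped border handling are assumptions:

```python
import numpy as np

def erode(img, size=3):
    """Grayscale erosion with a size x size square structuring element:
    each output pixel is the minimum of the input pixels the element
    covers (the window is clipped at the image border)."""
    h, w = img.shape
    r = size // 2
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(i - r, 0), min(i + r + 1, h)
            j0, j1 = max(j - r, 0), min(j + r + 1, w)
            out[i, j] = img[i0:i1, j0:j1].min()
    return out
```

Eroding a 3x3 block of ones with a 3x3 element leaves only its center pixel, which is the shrinking effect the method relies on to expose edge areas around the segmented ground region.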
  • the segmentation model can preliminarily segment the field of view area in front of the cleaning equipment, but due to labeling errors and inference errors, it is still impossible to obtain a relatively accurate edge of the area.
  • the present disclosure performs erosion processing on the ground area in the image segmentation result to obtain a first image, exposing more edge areas and preventing the ground segmentation result from covering the edges. Combined with the edge processing result, the edges inside the ground area can then be removed, yielding a more accurate contact edge between the insurmountable obstacles in the background and the ground, i.e., the corresponding edge line.
  • a target map can be constructed, which can show the cleanable area of the cleaning equipment, the category of obstacles that can be surmounted in the cleanable area, etc.
  • an edge line where the background and the ground meet in the image of the area to be cleaned is obtained based on the edge processing result and the first image, including: removing the edge in the ground area in the edge processing result based on the first image; and determining the edge line in the edge processing result after removing the edge in the ground area according to the line of sight of the image acquisition element from near to far.
  • the edge processing result can be a binary edge map, and the edge in the ground area in the binary edge map can be removed according to the first image.
  • the binary edge map after removing the edge in the ground area can be queried from near to far along the line of sight of the image acquisition element arranged at the front end of the cleaning device until the edge pixel position is found.
  • the process includes: in each column of the binary edge map after removing the edge in the ground area, the search starts from the point closest to the cleaning device to the point far away, and the first non-zero pixel found is the edge pixel.
  • the line connecting the edge pixels found is the above-mentioned edge line.
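The per-column near-to-far search described above can be sketched as follows (illustrative only; it assumes the camera is at the bottom of the image, so "near" corresponds to the highest row index):

```python
import numpy as np

def find_edge_line(edge_map):
    """For each column of a binary edge map, scan from the row nearest
    the camera (bottom of the image) toward the far side (top) and
    return the row index of the first edge pixel, or -1 if none."""
    h, w = edge_map.shape
    line = np.full(w, -1, dtype=int)
    for col in range(w):
        for row in range(h - 1, -1, -1):   # near (bottom) -> far (top)
            if edge_map[row, col]:
                line[col] = row
                break
    return line
```

Connecting the returned per-column pixels yields the edge line; columns without any edge pixel are marked -1 so the caller can skip them.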
  • constructing the target map according to the edge line and the traversable obstacles on the ground in the segmentation result includes: determining the passable area according to the edge line; obtaining the category and position of the traversable obstacles in the passable area according to the passable area and the traversable obstacles on the ground in the segmentation result; and constructing the target map according to the category and position of the traversable obstacles in the passable area.
  • the area between the image acquisition element and the edge line in the image of the area to be cleaned is the passable area, and the category and position of the traversable obstacles (such as carpets, stains, etc.) in the passable area are obtained according to the passable area and the traversable obstacles on the ground in the segmentation result.
  • the coordinates of the traversable obstacles in the passable area are converted from the image coordinate system to the world coordinate system to construct a target map.
  • the first conversion matrix between the image coordinate system and the camera coordinate system can be used to convert the coordinates of the traversable obstacles from the image coordinate system to the camera coordinate system
  • the second conversion matrix between the camera coordinate system and the world coordinate system can be used to convert the coordinates of the traversable obstacles from the camera coordinate system to the world coordinate system.
  • the first conversion matrix can be obtained according to the intrinsic parameters, extrinsic parameters, center, distortion, etc. of the image acquisition element
  • the second conversion matrix can be obtained according to the camera posture calculated by the laser radar.
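The two-step conversion can be illustrated with a standard pinhole back-projection sketch (an assumption for clarity, not the patent's exact formulation): the first conversion corresponds to applying the inverse intrinsic matrix scaled by depth, and the second to applying the camera pose (R, t), which the text says is estimated with the help of the laser radar. All names and values below are illustrative:

```python
import numpy as np

def image_to_world(u, v, depth, K, R, t):
    """Back-project pixel (u, v) with known depth into the camera
    frame via the intrinsic matrix K, then map the point into the
    world frame with the camera pose (rotation R, translation t)."""
    pixel = np.array([u, v, 1.0])
    p_cam = depth * (np.linalg.inv(K) @ pixel)   # image -> camera
    p_world = R @ p_cam + t                      # camera -> world
    return p_world
```

For ground-plane points the depth can be recovered from the known camera height, which is what makes a single (monocular) camera sufficient here.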
  • controlling a cleaning device according to a target map includes: updating a global map according to the target map; determining a target cleaning area and a target cleaning strategy according to the updated global map; and controlling the cleaning device to clean the target cleaning area according to the target cleaning strategy.
  • the first global map can be custom-built or fused according to multiple target maps corresponding to the entire work scene.
  • the cleaning equipment can navigate according to the global map, such as from the bedroom to the kitchen.
  • the above steps S11-S13 can be used to build a target map during navigation, and the global map can be updated according to the target map.
  • the target cleaning area (such as the currently uncleaned and passable area) and the target cleaning strategy (such as avoiding carpets and stains during navigation; lifting the mop and increasing suction on carpets; focusing on cleaning stains; avoiding obstacles that cannot be identified by sensors such as lidar and line lasers) can be determined according to the updated global map, and the cleaning equipment is controlled to clean the target cleaning area according to the target cleaning strategy. In this way, the working efficiency and accuracy of the cleaning equipment can be improved.
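A per-cell strategy lookup of this kind could be sketched as below; the labels, field names, and parameter values are all hypothetical illustrations of the strategies the text lists (mop lifting and higher suction on carpet, extra passes on stains, avoidance of obstacles), not the patent's actual data model:

```python
# Hypothetical cell labels and per-label actions.
STRATEGY = {
    "floor":    {"mop": True,  "suction": "normal", "passes": 1},
    "carpet":   {"mop": False, "suction": "high",   "passes": 1},
    "stain":    {"mop": True,  "suction": "normal", "passes": 3},
    "obstacle": None,  # not cleanable: avoid this cell
}

def plan_cell(label):
    """Return the cleaning action for one map cell, or None to avoid it."""
    return STRATEGY.get(label)
```

The control loop would then iterate over target-map cells, skipping cells whose plan is None and applying the returned parameters elsewhere.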
  • the image is acquired, and the image of the area to be cleaned is shown in Figure 5(a).
  • the image of the area to be cleaned is segmented by the image segmentation model, and the segmentation result is shown in Figure 5(b), where area B represents the ground, area A represents the carpet, and the unmarked part represents the background (including obstacles that cannot be crossed).
  • the ground area in Figure 5(b) is eroded to expose more edge areas and avoid the ground segmentation result covering the edge.
  • the Canny edge processing algorithm is used to process the edge of the image of the area to be cleaned, and the edge processing result is obtained.
  • the edges within the ground area in the edge processing result are removed in combination with the erosion result, and the edge pixel positions of the remaining edges in the edge processing result are found from near to far along the line of sight of the image acquisition element to obtain the edge line with the ground-area edges removed, as shown by the thick white line in Figure 5(c).
  • the pixel coordinates of the contact edge between the obstacle and the ground are determined, and the obstacle category is determined.
  • the ground area before reaching the edge line is regarded as a passable area (i.e., the ground without insurmountable obstacles).
  • the segmentation result of the segmentation model already contains the category of the traversable obstacles and their coordinates in the image coordinate system.
  • the coordinates of the traversable obstacles in the image coordinate system are converted to the world coordinate system, as shown in Figure 5(d), which shows the carpet position (the black block in Figure 5(d)) and the insurmountable obstacle positions (the line in Figure 5(d)).
  • a BEV (Bird's Eye View), i.e., the target map, is constructed.
  • the target cleaning area and target cleaning strategy can be further determined in combination with the global map, and the cleaning equipment can be controlled to perform cleaning work according to the target cleaning area and target cleaning strategy.
  • a bird's-eye view is a perspective of viewing an object or scene from above.
  • data obtained by sensors are usually converted into BEV representations to better perform tasks such as object detection and path planning.
  • the advantages of BEV include: it simplifies a complex three-dimensional environment into a two-dimensional image, saving substantial computing and storage resources; it provides a distinctive viewpoint in which the objects and spatial relationships in the scene are clearly visible; tasks such as object detection, tracking, and classification are much simpler in BEV than when processing the raw 3D data directly; and while an image acquisition element sees nearby objects as larger and distant objects as smaller, similar targets in BEV show almost no scale difference, making it easier to learn feature-scale consistency.
  • An embodiment of the present disclosure provides a method for setting an obstacle avoidance trajectory. As shown in FIG. 6 , the method includes the following steps:
  • Step 101: Determine a first obstacle avoidance trajectory of a mobile device.
  • mobile devices include, but are not limited to, robot cleaners, window cleaning robots, and campus delivery robots, and the present disclosure does not limit this.
  • the mobile device is a sweeping robot as an example.
  • the sweeping robot is also called an automatic sweeper, smart vacuum cleaner, robot vacuum cleaner, etc. It is a kind of smart home appliance that can automatically complete the floor cleaning work in the room.
  • it uses brushing and vacuuming to draw debris on the ground into its own garbage storage box, thereby completing the floor cleaning function.
  • robots that complete the work of sweeping, vacuuming, and mopping are also uniformly classified as sweeping robots.
  • the body of the sweeping robot is an automated movable device; a vacuum unit with a dust collection box cooperates with the body, which follows a set control path and traverses the room repeatedly, e.g., sweeping along the edges, concentrated sweeping, random sweeping, or straight-line sweeping, assisted by side brushes, a rotating central main brush, rags, etc. to enhance the cleaning effect, thereby achieving a human-like home cleaning result.
  • a sweeping robot is usually equipped with multiple types of sensors, which may include lidar, camera, line laser, position sensitive detector (PSD), collision plate and inertial measurement unit (IMU); lidar can measure the distance and angle of obstacles around the robot; line laser and PSD can detect obstacles and edges on the ground; collision plate can detect collisions near the robot; IMU can detect the position and posture of the robot; the data collected by the sensor is processed to build a map of the environment where the sweeping robot is located; illustratively, the collected sensor data can be used for obstacle recognition through artificial intelligence (AI), the obstacles in the environment where the sweeping robot is located can be classified, and the obstacle information can be integrated into the probability grid map of the simultaneous localization and mapping (SLAM) system to generate an environment map with obstacle markings, and the navigation trajectory of the sweeping robot can be set according to the environment map, so that the sweeping robot can avoid obstacles.
  • the obstacles in the environment of the sweeping robot will change, so the navigation trajectory of the sweeping robot will also change with the changes of the obstacles.
  • the first obstacle avoidance trajectory can be a navigation trajectory set for the sweeping robot according to the marked first obstacle.
  • the first obstacle can include but is not limited to obstacles with large volume and relatively fixed position; for example: walls, cabinets, refrigerators and water dispensers, etc.
  • determining the first obstacle avoidance trajectory of the mobile device can be understood as determining, according to the environment map, the navigation trajectory along which the sweeping robot avoids the first obstacle.
  • Step 102: If the first obstacle avoidance trajectory meets the update condition, the first obstacle avoidance trajectory is updated to obtain a second obstacle avoidance trajectory, wherein the second obstacle included in the second obstacle avoidance trajectory has different object features from the first obstacle included in the first obstacle avoidance trajectory.
  • the update condition can be set according to the actual situation, and the present disclosure does not specifically limit this.
  • the actual situation may include that the sweeping robot determines that there is a second obstacle during the cleaning process according to the first obstacle avoidance trajectory; the actual situation may also include adding the second obstacle to the first obstacle avoidance trajectory, for example, displaying the first obstacle avoidance trajectory in a human-computer interaction interface and adding the second obstacle to the first obstacle avoidance trajectory.
  • the second obstacle may be a small physical obstacle whose position changes flexibly, which is not limited in the present disclosure.
  • the second obstacle may be a columnar physical obstacle, such as a table leg.
  • for example, when the sweeping robot is moving along the first obstacle avoidance trajectory and detects a second obstacle, the second obstacle can be marked in the environment map; the second obstacle can also be added to the first obstacle avoidance trajectory through the human-computer interaction interface, updating the first obstacle avoidance trajectory to obtain the second obstacle avoidance trajectory;
  • Figure 7 is a grid map constructed according to the environment in which the sweeping robot is located, and
  • Figure 8 is a grid map after marking the second obstacle on the basis of Figure 7.
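The marking step shown in Figures 7 and 8 can be sketched on a minimal occupancy grid. This is an illustration only: the cell codes and helper names below are assumptions, not taken from the patent.

```python
# Minimal sketch (not the patent's implementation): an occupancy grid where
# 0 = free, 1 = first obstacle (e.g. a wall), 2 = newly marked second obstacle.
def make_grid(width, height):
    return [[0] * width for _ in range(height)]

def mark_first_obstacle(grid, cells):
    for x, y in cells:
        grid[y][x] = 1

def mark_second_obstacle(grid, x, y):
    """Mark a small, flexibly placed obstacle (e.g. a table leg) found en route."""
    if grid[y][x] == 0:          # only overwrite free space
        grid[y][x] = 2
        return True
    return False

grid = make_grid(5, 5)
mark_first_obstacle(grid, [(0, j) for j in range(5)])   # a wall along x = 0
added = mark_second_obstacle(grid, 3, 2)                # table leg at (3, 2)
```

Updating the map in place like this is what lets the second obstacle avoidance trajectory be replanned from the same grid the first trajectory used.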
  • Step 103 Control the mobile device to move along the second obstacle avoidance trajectory so that the mobile device avoids the first obstacle and the second obstacle during the movement.
  • controlling the mobile device to move along the second obstacle avoidance trajectory can be understood as controlling the sweeping robot to move along the second obstacle avoidance trajectory to avoid the first obstacle and the second obstacle while completing the cleaning process.
  • the obstacle avoidance trajectory setting method determines a first obstacle avoidance trajectory of a mobile device, and if the first obstacle avoidance trajectory meets the update condition, updates the first obstacle avoidance trajectory to obtain a second obstacle avoidance trajectory, so that the mobile device can avoid the first obstacle and the second obstacle during movement.
  • step 102 updates the first obstacle avoidance trajectory to obtain the second obstacle avoidance trajectory, which can be implemented by the following steps:
  • the first obstacle avoidance trajectory is updated based on the second obstacle to obtain a second obstacle avoidance trajectory.
  • when the sweeping robot moves along the first obstacle avoidance trajectory, it will automatically avoid the first obstacle.
  • the sweeping robot can detect obstacles through multiple types of sensors configured by itself. If a second obstacle is detected, the first obstacle avoidance trajectory is updated to obtain a second obstacle avoidance trajectory.
  • the first obstacle includes an obstacle determined based on first sensing data of a first sensor
  • the second obstacle includes an obstacle determined based on second sensing data of a second sensor
  • the first sensor may be a laser radar; the first sensing data may include obstacle data detected by the laser radar of the sweeping robot; the second sensor may be a sensor other than the laser radar, such as a line laser, a PSD, a collision plate, or a camera; the second sensing data may include obstacle data detected by sensors of the sweeping robot other than the laser radar.
  • when controlling the mobile device to move along the first obstacle avoidance trajectory, the method further includes:
  • if the obstacle avoidance data satisfies the obstacle avoidance condition, it is determined that the obstacle avoidance object corresponding to the obstacle avoidance data is the second obstacle.
  • the obstacle avoidance data is data related to the obstacle avoidance of the sweeping robot during movement, which can be collected by multiple types of sensors of the sweeping robot itself;
  • the obstacle avoidance object is the obstacle that the sweeping robot avoids during movement;
  • the obstacle avoidance condition can be set according to actual needs, which is not limited in this disclosure.
  • the actual need can be the type of obstacle that needs to be avoided.
  • obtaining the obstacle avoidance data of the mobile device during its movement along the first obstacle avoidance trajectory can be understood as obtaining the obstacle avoidance data collected by the sweeping robot while it moves along the first obstacle avoidance trajectory, processing the obstacle avoidance data to confirm the types of obstacles encountered by the sweeping robot in this process, and, if an obstacle type meets the obstacle avoidance condition, determining that the obstacle meeting the obstacle avoidance condition is the second obstacle.
  • the obstacle avoidance data includes posture data, and the obstacle avoidance data satisfies the obstacle avoidance conditions, including:
  • the posture data includes, but is not limited to, the moving direction and coordinate position of the sweeping robot; the yaw trajectory can be understood as a moving trajectory that deviates from the first obstacle avoidance trajectory; the trajectory feature can be set according to the actual situation; for example, the trajectory feature can be a circular trajectory, and a circular trajectory of the sweeping robot during movement can be identified by the Hough circle detection algorithm.
  • Figure 9 is a schematic diagram of the moving trajectory of the sweeping robot along the first obstacle avoidance trajectory
  • Figure 10 is a schematic diagram of the circular trajectory marked on the basis of Figure 9.
  • determining the yaw trajectory of the mobile device during its movement along the first obstacle avoidance trajectory based on the posture data can be understood as determining, according to the moving direction and coordinate position of the sweeping robot, the trajectory that deviates from the first obstacle avoidance trajectory during the movement; if the yaw trajectory is a circular trajectory, it is determined that the obstacle corresponding to the circular trajectory meets the obstacle avoidance condition.
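The Hough circle detection named above is typically done with OpenCV's `cv2.HoughCircles` in practice. As a dependency-free illustration (an assumption, not the patent's implementation), the sketch below flags a yaw trajectory as circular when its pose samples keep a near-constant distance from their centroid:

```python
import math

def is_circular(points, tol=0.15):
    """Heuristic stand-in for Hough circle detection: a closed yaw trajectory
    is treated as circular when all pose samples lie at a near-constant
    radius from the centroid of the trajectory."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / len(radii)
    if mean_r == 0:
        return False
    spread = max(abs(r - mean_r) for r in radii) / mean_r
    return spread < tol

# Poses sampled while the robot loops around a table leg at (2, 3), radius 0.5
circle = [(2 + 0.5 * math.cos(t), 3 + 0.5 * math.sin(t))
          for t in [i * math.pi / 8 for i in range(16)]]
# Poses sampled while the robot follows a straight segment of the trajectory
line = [(0.1 * i, 0.0) for i in range(16)]
```

A positive result here corresponds to the condition in the bullet above: the yaw trajectory is circular, so the enclosed object is treated as a second obstacle.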
  • the obstacle avoidance data includes collision data, and the obstacle avoidance data satisfies the obstacle avoidance conditions, including:
  • if the collision angle meets the preset angle and the collision intensity meets the preset intensity, it is determined that the obstacle avoidance data meets the obstacle avoidance condition.
  • the collision data is the data collected by the collision plate of the sweeping robot during movement; the preset angle and preset intensity can be set according to actual conditions, and the present disclosure does not limit this.
  • the actual conditions may be the hardness of the obstacle material and the curvature of the obstacle surface.
  • the angle and intensity of the collision of the sweeping robot with different obstacles during movement are determined based on the data collected by the collision plate, and it is determined whether there is an obstacle whose collision angle meets the preset angle and whose collision intensity meets the preset intensity. If so, it is determined that the obstacle meets the obstacle avoidance condition.
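The angle-and-intensity check above can be illustrated with a small predicate. The threshold values here are assumptions for illustration, not values from the patent:

```python
def meets_collision_condition(angle_deg, intensity,
                              angle_range=(60.0, 120.0), min_intensity=0.5):
    """Illustrative check (thresholds are assumptions, not from the patent):
    a collision-plate reading marks a second obstacle when the collision
    angle falls within a preset range and the collision intensity reaches a
    preset strength -- e.g. a near-head-on hit on a hard, curved surface."""
    lo, hi = angle_range
    return lo <= angle_deg <= hi and intensity >= min_intensity

print(meets_collision_condition(90.0, 0.8))   # near-head-on, strong hit
print(meets_collision_condition(20.0, 0.8))   # glancing hit
```

Tuning the range and strength per obstacle material and surface curvature is what the "actual conditions" in the bullet above refer to.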
  • the obstacle avoidance data includes the area and contour corresponding to the obstacle, and the obstacle avoidance data satisfies the obstacle avoidance conditions, including:
  • if the contour corresponding to the obstacle matches the preset shape and the area corresponding to the obstacle meets the preset area, it is determined that the obstacle avoidance data meets the obstacle avoidance condition.
  • the area and contour corresponding to the obstacle can be calculated based on the trajectory of the sweeping robot to avoid the obstacle; the preset shape and preset area can be set according to the type of obstacle to be marked, and the present disclosure does not limit this.
  • the preset shape can be circular and the preset area can be 100 cm².
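The shape-and-area check can be illustrated with the shoelace area of the avoidance contour and the circularity measure 4πA/P², which is close to 1 for a circle. The thresholds below mirror the example of a circular preset shape and a 100 cm² preset area, but are otherwise assumptions:

```python
import math

def polygon_area(pts):
    """Shoelace area of the avoidance contour (coordinates in cm)."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def perimeter(pts):
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

def matches_preset(pts, max_area=100.0, min_circularity=0.8):
    """Illustrative test of the bullet above: the contour counts as the
    preset (circular) shape when its circularity 4*pi*A/P^2 is near 1,
    and its area stays within the preset area (100 cm^2 here)."""
    a, p = polygon_area(pts), perimeter(pts)
    circularity = 4 * math.pi * a / (p * p)
    return circularity >= min_circularity and a <= max_area

# A regular 24-gon of radius 5 cm approximates a circle of area ~78.5 cm^2
contour = [(5 * math.cos(t), 5 * math.sin(t))
           for t in [i * 2 * math.pi / 24 for i in range(24)]]
```

A 30 cm square contour, by contrast, fails both tests: its area exceeds the preset area and its circularity is well below 1.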
  • when controlling the mobile device to move along the first obstacle avoidance trajectory, the method further includes:
  • the second obstacle is a columnar physical obstacle
  • the preset threshold can be set according to the columnar physical obstacle, for example, to 10 cm².
  • the area corresponding to each obstacle avoided by the mobile device during its movement along the first obstacle avoidance trajectory is calculated to determine whether there is an obstacle whose corresponding area is smaller than a preset threshold. If so, the obstacle is determined to be a columnar physical obstacle.
  • step 101 of determining a first obstacle avoidance trajectory of a mobile device may be implemented by the following steps:
  • An environment map is constructed based on the first obstacle information, and a first obstacle avoidance trajectory is generated.
  • the first obstacle information is collected by the sensors of the sweeping robot and integrated into the probability grid map of the SLAM system to generate an environment map marked with the first obstacle, and the first obstacle avoidance trajectory of the sweeping robot is set according to the environment map.
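A common way to integrate repeated obstacle observations into a probability grid map is a log-odds occupancy update. The sketch below is one such formulation under assumed increment values, not the patent's code:

```python
import math

class ProbabilityGrid:
    """Minimal log-odds occupancy grid, a common way to fuse repeated
    obstacle observations into a SLAM probability grid map (a sketch)."""
    L_OCC, L_FREE = 0.85, -0.4   # log-odds increments (assumed values)

    def __init__(self, w, h):
        self.logodds = [[0.0] * w for _ in range(h)]

    def observe(self, x, y, hit):
        """Accumulate one sensor observation of cell (x, y)."""
        self.logodds[y][x] += self.L_OCC if hit else self.L_FREE

    def probability(self, x, y):
        """Convert accumulated log-odds back to an occupancy probability."""
        return 1.0 / (1.0 + math.exp(-self.logodds[y][x]))

g = ProbabilityGrid(10, 10)
for _ in range(3):            # the lidar sees the same wall cell three times
    g.observe(4, 4, hit=True)
```

Cells never observed stay at probability 0.5 (unknown), while repeatedly hit cells converge toward 1 and are then marked as the first obstacle on the environment map.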
  • step 102 updates the first obstacle avoidance trajectory to obtain the second obstacle avoidance trajectory, which can also be implemented by the following steps:
  • the first obstacle avoidance trajectory is updated based on the second obstacle to obtain a second obstacle avoidance trajectory, and the second obstacle avoidance trajectory is displayed on the human-computer interaction interface.
  • the human-computer interaction interface may include but is not limited to a mobile phone interface and a tablet interface, for example: an interaction interface of an application in a mobile phone or tablet.
  • a first obstacle avoidance trajectory of the sweeping robot can be displayed on the interactive interface of the mobile phone application. While the mobile device is moving along the first obstacle avoidance trajectory, marking information of the second obstacle can be received through the mobile phone application. After receiving the marking information, the first obstacle avoidance trajectory is updated to obtain a second obstacle avoidance trajectory. The second obstacle avoidance trajectory of the sweeping robot is displayed on the interactive interface of the mobile phone application to update the obstacle avoidance trajectory of the sweeping robot in real time.
  • the setting of the obstacle avoidance trajectory of the sweeping robot can be achieved by the following steps:
  • S601 Construct the indoor mobile robot map through SLAM.
  • S602 Combine the collision plate, LiDAR, line laser, and PSD sensors with AI obstacle recognition to set the navigation trajectory of the sweeping robot.
  • S603 Obtain the position and posture of the robot when avoiding obstacles, and obtain the obstacle avoidance trajectory.
  • S604 Obtain the obstacle avoidance centers of several obstacle avoidance trajectories by using graphics algorithms such as binarization, Gaussian blur, and Hough circle detection.
  • S605 Draw an obstacle mark at each obstacle avoidance center, and display the map, trajectory, and obstacles on the APP, so that the obstacle avoidance trajectory avoids unnecessary detours around empty areas.
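The center-finding step in this pipeline would typically use OpenCV (`cv2.GaussianBlur`, `cv2.HoughCircles`). As a dependency-free stand-in under that assumption, the sketch below binarizes a grid counting how often the robot's pose fell in each cell and takes the centroid of each connected blob of dense cells as an obstacle avoidance center:

```python
def avoidance_centers(density, threshold=2):
    """Stand-in for the binarize / blur / Hough-circle step: binarize the
    pose-density grid, then return the centroid of each connected blob of
    dense cells as an obstacle avoidance center."""
    h, w = len(density), len(density[0])
    binary = [[1 if density[y][x] >= threshold else 0 for x in range(w)]
              for y in range(h)]
    seen, centers = set(), []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and (x, y) not in seen:
                stack, blob = [(x, y)], []
                seen.add((x, y))
                while stack:                      # flood-fill one blob
                    cx, cy = stack.pop()
                    blob.append((cx, cy))
                    for nx, ny in ((cx+1, cy), (cx-1, cy),
                                   (cx, cy+1), (cx, cy-1)):
                        if 0 <= nx < w and 0 <= ny < h and binary[ny][nx] \
                                and (nx, ny) not in seen:
                            seen.add((nx, ny))
                            stack.append((nx, ny))
                centers.append((sum(p[0] for p in blob) / len(blob),
                                sum(p[1] for p in blob) / len(blob)))
    return centers

# The robot lingered around cells (1..2, 1..2) while circling an obstacle
density = [[0] * 5 for _ in range(5)]
for x, y in [(1, 1), (2, 1), (1, 2), (2, 2)]:
    density[y][x] = 3
centers = avoidance_centers(density)
```

Each returned center is where S605 would draw an obstacle mark on the map shown in the APP.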
  • An embodiment of the present disclosure provides a method for setting an obstacle avoidance trajectory. As shown in FIG. 12 , the method includes the following steps:
  • the first map may be any map that can mark obstacles; for example: a grid map constructed by a SLAM system; the first map may include the first obstacle that the sweeping robot has identified.
  • S702 Obtain obstacle information when the mobile device moves according to the first map.
  • the sweeping robot can obtain different obstacle information according to various sensors provided on the robot during movement.
  • S703 Update obstacles on the first map according to the obstacle information, and display the updated obstacles.
  • the sweeping robot can determine obstacles different from the first obstacle based on obstacle information collected by the sensor during movement, thereby updating the obstacles on the first map; the updated obstacles can be displayed on the first map through the human-computer interaction interface.
  • the updated obstacle includes a first obstacle before updating and a newly added second obstacle, and the second obstacle has different object features from the first obstacle.
  • the sweeping robot determines different categories of obstacles according to obstacle information during movement, thereby distinguishing a second obstacle different from the first obstacle, and updates the second obstacle on the first map.
  • step S703 updates obstacles on the first map according to the obstacle information, which can be achieved by the following steps:
  • if the obstacle information indicates that a second obstacle exists when the mobile device moves according to the first map, the obstacles on the first map are updated.
  • the method for determining the second obstacle according to the obstacle information is the same as the above method and will not be described in detail here.
  • the first obstacle includes an obstacle determined based on first sensing data of a first sensor
  • the second obstacle includes an obstacle determined based on second sensing data of a second sensor
  • the first sensor may be a laser radar; the first sensing data may include obstacle data detected by the laser radar of the sweeping robot; the second sensor may be a sensor other than the laser radar, such as a line laser, a PSD, a collision plate, or a camera; the second sensing data may include obstacle data detected by sensors of the sweeping robot other than the laser radar.
  • step S702 of obtaining obstacle information when the mobile device moves according to the first map can be implemented by the following steps:
  • marking information of a second obstacle is received through the human-computer interaction interface to obtain the obstacle information.
  • the method of displaying the first obstacle and receiving obstacle information is the same as the above method, which will not be described in detail here.
  • after obstacles on the first map are updated according to the obstacle information in step S703, the method further includes: controlling the mobile device to avoid the first obstacle and the second obstacle during movement.
  • the sweeping robot can avoid the first obstacle and the second obstacle during movement according to the map markings.
  • the present disclosure also proposes a control device for a cleaning device.
  • FIG. 13 is a schematic diagram of the structure of a control device of a cleaning device according to an embodiment of the present disclosure.
  • the control device 500 of the cleaning device includes: an acquisition module 501, a processing module 502, a construction module 503, and a control module 504.
  • the acquisition module 501 is used to acquire the image of the area to be cleaned; the processing module 502 is used to perform image segmentation and edge processing on the image of the area to be cleaned; the construction module 503 is used to construct a target map of the area to be cleaned according to the edge processing results and the segmentation results, where the target map includes obstacle information; the control module 504 is used to control the cleaning device according to the target map.
  • a pre-trained segmentation model is used to perform image segmentation on the image of the area to be cleaned, and the training process of the segmentation model includes: acquiring a plurality of work scene images; dividing the plurality of work scene images into a training set and a test set, and respectively annotating the targets to be inspected in the work scene images in the training set and the test set, wherein the targets to be inspected include the background, the ground, and traversable obstacles on the ground; constructing a segmentation model, and pre-training the segmentation model using the training set and its corresponding annotation information, and testing the pre-trained segmentation model using the test set and its corresponding annotation information to obtain a final trained segmentation model.
  • a target map of the area to be cleaned is constructed based on edge processing results and segmentation results, including: performing erosion processing on the ground area in the segmentation results to obtain a first image; obtaining, based on the edge processing results and the first image, an edge line where the background and the ground are in contact in the image of the area to be cleaned; and constructing a target map based on the edge line and traversable obstacles on the ground in the segmentation results.
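The erosion step on the ground region is a standard morphological operation that shrinks a binary mask away from its boundaries; real pipelines typically use `cv2.erode`. A minimal dependency-free sketch:

```python
def erode(mask):
    """Binary erosion with a 3x3 structuring element (a sketch of the
    erosion processing applied to the ground region). A pixel stays 1 only
    if it and all 8 neighbours are 1, which pulls the ground mask back
    from noisy segmentation boundaries."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(mask[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out

ground = [[1] * 5 for _ in range(5)]   # a 5x5 all-ground mask
eroded = erode(ground)                 # only the interior 3x3 survives
```

The eroded mask is the "first image": because it sits strictly inside the true ground region, edges that fall within it can safely be discarded as interior texture rather than the background/ground boundary.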
  • the image of the area to be cleaned is acquired by using an image acquisition element installed on the cleaning equipment, and the edge line where the background and the ground meet in the image of the area to be cleaned is obtained based on the edge processing result and the first image, including: based on the edge of the background and the ground in the edge processing result, the edge line is determined in the first image from near to far according to the line of sight of the image acquisition element.
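The near-to-far determination can be sketched as a per-column scan, under the assumption that nearer ground appears lower in the camera image (a sketch, not the patent's implementation):

```python
def edge_line(ground_mask):
    """Sketch of the near-to-far scan: for each image column, walk from the
    bottom row (nearest to the camera) upward through the ground mask and
    record the first row where ground ends -- the background/ground edge."""
    h, w = len(ground_mask), len(ground_mask[0])
    line = []
    for x in range(w):
        y = h - 1
        while y >= 0 and ground_mask[y][x] == 1:
            y -= 1
        line.append(y)          # first non-ground row seen from below
    return line

# Rows 3-4 are ground, rows 0-2 are background, in a 5-row, 4-column mask
mask = [[0] * 4, [0] * 4, [0] * 4, [1] * 4, [1] * 4]
```

Scanning from near to far means the first background pixel met in each column is the closest wall or furniture base, which is exactly the contact line the target map needs.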
  • a target map is constructed based on edge lines and traversable obstacles on the ground in segmentation results, including: determining a traversable area based on the edge lines, and obtaining the categories and positions of the traversable obstacles in the traversable area based on the traversable area and the traversable obstacles on the ground in the segmentation results; and constructing a target map based on the categories and positions of the traversable obstacles in the traversable area.
  • controlling a cleaning device according to a target map includes: updating a global map according to the target map; determining a target cleaning area and a target cleaning strategy according to the updated global map; and controlling the cleaning device to clean the target cleaning area according to the target cleaning strategy.
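Once the global map is updated, the obstacle categories in the target map can drive a per-area cleaning strategy. The category names and strategy fields below are illustrative assumptions, not taken from the patent:

```python
# Illustrative only: category names and strategy contents are assumptions.
STRATEGY = {
    "stain":   {"mode": "mop", "passes": 2},
    "carpet":  {"mode": "vacuum", "suction": "max"},
    "default": {"mode": "vacuum", "suction": "normal"},
}

def plan(target_map):
    """Return (area, strategy) pairs: one target cleaning strategy per
    obstacle entry (area, category) read from the updated map."""
    return [(area, STRATEGY.get(category, STRATEGY["default"]))
            for area, category in target_map]

tasks = plan([((2, 3), "stain"), ((5, 1), "carpet"), ((7, 7), "dust")])
```

Unknown categories fall back to the default strategy, so the robot still cleans areas it cannot classify.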
  • for the specific implementation of the control device 500 of the cleaning equipment in the embodiment of the present disclosure, reference may be made to the specific implementations of the control method of the cleaning equipment in the above-mentioned embodiment of the present disclosure.
  • the present disclosure also proposes a computer-readable storage medium.
  • a computer program is stored thereon, and when the computer program is executed by the processor, the above-mentioned control method of the cleaning device is implemented.
  • the present disclosure also proposes a controller.
  • FIG. 14 is a structural block diagram of a controller according to an embodiment of the present disclosure.
  • the controller 600 includes a processor 601, a memory 603 and a computer program stored in the memory.
  • the computer program is executed by the processor, the control method of the cleaning device described above is implemented.
  • Processor 601 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various exemplary logic blocks, modules, and circuits described in conjunction with the present disclosure. Processor 601 may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
  • the bus 602 may include a path for transmitting information between the above components.
  • the bus 602 may be a PCI (Peripheral Component Interconnect) bus or an EISA (Extended Industry Standard Architecture) bus, etc.
  • the bus 602 may be divided into an address bus, a data bus, a control bus, etc.
  • for ease of illustration, FIG. 14 uses only one thick line to represent the bus, but this does not mean that there is only one bus or one type of bus.
  • the memory 603 is used to store a computer program corresponding to the control method of the cleaning device of the above embodiment of the present disclosure, and the computer program is controlled and executed by the processor 601.
  • the processor 601 is used to execute the computer program stored in the memory 603 to implement the contents shown in the above method embodiment.
  • the controller 600 includes but is not limited to: mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), etc., and fixed terminals such as desktop computers, etc.
  • the controller 600 shown in FIG. 14 is only an example and should not bring any limitation to the functions and scope of use of the embodiments of the present disclosure.
  • FIG. 15 is a structural block diagram of a cleaning device according to an embodiment of the present disclosure.
  • a cleaning device 700 includes the controller 600 of the above embodiment.
  • control method, device, storage medium, controller, and device of the cleaning equipment of the disclosed embodiment perform image segmentation and edge processing on the image of the area to be cleaned acquired by the monocular image acquisition element, and detect the open areas on the ground and key obstacles (such as stains, carpets, etc.) in the working scene by combining the segmentation results and edge processing results, and construct a target map, thereby realizing the control of the cleaning equipment according to the target map.
  • the working efficiency and accuracy of the automatic cleaning equipment in complex environments are improved.
  • computer-readable media include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM).
  • the computer-readable medium may even be paper or other suitable medium on which the program is printed, since the program may be obtained electronically, for example, by optically scanning the paper or other medium and then editing, interpreting or processing in other suitable ways if necessary, and then stored in a computer memory.
  • first and second are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Therefore, the features defined as “first” and “second” may explicitly or implicitly include at least one of such features.
  • “plurality” means at least two, such as two, three, etc., unless otherwise clearly and specifically defined.
  • the terms “installed”, “connected”, “coupled”, “fixed” and the like should be understood in a broad sense; for example, a connection can be a fixed connection, a detachable connection, or an integral connection; it can be a mechanical connection or an electrical connection; it can be a direct connection or an indirect connection through an intermediate medium; and it can be an internal connection between two elements or an interaction relationship between two elements, unless otherwise clearly defined.
  • the specific meanings of the above terms in the present disclosure can be understood according to specific circumstances.
  • a first feature being “above” or “below” a second feature may mean that the first and second features are in direct contact, or the first and second features are in indirect contact through an intermediate medium.
  • a first feature being “above”, “over” or “on top of” a second feature may mean that the first feature is directly above or obliquely above the second feature, or simply that the first feature is at a higher level than the second feature.
  • a first feature being “below”, “under” or “beneath” a second feature may mean that the first feature is directly below or obliquely below the second feature, or simply that the first feature is at a lower level than the second feature.

Abstract

A cleaning device control method and apparatus, a storage medium, a controller, and a device. The method comprises: acquiring an image of a region to be cleaned (S11); performing image segmentation and edge processing on the image of said region (S12); constructing a target map of said region on the basis of the segmentation result and the edge processing result, the target map comprising obstacle information (S13); and controlling a cleaning device on the basis of the target map (S14).

Description

Control method, device, storage medium, controller, and equipment for cleaning equipment

Cross-references to related publications

The present disclosure claims priority to Chinese patent application No. 202410322675.1, filed on March 20, 2024, and Chinese patent application No. 202311278568.5, filed on September 28, 2023, the entire contents of which are incorporated into the present disclosure by reference.

Technical Field

The present disclosure relates to the technical field of cleaning equipment, and in particular to a control method, device, storage medium, controller, and equipment for cleaning equipment.

Background Art

Currently, cleaning equipment on the market, such as sweeping robots, mainly relies on laser instruments such as lidar to collect information about the surrounding environment, determines obstacle positions from that information, and then performs cleaning control accordingly. However, the information obtained by laser instruments is relatively limited: for example, it cannot accurately distinguish uneven floors from low obstacles, nor distinguish the type and location of stains, so the cleaning equipment cannot apply more complex cleaning strategies.

Summary of the Disclosure

The present disclosure aims to solve one of the technical problems in the related art at least to a certain extent. To this end, the purpose of the present disclosure is to provide a control method, device, storage medium, controller, and equipment for cleaning equipment, so as to improve the working efficiency and accuracy of the cleaning equipment in complex environments.

To achieve the above objectives, an embodiment of the first aspect of the present disclosure proposes a control method for a cleaning device, including: acquiring an image of an area to be cleaned; performing image segmentation and edge processing on the image of the area to be cleaned; constructing a target map of the area to be cleaned based on the segmentation results and the edge processing results, where the target map includes obstacle information; and controlling the cleaning device based on the target map.

In addition, the control method of the cleaning device of the embodiments of the present disclosure may also have the following additional technical features:

According to one embodiment of the present disclosure, a pre-trained segmentation model is used to perform image segmentation on the image of the area to be cleaned, and the training process of the segmentation model includes: acquiring a plurality of work scene images; dividing the plurality of work scene images into a training set and a test set, and respectively annotating the targets to be detected in the work scene images in the training set and the test set, where the targets to be detected include the background, the ground, and traversable obstacles on the ground; and constructing a segmentation model, pre-training the segmentation model using the training set and its corresponding annotation information, and testing the pre-trained segmentation model using the test set and its corresponding annotation information, to obtain a final trained segmentation model.

According to one embodiment of the present disclosure, constructing the target map of the area to be cleaned based on the segmentation results and the edge processing results includes: performing erosion processing on the ground area in the segmentation results to obtain a first image; obtaining, based on the edge processing results and the first image, an edge line where the background and the ground are in contact in the image of the area to be cleaned; and constructing the target map based on the edge line and traversable obstacles on the ground in the segmentation results.

According to one embodiment of the present disclosure, the image of the area to be cleaned is acquired by an image acquisition element installed on the cleaning device, and obtaining the edge line where the background and the ground are in contact in the image of the area to be cleaned, based on the edge processing results and the first image, includes: removing, based on the first image, the edges located in the ground area from the edge processing results; and determining the edge line, from near to far along the line of sight of the image acquisition element, in the edge processing results after the edges located in the ground area have been removed.

According to one embodiment of the present disclosure, constructing the target map based on the edge line and the traversable obstacles on the ground in the segmentation results includes: determining a traversable area according to the edge line, and obtaining the categories and positions of the traversable obstacles in the traversable area according to the traversable area and the traversable obstacles on the ground in the segmentation results; and constructing the target map according to the categories and positions of the traversable obstacles in the traversable area.

According to one embodiment of the present disclosure, controlling the cleaning device according to the target map includes: updating a global map according to the target map; determining a target cleaning area and a target cleaning strategy according to the updated global map; and controlling the cleaning device to clean the target cleaning area according to the target cleaning strategy.

To achieve the above objectives, an embodiment of the second aspect of the present disclosure proposes a control device for cleaning equipment, including: an acquisition module for acquiring an image of an area to be cleaned; a processing module for performing image segmentation and edge processing on the image of the area to be cleaned; a construction module for constructing a target map of the area to be cleaned based on the edge processing results and the segmentation results; and a control module for controlling the cleaning equipment based on the target map.

为达到上述目的,本公开第三方面实施例提出了一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时,实现上述第一方面实施例所述的清洁设备的控制方法。To achieve the above objectives, a third aspect of the present disclosure provides a computer-readable storage medium having a computer program stored thereon. When the computer program is executed by a processor, the control method of the cleaning device described in the first aspect of the present disclosure is implemented.

为达到上述目的,本公开第四方面实施例提出了一种控制器,包括存储器、处理器和存储在所述存储器上的计算机程序,所述计算机程序被所述处理器执行时,实现上述第一方面实施例所述的清洁设备的控制方法。To achieve the above objectives, the fourth aspect of the present disclosure proposes a controller, including a memory, a processor and a computer program stored in the memory. When the computer program is executed by the processor, the control method of the cleaning equipment described in the first aspect of the present disclosure is implemented.

为达到上述目的,本公开第五方面实施例提出了一种清洁设备,包括所述的控制器。In order to achieve the above-mentioned purpose, a fifth aspect of the present disclosure proposes a cleaning device, including the above-mentioned controller.

根据本公开实施例的清洁设备的控制方法、装置及存储介质、控制器、设备,首先获取待清洁区域图像;接着对待清洁区域图像进行图像分割和边缘处理;然后根据分割结果和边缘处理结果,构建待清洁区域的包含障碍物信息的目标地图;最后根据目标地图对清洁设备进行控制。由此,通过图像分割结果融合边缘处理结果构建目标地图,以控制清洁设备,提高了清洁设备在复杂环境中的工作效率和准确性。According to the control method, device, storage medium, controller, and device of the cleaning equipment of the disclosed embodiment, firstly, an image of the area to be cleaned is acquired; then, the image of the area to be cleaned is segmented and edge processed; then, a target map containing obstacle information of the area to be cleaned is constructed based on the segmentation result and edge processing result; finally, the cleaning equipment is controlled based on the target map. Thus, the target map is constructed by fusing the edge processing result with the image segmentation result to control the cleaning equipment, thereby improving the working efficiency and accuracy of the cleaning equipment in complex environments.

本公开附加的方面和优点将在下面的描述中部分给出,部分将从下面的描述中变得明显,或通过本公开的实践了解到。Additional aspects and advantages of the present disclosure will be given in part in the following description and in part will be obvious from the following description or learned through practice of the present disclosure.

附图说明BRIEF DESCRIPTION OF THE DRAWINGS

图1为本公开实施例的清洁设备的控制方法的流程图;FIG1 is a flow chart of a method for controlling a cleaning device according to an embodiment of the present disclosure;

图2是本公开一个实施例的清洁设备的控制方法的流程图;FIG2 is a flow chart of a method for controlling a cleaning device according to an embodiment of the present disclosure;

图3是本公开实施例的分割模型的训练流程图;FIG3 is a training flowchart of a segmentation model according to an embodiment of the present disclosure;

图4是本公开一个具体实施例的构建目标地图的流程图;FIG4 is a flowchart of constructing a target map according to a specific embodiment of the present disclosure;

图5(a)是本公开一个示例的待清洁区域图像;FIG5( a ) is an image of an area to be cleaned according to an example of the present disclosure;

图5(b)是对图5(a)中图像的分割结果图;FIG5(b) is a segmentation result diagram of the image in FIG5(a);

图5(c)是基于图5(a)得到边缘线的示意图;FIG5(c) is a schematic diagram of edge lines obtained based on FIG5(a);

图5(d)是基于图5(a)构建的目标地图;Figure 5(d) is the target map constructed based on Figure 5(a);

图6为本公开实施例提供的一种清洁设备的控制方法的流程示意图;FIG6 is a schematic flow chart of a control method for a cleaning device provided in an embodiment of the present disclosure;

图7为本公开实施例提供的一种栅格地图的示意图;FIG7 is a schematic diagram of a grid map provided by an embodiment of the present disclosure;

图8为本公开实施例提供的一种更新后的栅格地图的示意图;FIG8 is a schematic diagram of an updated grid map provided by an embodiment of the present disclosure;

图9为本公开实施例提供的一种扫地机器人的绕障轨迹的示意图;FIG9 is a schematic diagram of an obstacle avoidance trajectory of a sweeping robot provided by an embodiment of the present disclosure;

图10为本公开实施例提供的一种在扫地机器人的绕障轨迹中标记出绕圈轨迹的示意图;FIG10 is a schematic diagram of marking a circle track in an obstacle circumvention track of a sweeping robot provided by an embodiment of the present disclosure;

图11为本公开实施例提供的又一种清洁设备的控制方法的流程示意图;FIG11 is a schematic flow chart of another method for controlling a cleaning device provided in an embodiment of the present disclosure;

图12为本公开实施例提供的又一种清洁设备的控制方法的流程示意图; FIG12 is a schematic flow chart of another method for controlling a cleaning device provided in an embodiment of the present disclosure;

图13是本公开实施例的清洁设备的控制装置的结构示意图;FIG13 is a schematic diagram of the structure of a control device for a cleaning device according to an embodiment of the present disclosure;

图14是本公开实施例的控制器的结构框图;FIG14 is a structural block diagram of a controller according to an embodiment of the present disclosure;

图15是本公开一个实施例的清洁设备的结构框图。FIG. 15 is a structural block diagram of a cleaning device according to an embodiment of the present disclosure.

具体实施方式DETAILED DESCRIPTION

下面详细描述本公开的实施例,所述实施例的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施例是示例性的,旨在用于解释本公开,而不能理解为对本公开的限制。Embodiments of the present disclosure are described in detail below, examples of which are shown in the accompanying drawings, wherein the same or similar reference numerals throughout represent the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the accompanying drawings are exemplary and are intended to be used to explain the present disclosure, and should not be construed as limiting the present disclosure.

下面参考附图描述本公开实施例的清洁设备的控制方法、装置及存储介质、控制器、设备。The following describes the control method, device and storage medium, controller, and device of the cleaning device according to the embodiments of the present disclosure with reference to the accompanying drawings.

图1是本公开实施例的清洁设备的控制方法的流程图。FIG. 1 is a flow chart of a method for controlling a cleaning device according to an embodiment of the present disclosure.

如图1所示,清洁设备的控制方法包括:As shown in FIG1 , the control method of the cleaning device includes:

S1,获取待清洁区域数据。S1, obtaining data of the area to be cleaned.

S2,对待清洁区域数据进行处理。S2, processing the data of the area to be cleaned.

S3,根据数据处理结果,构建待清洁区域的目标地图,目标地图包括障碍物信息。S3, constructing a target map of the area to be cleaned according to the data processing result, wherein the target map includes obstacle information.

S4,根据目标地图对清洁设备进行控制。S4, controlling the cleaning equipment according to the target map.

在一些实施例中,待清洁区域数据可包括:待清洁区域图像、激光雷达数据或其他传感器数据,下面以待清洁区域数据为待清洁区域图像为例进行细化说明。In some embodiments, the data of the area to be cleaned may include: an image of the area to be cleaned, laser radar data or other sensor data. The following is a detailed description taking the case where the data of the area to be cleaned is an image of the area to be cleaned as an example.

图2为本公开一个实施例的清洁设备的控制方法的流程图,如图2所示,清洁设备的控制方法包括:FIG2 is a flow chart of a control method of a cleaning device according to an embodiment of the present disclosure. As shown in FIG2 , the control method of the cleaning device includes:

S11,获取待清洁区域图像。S11, acquiring an image of the area to be cleaned.

其中,待清洁区域图像可以是利用安装于清洁设备上的图像采集元件(如安装于清洁设备前方的单目摄像头)采集得到的。The image of the area to be cleaned may be acquired by using an image acquisition element installed on the cleaning device (such as a monocular camera installed in front of the cleaning device).

具体地,以单目摄像头获取图像为例,可通过单目摄像头采集待清洁区域图像,并可将采集到的待清洁区域图像存储至清洁设备配有的存储器中,以便调用,其中,待清洁区域图像可以是RGB图像。单目摄像头安装在清洁设备的前方,可以有效观察到一定范围内的地面区域。相较于依赖激光雷达等激光仪器获取的场景信息,单目摄像头能获得更丰富的场景信息,便于后续能更准确地判断低矮障碍物,并依据障碍物类别信息让清洁设备采用更优化的清洁方案,且单目摄像头结构简单、成本低,获得的图像便于标定和识别。Specifically, taking the acquisition of images by a monocular camera as an example, the image of the area to be cleaned can be collected by the monocular camera, and the collected image can be stored in the memory provided on the cleaning device for easy retrieval, wherein the image of the area to be cleaned can be an RGB image. The monocular camera is installed in front of the cleaning device and can effectively observe the ground area within a certain range. Compared with the scene information obtained by laser instruments such as lidar, a monocular camera obtains richer scene information, which facilitates more accurate detection of low obstacles later on and allows the cleaning device to adopt a better-optimized cleaning plan based on obstacle category information; moreover, the monocular camera has a simple structure and low cost, and the images it captures are easy to calibrate and recognize.

S12,对待清洁区域图像进行图像分割和边缘处理。S12, performing image segmentation and edge processing on the image of the area to be cleaned.

具体地,可采用Canny边缘处理算法、Sobel边缘处理算法等对待清洁区域图像进行边缘处理。Specifically, the Canny edge detection algorithm, the Sobel edge detection algorithm, etc. may be used to perform edge processing on the image of the area to be cleaned.

其中,Canny算法是边缘处理的经典算法,处理过程包括:首先对图像进行一次高斯模糊,抑制属于高频信号的噪声;接着使用Sobel算子计算梯度大小和方向;然后对图像进行非极大值像素梯度抑制,目的在于消除边缘处理带来的杂散响应,基本方法是将当前像素梯度强度与沿正负梯度方向上的相邻像素的梯度强度进行比较,若为极值,则保留该像素边缘点,若不是,则对其进行抑制,不将其作为边缘点;随后对图像做阈值滞后处理,定义一个高阈值和一个低阈值。梯度强度低于低阈值的像素点被抑制,不作为边缘点,高于高阈值的像素点被定义为强边缘,保留为边缘点,处于高低阈值之间的定义为弱边缘,留待进一步处理;最后对图像进行孤立弱边缘抑制,对上一步结果中的每一个弱边缘判断,如果周围8个邻接像素有一个是强边缘,则认为该弱边缘也是强边缘,否则视作孤立点抛弃。Among them, the Canny algorithm is a classic algorithm for edge processing. The processing process includes: first, perform a Gaussian blur on the image to suppress the noise belonging to the high-frequency signal; then use the Sobel operator to calculate the gradient size and direction; then perform non-maximum pixel gradient suppression on the image, the purpose is to eliminate the stray response caused by edge processing. The basic method is to compare the gradient intensity of the current pixel with the gradient intensity of the adjacent pixels along the positive and negative gradient directions. If it is an extreme value, the edge point of the pixel is retained. If not, it is suppressed and not regarded as an edge point; then the image is thresholded with hysteresis, and a high threshold and a low threshold are defined. Pixels with gradient intensity lower than the low threshold are suppressed and not regarded as edge points. Pixels with gradient intensity higher than the high threshold are defined as strong edges and retained as edge points. Pixels between the high and low thresholds are defined as weak edges and are left for further processing; finally, isolated weak edge suppression is performed on the image. For each weak edge in the result of the previous step, if one of the eight adjacent pixels is a strong edge, the weak edge is considered to be a strong edge, otherwise it is considered to be an isolated point and discarded.
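By way of a non-limiting illustration (not part of the claimed method), the double-threshold and weak-edge suppression stages described above can be sketched in Python with NumPy; the thresholds and array values below are assumed purely for demonstration:

```python
import numpy as np

def hysteresis_threshold(grad, low, high):
    # Pixels at or above the high threshold are strong edges;
    # pixels between the two thresholds are weak edges awaiting review.
    strong = grad >= high
    weak = (grad >= low) & ~strong
    changed = True
    while changed:
        # Dilate the strong mask by one pixel (8-neighbourhood) and
        # promote any weak pixel that touches a strong one.
        padded = np.pad(strong, 1)
        neigh = np.zeros_like(strong)
        h, w = strong.shape
        for dy in (0, 1, 2):
            for dx in (0, 1, 2):
                neigh |= padded[dy:dy + h, dx:dx + w]
        promote = weak & neigh
        changed = bool(promote.any())
        strong |= promote
        weak &= ~promote
    # Remaining weak pixels are isolated and discarded.
    return strong
```

Because promotion repeats until no weak pixel changes, chains of weak pixels connected to a strong edge survive, while isolated weak pixels are discarded, matching the behaviour described for the last two stages of the algorithm.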

在本公开的一些实施例中,利用预先训练好的分割模型对待清洁区域图像进行图像分割,分割模型的训练过程包括:采集得到多个工作场景图像;将多个工作场景图像划分为训练集和测试集,并分别对训练集和测试集中的工作场景图像中的待检目标进行标注,其中,待检目标包括背景、地面,以及地面上的可越过障碍物(如地毯、污渍等);构建分割模型,并利用训练集及其对应的标注信息对分割模型进行预训练,以及利用测试集及其对应的标注信息对预训练好的分割模型进行测试,得到最终训练好的分割模型。In some embodiments of the present disclosure, a pre-trained segmentation model is used to perform image segmentation on the image of the area to be cleaned. The training process of the segmentation model includes: acquiring multiple work scene images; dividing the multiple work scene images into a training set and a test set, and annotating the objects to be detected in the work scene images of the training set and the test set respectively, wherein the objects to be detected include the background, the ground, and surmountable obstacles on the ground (such as carpets, stains, etc.); and constructing a segmentation model, pre-training the segmentation model using the training set and its corresponding annotation information, and testing the pre-trained segmentation model using the test set and its corresponding annotation information to obtain a finally trained segmentation model.

具体地,综合考虑准确性和轻量性,可采用DeepLab v3+分割模型进行图像分割,分割模型训练流程如图3所示。首先进行工作场景(如家居场景)的图像采集,并对采集到的图像进行筛选,以在某种程度上保证类别间的数量平衡和正确划分训练集和测试集,之后对筛选出的图像进行类别标注(即标注出待检目标)。其中,为确保待检目标和工作场景的多样性,采集的图像可覆盖多种户型、多种目标种类、多种环境光照、多种拍摄距离和多种拍摄角度等。另外,为提高检测效率,还可对筛选出的图像进行预处理,如图像增强处理、裁剪处理、去噪处理等。Specifically, considering accuracy and lightness, the DeepLab v3+ segmentation model can be used for image segmentation. The segmentation model training process is shown in Figure 3. First, image acquisition of work scenes (such as home scenes) is performed, and the acquired images are screened to ensure the quantitative balance between categories and the correct division of training sets and test sets to a certain extent. Then, the screened images are labeled by category (i.e., the objects to be inspected are labeled). Among them, in order to ensure the diversity of the objects to be inspected and work scenes, the acquired images can cover a variety of house types, a variety of target types, a variety of ambient lighting, a variety of shooting distances, and a variety of shooting angles. In addition, in order to improve the detection efficiency, the screened images can also be preprocessed, such as image enhancement, cropping, denoising, etc.
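As a hedged sketch of the dataset-preparation step (the class names, identifiers and split ratio below are hypothetical, and the per-class handling is only a rough stand-in for the screening and balance check described above), the train/test division could look like:

```python
import random

def split_scene_images(images_by_class, test_ratio=0.2, seed=0):
    """Split images into train/test per class so that each category is
    represented in both sets. `images_by_class` maps a class name to a
    list of image identifiers."""
    rng = random.Random(seed)  # fixed seed -> reproducible split
    train, test = [], []
    for cls, imgs in images_by_class.items():
        imgs = list(imgs)
        rng.shuffle(imgs)
        # Reserve at least one test image per class when possible.
        n_test = max(1, int(len(imgs) * test_ratio)) if len(imgs) > 1 else 0
        test.extend(imgs[:n_test])
        train.extend(imgs[n_test:])
    return train, test
```

Splitting within each class, rather than over the pooled image list, is one simple way to keep the category proportions of the training and test sets roughly aligned.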

然后,选择或构建图像分割模型,如上述的DeepLab v3+分割模型。之后,搭建模型训练环境,并利用训练集对图像分割模型进行训练,接着利用测试集对图像分割模型进行评估和优化。重复上述过程,从而完成对图像分割模型训练,获得清洁设备所需的分割模型,即最终训练好的分割模型。Then, select or build an image segmentation model, such as the DeepLab v3+ segmentation model mentioned above. After that, build a model training environment, and use the training set to train the image segmentation model, and then use the test set to evaluate and optimize the image segmentation model. Repeat the above process to complete the image segmentation model training and obtain the segmentation model required for the cleaning equipment, that is, the final trained segmentation model.

其中,DeepLab v3+为一种语义分割算法,可以将图像中的每个像素分配一个类别,但是同一类别之间的对象不会区分。与传统的语义分割算法相比,DeepLab v3+最大的特点在于引入了空洞卷积,在不损失信息的情况下,提取更有效的图像特征,让每个卷积输出都包含较大范围的信息。Among them, DeepLab v3+ is a semantic segmentation algorithm that can assign a category to each pixel in an image, but objects in the same category will not be distinguished. Compared with traditional semantic segmentation algorithms, the biggest feature of DeepLab v3+ is the introduction of dilated convolution, which extracts more effective image features without losing information, so that each convolution output contains a wider range of information.

需要说明的是,上述的图像分割和边缘处理可通过一个集成处理模块实现,也可通过单独的两个处理模块实现;可同时进行,也可先后依次进行。It should be noted that the above-mentioned image segmentation and edge processing can be implemented through one integrated processing module or through two separate processing modules; they can be performed simultaneously or sequentially.

S13,根据分割结果和边缘处理结果,构建待清洁区域的目标地图。S13, constructing a target map of the area to be cleaned according to the segmentation results and the edge processing results.

其中,目标地图可包含障碍物信息。Among them, the target map may include obstacle information.

在本公开的一些实施例中,根据分割结果和边缘处理结果,构建待清洁区域的目标地图,包括:对分割结果中的地面区域进行腐蚀处理,得到第一图像;根据边缘处理结果和第一图像,得到待清洁区域图像中背景与地面接触的边缘线;根据边缘线和分割结果中地面上的可越过障碍物,构建目标地图。In some embodiments of the present disclosure, a target map of the area to be cleaned is constructed based on the segmentation results and the edge processing results, including: performing corrosion processing on the ground area in the segmentation results to obtain a first image; obtaining an edge line where the background and the ground are in contact in the image of the area to be cleaned based on the edge processing results and the first image; and constructing a target map based on the edge line and traversable obstacles on the ground in the segmentation results.

其中,腐蚀处理是基本的形态学运算,是针对图像中的高亮部分而言的,即原图像中高亮部分被蚕食,得到比原图更小的区域。处理过程包括:用一个结构元素的中心覆盖原图像的每个像素,取原图像中被覆盖部分像素的最小值替换被结构元素中心覆盖的原图像像素值。Among them, erosion processing is a basic morphological operation, which is aimed at the highlight part in the image, that is, the highlight part in the original image is eroded to obtain an area smaller than the original image. The processing process includes: covering each pixel of the original image with the center of a structural element, and taking the minimum value of the covered part of the pixels in the original image to replace the original image pixel value covered by the center of the structural element.
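The min-over-neighbourhood rule just described can be written directly in NumPy; this is an illustrative sketch (the kernel size and edge-padding mode are assumptions), not the implementation used by the disclosure:

```python
import numpy as np

def erode(mask, k=3):
    """Morphological erosion with a k x k square structuring element:
    each output pixel is the minimum of the pixels the element covers,
    so bright (e.g. ground) regions shrink inwards."""
    pad = k // 2
    padded = np.pad(mask, pad, mode='edge')  # edge padding keeps the size
    out = np.empty_like(mask)
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + k, x:x + k].min()
    return out
```

Applied to the binary ground mask of the segmentation result, a 3 x 3 erosion peels one pixel off the ground region's boundary, which is exactly the effect used above to expose more of the edge area.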

具体地,分割模型可初步对清洁设备前方视野区域进行分割,但由于标注误差和推理误差,仍无法获得较为准确的区域边缘。为此,本公开对图像分割结果中的地面区域进行腐蚀处理,得到第一图像,以暴露更多边缘区域并避免地面分割结果覆盖边缘。接着,结合边缘处理结果可得到去掉地面区域中边缘的边缘处理结果,进而可获得更准确的背景中的不可越过障碍物与地面的接触边缘,得到相应的边缘线。之后,根据边缘线和分割结果中地面上的可越过障碍物,可构建得到目标地图,该目标地图中可示出清洁设备的可清洁区域、可清洁区域中可越过障碍物的类别等。Specifically, the segmentation model can preliminarily segment the field of view area in front of the cleaning equipment, but due to labeling errors and inference errors, it is still impossible to obtain a relatively accurate edge of the area. To this end, the present disclosure performs corrosion processing on the ground area in the image segmentation result to obtain a first image to expose more edge areas and avoid the ground segmentation result covering the edge. Then, the edge processing result of removing the edge in the ground area can be obtained in combination with the edge processing result, and then a more accurate contact edge between the insurmountable obstacles in the background and the ground can be obtained to obtain the corresponding edge line. Afterwards, based on the edge lines and the surmountable obstacles on the ground in the segmentation results, a target map can be constructed, which can show the cleanable area of the cleaning equipment, the category of obstacles that can be surmounted in the cleanable area, etc.

在本公开的一些实施例中,根据边缘处理结果和第一图像,得到待清洁区域图像中背景与地面接触的边缘线,包括:根据第一图像去掉边缘处理结果中处于地面区域的边缘;按照所述图像采集元件的视线由近及远在去掉处于地面区域的边缘后的边缘处理结果中确定出边缘线。In some embodiments of the present disclosure, an edge line where the background and the ground meet in the image of the area to be cleaned is obtained based on the edge processing result and the first image, including: removing the edge in the ground area in the edge processing result based on the first image; and determining the edge line in the edge processing result after removing the edge in the ground area according to the line of sight of the image acquisition element from near to far.

具体地,边缘处理结果可以是二值化边缘图,根据第一图像可去掉二值化边缘图中处于地面区域的边缘。之后,可沿设置在清洁设备前端的图像采集元件视线方向,由近及远查询去掉处于地面区域的边缘后的二值化边缘图,直至找到边缘像素点位置,过程包括:在去掉处于地面区域的边缘后的二值化边缘图中每一列从距离清洁设备最近的点向远处的点开始查找,找到的第一个非零像素就是边缘像素点。所找到的边缘像素点的连线,即为上述边缘线。Specifically, the edge processing result can be a binary edge map, and the edge in the ground area in the binary edge map can be removed according to the first image. After that, the binary edge map after removing the edge in the ground area can be queried from near to far along the line of sight of the image acquisition element arranged at the front end of the cleaning device until the edge pixel position is found. The process includes: in each column of the binary edge map after removing the edge in the ground area, the search starts from the point closest to the cleaning device to the point far away, and the first non-zero pixel found is the edge pixel. The line connecting the edge pixels found is the above-mentioned edge line.
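A minimal sketch of the column-wise near-to-far query, assuming the bottom row of the image is closest to the cleaning device (an assumption about the camera orientation, not a statement from the disclosure):

```python
import numpy as np

def find_edge_line(edge_map):
    """For each column of a binarized edge map, scan from the bottom
    row (nearest to the camera) upwards and return the row index of
    the first non-zero pixel; -1 marks columns with no edge pixel."""
    h, w = edge_map.shape
    line = np.full(w, -1, dtype=int)
    for x in range(w):
        for y in range(h - 1, -1, -1):  # near -> far
            if edge_map[y, x] != 0:
                line[x] = y
                break
    return line
```

Connecting the returned per-column edge pixels yields the edge line between the background and the ground described above.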

在本公开的一些实施例中,根据边缘线和分割结果中地面上的可越过障碍物,构建目标地图,包括: 根据边缘线确定可通行区域,并根据可通行区域和分割结果中地面上的可越过障碍物,得到可通行区域中可越过障碍物的类别和位置;根据可通行区域中可越过障碍物的类别和位置,构建目标地图。In some embodiments of the present disclosure, constructing a target map according to edge lines and traversable obstacles on the ground in the segmentation results includes: The passable area is determined according to the edge line, and the category and position of the passable obstacles in the passable area are obtained according to the passable area and the passable obstacles on the ground in the segmentation result; and the target map is constructed according to the category and position of the passable obstacles in the passable area.

具体地,待清洁区域图像中边缘线靠近图像采集元件侧的区域为可通行区域,进而根据可通行区域和分割结果中地面上的可越过障碍物(如地毯、污渍等),得到可通行区域中可越过障碍物的类别和位置。之后,将可通行区域中可越过障碍物坐标从图像坐标系转换至世界坐标系下,构建得到目标地图。在进行坐标转换时,可先利用图像坐标系与相机坐标系之间的第一转换矩阵,将可越过障碍物坐标从图像坐标系转换至相机坐标系,再利用相机坐标系与世界坐标系之间的第二转换矩阵,将可越过障碍物坐标从相机坐标系转换至世界坐标系。其中,第一转换矩阵可根据图像采集元件的内参、外参、中心、畸变等得到,第二转换矩阵可根据通过激光雷达算出的相机位姿得到。Specifically, the area where the edge line in the image of the area to be cleaned is close to the image acquisition element is a passable area, and then the category and position of the passable obstacles in the passable area are obtained according to the passable area and the traversable obstacles on the ground in the segmentation result (such as carpets, stains, etc.). After that, the coordinates of the traversable obstacles in the passable area are converted from the image coordinate system to the world coordinate system to construct a target map. When performing coordinate conversion, the first conversion matrix between the image coordinate system and the camera coordinate system can be used to convert the coordinates of the traversable obstacles from the image coordinate system to the camera coordinate system, and then the second conversion matrix between the camera coordinate system and the world coordinate system can be used to convert the coordinates of the traversable obstacles from the camera coordinate system to the world coordinate system. Among them, the first conversion matrix can be obtained according to the intrinsic parameters, extrinsic parameters, center, distortion, etc. of the image acquisition element, and the second conversion matrix can be obtained according to the camera posture calculated by the laser radar.
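Under a flat-ground assumption, the two-step conversion (image coordinate system to camera coordinate system to world coordinate system) can be sketched with chained homogeneous transforms; the 3x3 matrices below are illustrative placeholders for the calibration-derived first and second conversion matrices, not values from the disclosure:

```python
import numpy as np

def pixel_to_world(u, v, H_img2cam, H_cam2world):
    """Chain the two transforms in homogeneous coordinates under a
    flat-ground assumption: image pixel -> camera ground plane ->
    world ground plane. Both matrices are 3x3 homographies that would
    come from intrinsic/extrinsic calibration and the camera pose."""
    p = np.array([u, v, 1.0])
    q = H_cam2world @ (H_img2cam @ p)
    return q[:2] / q[2]  # de-homogenize to (X, Y) on the ground
```

For example, with a pure scaling for the first matrix and a pure translation for the second, the chained product maps a pixel to a metric ground-plane position in one multiplication per stage.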

S14,根据目标地图对清洁设备进行控制。S14, controlling the cleaning equipment according to the target map.

在本公开的一些实施例中,根据目标地图对清洁设备进行控制,包括:根据目标地图更新全局地图;根据更新后的全局地图确定目标清洁区域和目标清洁策略;按照目标清洁策略控制清洁设备对目标清洁区域进行清洁。In some embodiments of the present disclosure, controlling a cleaning device according to a target map includes: updating a global map according to the target map; determining a target cleaning area and a target cleaning strategy according to the updated global map; and controlling the cleaning device to clean the target cleaning area according to the target cleaning strategy.

具体地,首个全局地图可以是自定义构建的,也可以是根据整个工作场景对应的多个目标地图融合得到的。清洁设备可根据全局地图进行导航行进,如从卧室到厨房,同时可在导航行进中采用上述步骤S11-S13构建目标地图,并根据目标地图更新全局地图。之后,可根据更新后的全局地图确定目标清洁区域(如当前未清洁的且可通行区域)和目标清洁策略(如导航过程中避开地毯、污渍;在地毯上抬升拖布,加大吸力;对污渍进行重点清洁;避开激光雷达、线激光等传感器未能识别的障碍物等),按照目标清洁策略控制清洁设备对目标清洁区域进行清洁。由此,可提高清洁设备的工作效率和准确性。Specifically, the first global map can be custom-built or fused according to multiple target maps corresponding to the entire work scene. The cleaning equipment can navigate according to the global map, such as from the bedroom to the kitchen. At the same time, the above steps S11-S13 can be used to build a target map during navigation, and the global map can be updated according to the target map. Afterwards, the target cleaning area (such as the currently uncleaned and passable area) and the target cleaning strategy (such as avoiding carpets and stains during navigation; lifting the mop on the carpet to increase suction; focusing on cleaning stains; avoiding obstacles that cannot be identified by sensors such as lidar and line lasers, etc.) can be determined according to the updated global map, and the cleaning equipment is controlled to clean the target cleaning area according to the target cleaning strategy. In this way, the working efficiency and accuracy of the cleaning equipment can be improved.
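A toy sketch of strategy selection over the updated global map; the cell labels and actions below are hypothetical examples of the strategies listed above (lifting the mop and raising suction on carpet, repeat-cleaning stains, avoiding impassable cells), not terms defined by the disclosure:

```python
# Hypothetical mapping from map cell labels to cleaning actions.
STRATEGY = {
    "carpet": {"mop": "lift", "suction": "high"},
    "stain": {"mop": "down", "suction": "high", "repeat": 2},
    "floor": {"mop": "down", "suction": "normal"},
    "blocked": None,  # impassable cell: navigate around it
}

def plan_cell(label):
    """Return the cleaning action for one grid cell of the updated
    global map, or None if the cell must be avoided."""
    return STRATEGY.get(label, STRATEGY["floor"])
```

Looking the strategy up per cell keeps the planner a pure function of the map, so re-running it after each map update is enough to keep the cleaning behaviour consistent with the latest obstacle information.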

为便于理解,下面结合图4所示的具体实施例流程,以及图5(a)-图5(d)所示的具体场景示例,描述本公开实施例的清洁设备的控制方法:For ease of understanding, the control method of the cleaning device of the present disclosure embodiment is described below in conjunction with the specific embodiment process shown in FIG. 4 and the specific scenario examples shown in FIG. 5( a) to FIG. 5( d ):

如图4所示,首先进行图像获取,得到的待清洁区域图像如图5(a)所示。然后,一方面通过图像分割模型对待清洁区域图像进行分割,得到的分割结果如图5(b)所示,其中B区域表示地面,A区域表示地毯,未标注部分表示背景(包括不可越过障碍物)。之后,对图5(b)中的地面区域进行腐蚀处理,以暴露更多边缘区域并避免地面分割结果覆盖边缘。另一方面,利用Canny边缘处理算法对待清洁区域图像进行边缘处理,得到边缘处理结果。As shown in Figure 4, firstly, the image is acquired, and the resulting image of the area to be cleaned is shown in Figure 5(a). Then, on the one hand, the image of the area to be cleaned is segmented by the image segmentation model, and the segmentation result is shown in Figure 5(b), where area B represents the ground, area A represents the carpet, and the unmarked part represents the background (including obstacles that cannot be crossed). After that, the ground area in Figure 5(b) is eroded to expose more edge areas and avoid the ground segmentation result covering the edge. On the other hand, the Canny edge processing algorithm is used to process the edge of the image of the area to be cleaned, and the edge processing result is obtained.

之后,结合腐蚀处理结果去除边缘处理结果中地面区域的边缘,并沿图像采集元件视线方向由近及远找到边缘处理结果中剩余边缘的边缘像素点位置,得到去除地面区域边缘线,如图5(c)中的粗白线所示。Afterwards, the edge of the ground area in the edge processing result is removed in combination with the corrosion processing result, and the edge pixel positions of the remaining edges in the edge processing result are found from near to far along the line of sight of the image acquisition element to obtain the edge line of the removed ground area, as shown by the thick white line in Figure 5(c).

接着,确定障碍物和地面接触边缘像素坐标,并确定障碍物类别。将到达边缘线前的地面区域视为可通行区域(即无不可越过障碍物的地面),分割模型的分割结果中已包含可越过障碍物类别和图像坐标系下的坐标,通过查询标定文件(包括转换矩阵),将图像坐标系下的可越过障碍物坐标转换到世界坐标系下,如图5(d)所示,该图中示出了地毯位置(即图5(d)中黑色块状部分)和不可越过障碍物位置(即图5(d)线条部分),由此构建得到BEV(Bird's Eye View,鸟瞰图),即目标地图。之后,可结合全局地图,进一步确定目标清洁区域和目标清洁策略,并根据目标清洁区域和目标清洁策略控制清洁设备进行清洁工作。Next, the pixel coordinates of the contact edge between the obstacle and the ground are determined, and the obstacle category is determined. The ground area before reaching the edge line is regarded as a passable area (i.e., the ground without insurmountable obstacles). The segmentation result of the segmentation model already contains the category of surmountable obstacles and the coordinates in the image coordinate system. By querying the calibration file (including the conversion matrix), the coordinates of the surmountable obstacles in the image coordinate system are converted to the world coordinate system, as shown in Figure 5(d), which shows the carpet position (i.e., the black block part in Figure 5(d)) and the insurmountable obstacle position (i.e., the line part in Figure 5(d)). Thus, a BEV (Bird's Eye View), i.e., the target map, is constructed. After that, the target cleaning area and target cleaning strategy can be further determined in combination with the global map, and the cleaning equipment can be controlled to perform cleaning work according to the target cleaning area and target cleaning strategy.

其中,鸟瞰图是一种从上方观看对象或场景的视角。在避障领域,通过传感器(如图像采集元件)获取的数据通常会被转换成BEV表示,以便更好地进行物体检测、路径规划等任务。BEV的优点包括:能够将复杂的三维环境简化为二维图像,可以在计算和存储上节省大量资源;提供了一种独特的视觉效果,使得场景中的物体和空间关系更加清晰可见;在BEV中处理物体检测、跟踪和分类等任务相较于直接在原始3D数据中处理要简单得多;图像采集元件检测会出现近大远小的情况,BEV同类目标尺度差异几乎没有,更容易学习特征尺度一致性。Among them, a bird's-eye view is a perspective of viewing an object or scene from above. In the field of obstacle avoidance, data obtained by sensors (such as image acquisition elements) are usually converted into a BEV representation to better perform tasks such as object detection and path planning. The advantages of BEV include: it simplifies a complex three-dimensional environment into a two-dimensional image, saving substantial computing and storage resources; it provides a unique visual effect that makes the objects and spatial relationships in the scene more clearly visible; handling tasks such as object detection, tracking and classification in BEV is much simpler than processing them directly in the raw 3D data; and images from the image acquisition element exhibit the near-large-far-small effect, whereas targets of the same class in BEV show almost no scale difference, making it easier to learn feature-scale consistency.

下面结合附图及具体实施例对本公开作进一步详细的说明。The present disclosure is further described in detail below with reference to the accompanying drawings and specific embodiments.

本公开的实施例提供一种绕障轨迹的设置方法,参照图6所示,该方法包括以下步骤:An embodiment of the present disclosure provides a method for setting an obstacle avoidance trajectory. As shown in FIG. 6 , the method includes the following steps:

步骤101:确定移动设备的第一绕障轨迹。Step 101: Determine a first obstacle circumvention trajectory of a mobile device.

可以理解的,移动设备包括但不限于扫地机器人(robot cleaner)、玻璃窗清扫机器人、园区配送机器人,本公开对此不做限定。It will be understood that mobile devices include, but are not limited to, robot cleaners, window cleaning robots, and campus delivery robots, and the present disclosure does not limit this.

本公开实施例中,以移动设备是扫地机器人为例,扫地机器人又称自动打扫机、智能吸尘、机器人吸尘器等,是智能家用电器的一种,能自动在房间内完成地板清理工作。一般采用刷扫和真空方式,将地面杂物先吸纳进入自身的垃圾收纳盒,从而完成地面清理的功能。一般来说,将完成清扫、吸尘、擦地工作的机器人,也统一归为扫地机器人。扫地机器人的机身为自动化技术的可移动装置,与有集尘盒的真空吸尘装置,配合机身设定控制路径,在室内反复行走,如沿边清扫、集中清扫、随机清扫、直线清扫等路径打扫,并辅以边刷、中央主刷旋转、抹布等方式,加强打扫效果,以完成拟人化居家清洁效果。In the disclosed embodiments, the mobile device is a sweeping robot as an example. The sweeping robot is also called an automatic sweeper, smart vacuum cleaner, robot vacuum cleaner, etc. It is a kind of smart home appliance that can automatically complete the floor cleaning work in the room. Generally, the brushing and vacuuming methods are used to absorb the debris on the ground into its own garbage storage box, thereby completing the function of floor cleaning. Generally speaking, robots that complete the work of sweeping, vacuuming, and mopping are also uniformly classified as sweeping robots. The body of the sweeping robot is a movable device of automation technology, and a vacuum cleaner with a dust collection box cooperates with the body to set a control path, and walks repeatedly in the room, such as sweeping along the edge, centralized sweeping, random sweeping, straight line sweeping and other path sweeping, and is supplemented by side brushes, central main brush rotation, rags and other methods to enhance the cleaning effect, so as to complete the anthropomorphic home cleaning effect.

扫地机器人通常配置了多类传感器,可以包括激光雷达、摄像头、线激光、位置敏感传感器(Position Sensitive detector,PSD)、碰撞板和惯性检测单元(Inertial Measurement Unit,IMU);激光雷达可以测量机器人周围的障碍物距离和角度;线激光和PSD可以检测地面上的障碍物和边缘;碰撞板可以检测机器人附近的碰撞情况;IMU可以检测机器人的位姿;将传感器采集到的数据进行处理,构建扫地机器人所在环境的地图;示例性的,可以通过人工智能(Artificial Intelligence,AI)将采集到的传感器数据进行障碍物识别,将扫地机器人所在环境中的障碍物进行分类,将障碍物的信息融合到同步定位与地图构建(Simultaneous Localization And Mapping,SLAM)系统的概率栅格地图中,生成有障碍物标记的环境地图,根据环境地图设置扫地机器人的导航轨迹,使扫地机器人能够避开障碍物。A sweeping robot is usually equipped with multiple types of sensors, which may include lidar, camera, line laser, position sensitive detector (PSD), collision plate and inertial measurement unit (IMU); lidar can measure the distance and angle of obstacles around the robot; line laser and PSD can detect obstacles and edges on the ground; collision plate can detect collisions near the robot; IMU can detect the position and posture of the robot; the data collected by the sensor is processed to build a map of the environment where the sweeping robot is located; illustratively, the collected sensor data can be used for obstacle recognition through artificial intelligence (AI), the obstacles in the environment where the sweeping robot is located can be classified, and the obstacle information can be integrated into the probability grid map of the simultaneous localization and mapping (SLAM) system to generate an environment map with obstacle markings, and the navigation trajectory of the sweeping robot can be set according to the environment map, so that the sweeping robot can avoid obstacles.
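The passage above does not specify how obstacle detections are fused into the probability grid map of the SLAM system; one common scheme (assumed here only for illustration) is a Bayesian log-odds update per grid cell:

```python
import math

def update_cell(p_prior, p_meas):
    """One Bayesian log-odds update of an occupancy-grid cell:
    combine the cell's prior occupancy probability with the inverse
    sensor model probability of the new measurement, then convert
    the summed log-odds back to a probability."""
    l = math.log(p_prior / (1 - p_prior)) + math.log(p_meas / (1 - p_meas))
    return 1 / (1 + math.exp(-l))
```

Repeated detections of the same obstacle drive the cell probability toward 1, while repeated free-space observations drive it toward 0, which is why such grids tolerate occasional noisy sensor readings.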

扫地机器人所处环境中的障碍物会发生变化,因此,扫地机器人的导航轨迹也会随障碍物的变化而改变,第一绕障轨迹可以为根据标记的第一障碍物为扫地机器人设置的导航轨迹,第一障碍物可以包括但不限于体积较大、位置相对固定的障碍物;例如:墙面、柜体、冰箱和饮水机等。The obstacles in the environment of the sweeping robot will change, so the navigation trajectory of the sweeping robot will also change with the changes of the obstacles. The first obstacle avoidance trajectory can be a navigation trajectory set for the sweeping robot according to the marked first obstacle. The first obstacle can include but is not limited to obstacles with large volume and relatively fixed position; for example: walls, cabinets, refrigerators and water dispensers, etc.

在实际应用中,确定移动设备的第一绕障轨迹可以理解为根据环境地图确定扫地机器人避开第一障碍物的导航轨迹。In practical applications, determining the first obstacle circumventing trajectory of the mobile device can be understood as determining the navigation trajectory of the cleaning robot to avoid the first obstacle according to the environment map.

步骤102:若第一绕障轨迹满足更新条件,更新第一绕障轨迹,得到第二绕障轨迹;其中,第二绕障轨迹包括的第二障碍物与第一绕障轨迹包括的第一障碍物具有不同的物体特征。Step 102: If the first obstacle avoidance trajectory meets the update condition, the first obstacle avoidance trajectory is updated to obtain a second obstacle avoidance trajectory; wherein a second obstacle included in the second obstacle avoidance trajectory has different object features from the first obstacle included in the first obstacle avoidance trajectory.

可以理解的,更新条件可以根据实际情况进行设置,本公开对此不做具体限定。示例性的,实际情况可以包括扫地机器人按照第一绕障轨迹进行清扫的过程中,确定有第二障碍物;实际情况还可以包括在第一绕障轨迹上添加第二障碍物,例如,以人机交互界面显示第一绕障轨迹,在第一绕障轨迹上添加第二障碍物。It is understandable that the update condition can be set according to the actual situation, and the present disclosure does not specifically limit this. For example, the actual situation may include that the sweeping robot determines that there is a second obstacle during the cleaning process according to the first obstacle avoidance trajectory; the actual situation may also include adding the second obstacle to the first obstacle avoidance trajectory, for example, displaying the first obstacle avoidance trajectory in a human-computer interaction interface and adding the second obstacle to the first obstacle avoidance trajectory.

第二障碍物可以为体积较小、位置可灵活变动的实体障碍物,本公开对此不做限定。示例性的,第二障碍物可以是柱状实体障碍物,例如:桌腿。The second obstacle may be a physical obstacle with a small volume and a flexible position, which is not limited in the present disclosure. For example, the second obstacle may be a columnar physical obstacle, such as a table leg.

本公开实施例中,在扫地机器人按照第一绕障轨迹移动过程中,检测到第二障碍物,可在环境地图中标记第二障碍物;也可以通过人机交互界面在第一绕障轨迹上添加第二障碍物,以此更新第一绕障轨迹,得到第二绕障轨迹;图7为根据扫地机器人所在环境构建的栅格地图,图8是在图7的基础上标记第二障碍物后的栅格地图。In the disclosed embodiment, when the sweeping robot is moving along the first obstacle avoidance trajectory, a second obstacle is detected and the second obstacle can be marked in the environment map; the second obstacle can also be added to the first obstacle avoidance trajectory through the human-computer interaction interface to update the first obstacle avoidance trajectory and obtain the second obstacle avoidance trajectory; Figure 7 is a grid map constructed according to the environment in which the sweeping robot is located, and Figure 8 is a grid map after marking the second obstacle on the basis of Figure 7.

步骤103:控制移动设备沿第二绕障轨迹移动,以使移动设备在移动过程中绕开第一障碍物和第二障碍物。Step 103: Control the mobile device to move along the second obstacle avoidance trajectory so that the mobile device avoids the first obstacle and the second obstacle during the movement.

本公开实施例中,控制移动设备沿第二绕障轨迹移动可以理解为控制扫地机器人按照第二绕障轨迹移动,在完成清扫的过程中避开第一障碍物和第二障碍物。In the disclosed embodiment, controlling the mobile device to move along the second obstacle avoidance trajectory can be understood as controlling the sweeping robot to move along the second obstacle avoidance trajectory to avoid the first obstacle and the second obstacle while completing the cleaning process.

由上述内容可知,本公开提供的绕障轨迹的设置方法,通过确定移动设备的第一绕障轨迹,若第一绕障轨迹满足更新条件,更新第一绕障轨迹得到第二绕障轨迹,以使移动设备在移动过程中绕开第一障碍物和第二障碍物,解决了相关技术中室内移动机器人只能按照设定好的绕障轨迹移动,避障方式单一的问题,使环境地图更符合真实环境,实现了室内移动机器人在复杂环境中的自主导航和安全避障。From the above, it can be seen that the obstacle avoidance trajectory setting method provided by the present disclosure determines a first obstacle avoidance trajectory of a mobile device and, if the first obstacle avoidance trajectory meets the update condition, updates the first obstacle avoidance trajectory to obtain a second obstacle avoidance trajectory, so that the mobile device avoids the first obstacle and the second obstacle during movement. This solves the problem in the related art that an indoor mobile robot can only move along a preset obstacle avoidance trajectory and has a single obstacle avoidance method, makes the environment map better match the real environment, and realizes autonomous navigation and safe obstacle avoidance of the indoor mobile robot in a complex environment.

在本公开的一些实施例中,步骤102若第一绕障轨迹满足更新条件,更新第一绕障轨迹,得到第二绕障轨迹,可以通过如下步骤实现:In some embodiments of the present disclosure, if the first obstacle avoidance trajectory satisfies the update condition, step 102 updates the first obstacle avoidance trajectory to obtain the second obstacle avoidance trajectory, which can be implemented by the following steps:

控制移动设备沿第一绕障轨迹移动,以使移动设备在移动过程中绕开第一障碍物;Controlling the mobile device to move along a first obstacle avoidance trajectory so that the mobile device avoids the first obstacle during the movement;

若移动设备沿第一绕障轨迹移动过程中存在第二障碍物,确定第一绕障轨迹满足更新条件,基于第二障碍物更新第一绕障轨迹,得到第二绕障轨迹。If a second obstacle exists when the mobile device moves along the first obstacle avoidance trajectory, it is determined that the first obstacle avoidance trajectory meets an update condition, and the first obstacle avoidance trajectory is updated based on the second obstacle to obtain a second obstacle avoidance trajectory.

本公开实施例中,扫地机器人按照第一绕障轨迹移动时,扫地机器人会自动绕开第一障碍物,在此过程中,扫地机器人可以通过自身配置的多类传感器检测障碍物,若检测到第二障碍物,则对第一绕障轨迹进行更新,得到第二绕障轨迹。In the disclosed embodiment, when the sweeping robot moves along the first obstacle avoidance trajectory, the sweeping robot will automatically avoid the first obstacle. During this process, the sweeping robot can detect obstacles through multiple types of sensors configured by itself. If a second obstacle is detected, the first obstacle avoidance trajectory is updated to obtain a second obstacle avoidance trajectory.

在本公开的一些实施例中,第一障碍物包括基于第一传感器的第一传感数据所确定的障碍物,和/或第二障碍物包括基于第二传感器的第二传感数据感应所确定的障碍物。In some embodiments of the present disclosure, the first obstacle includes an obstacle determined based on first sensing data of a first sensor, and/or the second obstacle includes an obstacle determined based on second sensing data of a second sensor.

本公开实施例中,第一传感器可以是激光雷达;第一传感数据可以包括扫地机器人的激光雷达检测到的障碍物数据;第二传感器可以是除激光雷达以外的传感器,例如:线激光、PSD、碰撞板和摄像头等;第二传感数据可以包括扫地机器人除激光雷达以外的传感器检测到的障碍物数据。In the disclosed embodiment, the first sensor may be a lidar; the first sensing data may include obstacle data detected by the lidar of the sweeping robot; the second sensor may be a sensor other than the lidar, such as a line laser, PSD, collision plate, camera, etc.; the second sensing data may include obstacle data detected by the sensors of the sweeping robot other than the lidar.

在本公开的一些实施例中,控制移动设备沿所述第一绕障轨迹移动时,该方法还包括:In some embodiments of the present disclosure, when controlling the mobile device to move along the first obstacle avoidance trajectory, the method further includes:

获取移动设备沿第一绕障轨迹移动过程中,移动设备的避障数据;Obtaining obstacle avoidance data of the mobile device during the movement of the mobile device along the first obstacle avoidance trajectory;

若避障数据满足避障条件,确定避障数据对应的避障对象为第二障碍物。If the obstacle avoidance data satisfies the obstacle avoidance condition, it is determined that the obstacle avoidance object corresponding to the obstacle avoidance data is the second obstacle.

可以理解地,避障数据为扫地机器人在移动过程中避开障碍物有关的数据,可以通过扫地机器人自身的多类传感器进行采集;避障对象为扫地机器人在移动过程中绕开的障碍物;避障条件可以根据实际需要进行设定,本公开在此不做限定。示例性的,实际需要可以是需要避开的障碍物类型。It can be understood that the obstacle avoidance data is data related to the obstacle avoidance of the sweeping robot during movement, which can be collected by multiple types of sensors of the sweeping robot itself; the obstacle avoidance object is the obstacle that the sweeping robot avoids during movement; the obstacle avoidance condition can be set according to actual needs, which is not limited in this disclosure. For example, the actual need can be the type of obstacle that needs to be avoided.

本公开实施例中,获取移动设备沿第一绕障轨迹移动过程中移动设备的避障数据可以理解为获取扫地机器人沿第一绕障轨迹移动过程中采集的避障数据,对避障数据进行处理,确认扫地机器人在此过程中遇到的障碍物类型,若有障碍物类型满足避障条件,确定满足避障条件的障碍物为第二障碍物。In the disclosed embodiment, obtaining the obstacle avoidance data of the mobile device during the movement of the mobile device along the first obstacle avoidance trajectory can be understood as obtaining the obstacle avoidance data collected by the sweeping robot during the movement of the first obstacle avoidance trajectory, processing the obstacle avoidance data, confirming the type of obstacles encountered by the sweeping robot in this process, and if there is an obstacle type that meets the obstacle avoidance condition, determining that the obstacle that meets the obstacle avoidance condition is the second obstacle.

在本公开的一些实施例中,避障数据包括位姿数据,避障数据满足避障条件,包括:In some embodiments of the present disclosure, the obstacle avoidance data includes posture data, and the obstacle avoidance data satisfies the obstacle avoidance conditions, including:

基于位姿数据确定移动设备沿第一绕障轨迹移动过程中的偏航轨迹;Determine a yaw trajectory of the mobile device during movement along a first obstacle avoidance trajectory based on the posture data;

若偏航轨迹满足轨迹特征,确定避障数据满足避障条件。If the yaw trajectory meets the trajectory characteristics, it is determined that the obstacle avoidance data meets the obstacle avoidance conditions.

可以理解地,位姿数据包括但不限于扫地机器人的移动方向和坐标位置;偏航轨迹可以理解为扫地机器人偏离第一绕障轨迹的移动轨迹;轨迹特征可以根据实际情况进行设定;示例性的,轨迹特征可以是绕圈轨迹,即圆圈形的轨迹,具体可以通过霍夫圆检测算法识别扫地机器人在移动过程中的绕圈轨迹。图9为扫地机器人沿第一绕障轨迹的移动轨迹示意图,图10是在图9的基础上标记出绕圈轨迹的示意图。It can be understood that the posture data includes but is not limited to the moving direction and coordinate position of the sweeping robot; the yaw trajectory can be understood as the moving trajectory of the sweeping robot deviating from the first obstacle circumvention trajectory; the trajectory feature can be set according to the actual situation; for example, the trajectory feature can be a circle trajectory, that is, a circular trajectory, and the circle trajectory of the sweeping robot during movement can be specifically identified by the Hough circle detection algorithm. Figure 9 is a schematic diagram of the moving trajectory of the sweeping robot along the first obstacle circumvention trajectory, and Figure 10 is a schematic diagram of the circle trajectory marked on the basis of Figure 9.

本公开实施例中,基于位姿数据确定移动设备沿第一绕障轨迹移动过程中的偏航轨迹可以理解为根据扫地机器人的移动方向和坐标位置确定在移动过程中偏离第一绕障轨迹的偏航轨迹,若偏航轨迹为绕圈轨迹,确定绕圈轨迹对应的障碍物满足避障条件。In the embodiment of the present disclosure, determining the yaw trajectory of the mobile device during its movement along the first obstacle circumvention trajectory based on the posture data can be understood as determining the yaw trajectory that deviates from the first obstacle circumvention trajectory during the movement according to the moving direction and coordinate position of the sweeping robot; if the yaw trajectory is a circular trajectory, determining that the obstacle corresponding to the circular trajectory meets the obstacle avoidance condition.
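The circling check can be sketched without a full Hough transform. The heuristic below is a simplified stand-in under stated assumptions: `is_circling`, its tolerances, and the sample trajectories are all illustrative, and it flags a pose trajectory as a loop when it returns near its start while keeping a roughly constant distance from its centroid, which is the same geometric condition the Hough circle detection in the text captures.

```python
import math

def is_circling(points, radius_tol=0.2, close_tol=0.3):
    """Flag an (x, y) pose trajectory as a closed loop: it must return
    near its start and keep a roughly constant radius from its centroid.
    radius_tol and close_tol are illustrative tolerances."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / len(radii)
    if mean_r == 0.0:
        return False
    spread_ok = (max(radii) - min(radii)) / mean_r < radius_tol
    closes = math.hypot(points[0][0] - points[-1][0],
                        points[0][1] - points[-1][1]) < close_tol * mean_r
    return spread_ok and closes

# A loop around a table leg vs. a straight sweeping pass.
circle = [(math.cos(2 * math.pi * i / 36), math.sin(2 * math.pi * i / 36))
          for i in range(37)]
line = [(0.1 * i, 0.0) for i in range(37)]
```

A production system would run the Hough circle transform on a rasterized trajectory image instead, which also yields the circle center and radius directly.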

在本公开的一些实施例中,避障数据包括碰撞数据,避障数据满足避障条件,包括: In some embodiments of the present disclosure, the obstacle avoidance data includes collision data, and the obstacle avoidance data satisfies the obstacle avoidance conditions, including:

基于碰撞数据确定移动设备沿第一绕障轨迹移动过程中的碰撞角度和碰撞强度;Determine, based on the collision data, a collision angle and a collision intensity of the mobile device during movement along the first obstacle circumvention trajectory;

若碰撞角度满足预设角度且碰撞强度满足预设力度,确定避障数据满足避障条件。If the collision angle meets the preset angle and the collision intensity meets the preset intensity, it is determined that the obstacle avoidance data meets the obstacle avoidance condition.

可以理解的,碰撞数据为扫地机器人在移动过程中碰撞板采集到的数据;预设角度和预设力度可以根据实际情况进行设置,本公开对此不做限定。实际情况可以是障碍物材质硬度和障碍物表面弧度。It is understandable that the collision data is the data collected by the collision plate of the sweeping robot during movement; the preset angle and preset force can be set according to actual conditions, and the present disclosure does not limit this. The actual conditions may be the hardness of the obstacle material and the curvature of the obstacle surface.

本公开实施例中,根据碰撞板采集到的数据确定扫地机器人在移动过程中碰撞不同障碍物的角度和强度,确定是否存在碰撞角度满足预设角度且碰撞强度满足预设力度的障碍物,若存在,确定该障碍物满足避障条件。In the embodiment of the present disclosure, the angle and intensity of the collision of the sweeping robot with different obstacles during movement are determined based on the data collected by the collision plate, and it is determined whether there is an obstacle whose collision angle meets the preset angle and whose collision intensity meets the preset intensity. If so, it is determined that the obstacle meets the obstacle avoidance condition.
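The angle-and-intensity test reduces to a pair of threshold comparisons. The sketch below uses made-up threshold values, since the disclosure leaves the preset angle and preset intensity to be tuned according to obstacle material hardness and surface curvature.

```python
def meets_collision_condition(angle_deg, intensity,
                              angle_range=(30.0, 150.0), min_intensity=0.5):
    """A bumper hit counts toward marking a second obstacle only when the
    impact angle lies within a preset range and the impact strength reaches
    a preset level; both thresholds here are illustrative assumptions."""
    lo, hi = angle_range
    return lo <= angle_deg <= hi and intensity >= min_intensity
```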

在本公开的一些实施例中,避障数据包括障碍物对应的面积和轮廓,避障数据满足避障条件,包括:In some embodiments of the present disclosure, the obstacle avoidance data includes the area and contour corresponding to the obstacle, and the obstacle avoidance data satisfies the obstacle avoidance conditions, including:

若障碍物对应的面积满足预设面积且轮廓满足预设形状,确定避障数据满足避障条件。If the area corresponding to the obstacle meets the preset area and the outline meets the preset shape, it is determined that the obstacle avoidance data meets the obstacle avoidance condition.

可以理解的,障碍物对应的面积和轮廓可以根据扫地机器人绕开障碍物的轨迹计算得到;预设形状和预设面积可以根据所需要标记的障碍物类型来设置,本公开对此不做限定。示例性的,预设形状可以为圆形,预设面积为100cm²。It is understandable that the area and contour corresponding to the obstacle can be calculated from the trajectory along which the sweeping robot skirts the obstacle; the preset shape and preset area can be set according to the type of obstacle to be marked, which is not limited in the present disclosure. For example, the preset shape may be a circle and the preset area may be 100 cm².

本公开实施例中,根据扫地机器人在移动过程中的轨迹计算是否存在满足预设面积且轮廓满足预设形状的障碍物,若存在,确定该障碍物满足避障条件。In the embodiment of the present disclosure, it is calculated based on the trajectory of the sweeping robot during movement whether there is an obstacle that meets the preset area and the outline of which meets the preset shape. If so, it is determined that the obstacle meets the obstacle avoidance condition.
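The area-and-contour test can be computed directly from the detour contour. The sketch below is illustrative (function names and the circularity cutoff are assumptions): it uses the shoelace formula for the area and the isoperimetric ratio 4πA/P², which equals 1 for a perfect circle, as a simple "preset shape is a circle" check, with the 100 cm² preset area from the text.

```python
import math

def polygon_area_perimeter(pts):
    """Shoelace area and perimeter of the closed contour traced while the
    robot skirts an obstacle (pts are (x, y) vertices in cm)."""
    area = perim = 0.0
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        area += x1 * y2 - x2 * y1
        perim += math.hypot(x2 - x1, y2 - y1)
    return abs(area) / 2.0, perim

def is_round_and_small(pts, max_area=100.0, min_circularity=0.8):
    """4*pi*A/P^2 is 1 for a perfect circle and drops for other shapes;
    the 0.8 cutoff is an illustrative assumption."""
    area, perim = polygon_area_perimeter(pts)
    circularity = 4.0 * math.pi * area / (perim * perim)
    return area <= max_area and circularity >= min_circularity

# Detour around a table leg (radius ~5 cm) vs. a 20 cm x 20 cm box.
leg = [(5 * math.cos(2 * math.pi * i / 36), 5 * math.sin(2 * math.pi * i / 36))
       for i in range(36)]
box = [(0.0, 0.0), (20.0, 0.0), (20.0, 20.0), (0.0, 20.0)]
```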

在本公开的一些实施例中,控制移动设备沿第一绕障轨迹移动时,该方法还包括:In some embodiments of the present disclosure, when controlling the mobile device to move along the first obstacle avoidance trajectory, the method further includes:

获取移动设备沿第一绕障轨迹移动过程中绕开的障碍物对应的面积;Obtaining the area corresponding to the obstacle avoided by the mobile device during the movement along the first obstacle avoidance trajectory;

若面积小于预设阈值,确定存在第二障碍物。If the area is smaller than the preset threshold, it is determined that a second obstacle exists.

可以理解的,第二障碍物为柱状实体障碍物,预设阈值可以根据柱状实体障碍物进行设置,例如:将预设阈值设置为10cm²。It can be understood that the second obstacle is a columnar physical obstacle, and the preset threshold can be set according to the columnar physical obstacle, for example, the preset threshold is set to 10 cm².

本公开实施例中,计算移动设备沿第一绕障轨迹移动过程中绕开的障碍物对应的面积,确定是否存在对应面积小于预设阈值的障碍物,若存在,确定该障碍物为柱状实体障碍物。In the embodiment of the present disclosure, the area corresponding to the obstacle avoided by the mobile device during the movement along the first obstacle avoidance trajectory is calculated to determine whether there is an obstacle with a corresponding area smaller than a preset threshold. If so, the obstacle is determined to be a columnar solid obstacle.

在本公开的一些实施例中,步骤101确定移动设备的第一绕障轨迹,可以通过如下步骤实现:In some embodiments of the present disclosure, step 101 of determining a first obstacle circumvention trajectory of a mobile device may be implemented by the following steps:

获取第一障碍物信息;Obtaining first obstacle information;

基于第一障碍物信息构建环境地图,生成第一绕障轨迹。An environment map is constructed based on the first obstacle information, and a first obstacle avoidance trajectory is generated.

本公开实施例中,通过扫地机器人的传感器采集第一障碍物信息,将第一障碍物信息融合到同SLAM系统的概率栅格地图中,生成有第一障碍物标记的环境地图,根据环境地图设置扫地机器人的第一绕障轨迹。In the disclosed embodiment, the first obstacle information is collected by the sensor of the sweeping robot, the first obstacle information is integrated into the probability grid map of the same SLAM system, an environment map marked with the first obstacle is generated, and the first obstacle avoidance trajectory of the sweeping robot is set according to the environment map.

在本公开的一些实施例中,步骤102若第一绕障轨迹满足更新条件,更新第一绕障轨迹,得到第二绕障轨迹,还可以通过如下步骤实现:In some embodiments of the present disclosure, if the first obstacle avoidance trajectory satisfies the update condition, step 102 updates the first obstacle avoidance trajectory to obtain the second obstacle avoidance trajectory, which can also be implemented by the following steps:

在人机交互界面上显示第一绕障轨迹;Displaying the first obstacle avoidance trajectory on the human-computer interaction interface;

当移动设备沿第一绕障轨迹移动过程中,在人机交互界面上接收针对第一绕障轨迹标记的第二障碍物,确定第一绕障轨迹满足更新条件;When the mobile device moves along the first obstacle avoidance trajectory, receiving a second obstacle marked for the first obstacle avoidance trajectory on the human-computer interaction interface, and determining that the first obstacle avoidance trajectory meets the update condition;

基于第二障碍物更新第一绕障轨迹,得到第二绕障轨迹,并在人机交互界面上显示第二绕障轨迹。The first obstacle avoidance trajectory is updated based on the second obstacle to obtain a second obstacle avoidance trajectory, and the second obstacle avoidance trajectory is displayed on the human-computer interaction interface.

本公开实施例中,人机交互界面可以包括但不限于手机界面和平板界面。例如:手机或平板中应用程序的交互界面。In the embodiments of the present disclosure, the human-computer interaction interface may include but is not limited to a mobile phone interface and a tablet interface, for example: an interaction interface of an application in a mobile phone or tablet.

本公开实施例中,可以在手机应用程序的交互界面显示扫地机器人的第一绕障轨迹,移动设备沿第一绕障轨迹移动过程中,可以通过手机应用程序接收第二障碍物的标记信息,接收到标记信息后则更新第一绕障轨迹得到第二绕障轨迹,在手机应用程序的交互界面显示扫地机器人的第二绕障轨迹,以实时更新扫地机器人的绕障轨迹。In the disclosed embodiment, a first obstacle avoidance trajectory of the sweeping robot can be displayed on the interactive interface of the mobile phone application. While the mobile device is moving along the first obstacle avoidance trajectory, marking information of the second obstacle can be received through the mobile phone application. After receiving the marking information, the first obstacle avoidance trajectory is updated to obtain a second obstacle avoidance trajectory. The second obstacle avoidance trajectory of the sweeping robot is displayed on the interactive interface of the mobile phone application to update the obstacle avoidance trajectory of the sweeping robot in real time.

在一个可实现的场景中,参照图11所示,扫地机器人绕障轨迹的设置可以通过如下步骤实现: In a feasible scenario, as shown in FIG. 11 , the setting of the obstacle avoidance trajectory of the sweeping robot can be achieved by the following steps:

S601:通过SLAM构建室内移动机器人地图。S601: Constructing indoor mobile robot maps through SLAM.

S602:结合碰撞板、激光雷达、线激光、PSD传感器和AI障碍物识别,设置扫地机器人的导航轨迹。S602: Combine the collision plate, lidar, line laser, PSD sensor and AI obstacle recognition to set the navigation trajectory of the sweeping robot.

S603:获取机器人绕障时的位姿,得到绕障轨迹。S603: Obtain the position and posture of the robot when circumventing obstacles, and obtain the obstacle circumvention trajectory.

S604:使用图形学算法二值化、高斯模糊、霍夫圆检测等得到若干个绕障轨迹的绕障中心。S604: Obstacle avoidance centers of several obstacle avoidance trajectories are obtained by using graphics algorithms such as binarization, Gaussian blur, and Hough circle detection.

S605:在绕障中心处绘制障碍物标记,并在APP端显示地图、轨迹和障碍物,避免轨迹绕空现象。S605: Draw an obstacle mark at each obstacle avoidance center, and display the map, trajectory and obstacles on the APP, so that the displayed trajectory no longer appears to detour around empty space.
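Steps S603–S605 above can be sketched as a small image-processing pipeline. The snippet below is a pure-Python simplification under stated assumptions: a box blur stands in for the Gaussian blur, and a blurred-peak search stands in for Hough circle detection; all function names are illustrative.

```python
def binarize(grid, thresh=1):
    """Turn a grid of trajectory hit counts into a 0/1 image."""
    return [[1 if v >= thresh else 0 for v in row] for row in grid]

def blur3x3(img):
    """Box blur as a lightweight stand-in for the Gaussian-blur step."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = cnt = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        acc += img[ny][nx]
                        cnt += 1
            out[y][x] = acc / cnt
    return out

def detour_center(trajectory_grid):
    """Peak of the blurred hit map approximates the circle center the
    Hough-circle step would return for a tight detour loop."""
    blurred = blur3x3(binarize(trajectory_grid))
    v, x, y = max((v, x, y) for y, row in enumerate(blurred)
                  for x, v in enumerate(row))
    return x, y

# Trajectory hits forming a small loop around cell (3, 3).
traj = [[0] * 7 for _ in range(7)]
for x, y in [(2, 2), (3, 2), (4, 2), (2, 3), (4, 3), (2, 4), (3, 4), (4, 4)]:
    traj[y][x] = 1
```

The obstacle mark of S605 would then be drawn at the returned center cell.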

本公开的实施例提供一种绕障轨迹的设置方法,参照图12所示,该方法包括以下步骤:An embodiment of the present disclosure provides a method for setting an obstacle avoidance trajectory. As shown in FIG. 12 , the method includes the following steps:

S701:获取第一地图。S701: Acquire a first map.

本公开实施例中,第一地图可以是能够标记障碍物的任意地图;例如:通过SLAM系统构建的栅格地图;第一地图中可以包括扫地机器人已经识别的第一障碍物。In the embodiment of the present disclosure, the first map may be any map that can mark obstacles; for example: a grid map constructed by a SLAM system; the first map may include the first obstacle that the sweeping robot has identified.

S702:获取移动设备按照第一地图移动时的障碍物信息。S702: Obtain obstacle information when the mobile device moves according to the first map.

本公开实施例中,扫地机器人在移动过程中,可以根据自身设置的多种传感器获取到不同的障碍物信息。In the disclosed embodiment, the sweeping robot can obtain different obstacle information according to various sensors provided on the robot during movement.

S703:根据障碍物信息,更新第一地图上的障碍物,并显示更新后的障碍物。S703: updating obstacles on the first map according to the obstacle information, and displaying the updated obstacles.

本公开实施例中,扫地机器人可以根据移动过程中传感器采集到的障碍物信息,确定不同于第一障碍物的障碍物,以此更新第一地图上的障碍物;更新后的障碍物可以通过人机交互界面在第一地图上显示。In the disclosed embodiment, the sweeping robot can determine obstacles different from the first obstacle based on obstacle information collected by the sensor during movement, thereby updating the obstacles on the first map; the updated obstacles can be displayed on the first map through the human-computer interaction interface.

本公开的一些实施例中,更新后的障碍物包括未更新前的第一障碍物、新增加的第二障碍物,第二障碍物与第一障碍物具有不同的物体特征。In some embodiments of the present disclosure, the updated obstacle includes a first obstacle before updating and a newly added second obstacle, and the second obstacle has different object features from the first obstacle.

本公开实施例中,扫地机器人在移动过程中根据障碍物信息确定不同类别的障碍物,以此区分与第一障碍物不同的第二障碍物,并将第二障碍物更新在第一地图上。In the disclosed embodiment, the sweeping robot determines different categories of obstacles according to obstacle information during movement, thereby distinguishing a second obstacle different from the first obstacle, and updates the second obstacle on the first map.

本公开的一些实施例中,步骤S703根据障碍物信息,更新第一地图上的障碍物,可以通过如下步骤实现:In some embodiments of the present disclosure, step S703 updates obstacles on the first map according to the obstacle information, which can be achieved by the following steps:

若障碍物信息表征移动设备按照第一地图移动时存在第二障碍物,更新第一地图上的障碍物。If the obstacle information indicates that a second obstacle exists when the mobile device moves according to the first map, the obstacle on the first map is updated.

本公开实施例中,根据障碍物信息确定第二障碍物的方法与前述方法相同,在此不做赘述。In the embodiment of the present disclosure, the method for determining the second obstacle according to the obstacle information is the same as the above method and will not be described in detail here.

本公开的一些实施例中,第一障碍物包括基于第一传感器的第一传感数据所确定的障碍物,和/或第二障碍物包括基于第二传感器的第二传感数据所确定的障碍物。In some embodiments of the present disclosure, the first obstacle includes an obstacle determined based on first sensing data of a first sensor, and/or the second obstacle includes an obstacle determined based on second sensing data of a second sensor.

本公开实施例中,第一传感器可以是激光雷达;第一传感数据可以包括扫地机器人的激光雷达检测到的障碍物数据;第二传感器可以是除激光雷达以外的传感器,例如:线激光、PSD、碰撞板和摄像头等;第二传感数据可以包括扫地机器人除激光雷达以外的传感器检测到的障碍物数据。In the disclosed embodiment, the first sensor may be a lidar; the first sensing data may include obstacle data detected by the lidar of the sweeping robot; the second sensor may be a sensor other than the lidar, such as a line laser, PSD, collision plate, camera, etc.; the second sensing data may include obstacle data detected by the sensors of the sweeping robot other than the lidar.

本公开的一些实施例中,步骤S702获取移动设备按照第一地图移动时的障碍物信息,可以通过如下步骤实现:In some embodiments of the present disclosure, step S702 of obtaining obstacle information when the mobile device moves according to the first map can be implemented by the following steps:

在人机交互界面上显示第一障碍物;Displaying the first obstacle on the human-computer interaction interface;

当移动设备沿第一地图移动过程中,在人机交互界面上接收标记的第二障碍物,得到障碍物信息。When the mobile device moves along the first map, a marked second obstacle is received on the human-computer interaction interface to obtain obstacle information.

本公开实施例中,显示第一障碍物和接收障碍物信息的方法与前述方法相同,在此不做赘述。In the embodiment of the present disclosure, the method of displaying the first obstacle and receiving obstacle information is the same as the above method, which will not be described in detail here.

本公开的一些实施例中,步骤S703根据障碍物信息,更新第一地图上的障碍物之后,该方法包括:In some embodiments of the present disclosure, after step S703 updates obstacles on the first map according to obstacle information, the method includes:

控制移动设备在移动过程中绕开第一障碍物和第二障碍物。The mobile device is controlled to avoid the first obstacle and the second obstacle during movement.

本公开实施例中,扫地机器人可以根据地图标记,在移动过程中绕开第一障碍物和第二障碍物。In the disclosed embodiment, the sweeping robot can avoid the first obstacle and the second obstacle during movement according to the map markings.

对应于上述实施例的目标检测方法,本公开还提出了一种清洁设备的控制装置。Corresponding to the target detection method of the above embodiment, the present disclosure also proposes a control device for a cleaning device.

图13是本公开实施例的清洁设备的控制装置的结构示意图。FIG. 13 is a schematic diagram of the structure of a control device of a cleaning device according to an embodiment of the present disclosure.

如图13所示,清洁设备的控制装置500包括:获取模块501、检测模块502、分割模块503、构建模块504和控制模块505。As shown in FIG. 13, the control device 500 of the cleaning device includes: an acquisition module 501, a detection module 502, a segmentation module 503, a construction module 504, and a control module 505.

其中,获取模块501,用于获取待清洁区域图像;处理模块502,用于对待清洁区域图像进行图像分割和边缘处理;构建模块504,用于根据边缘处理结果和分割结果,构建待清洁区域的目标地图,目标地图包括障碍物信息;控制模块505,用于根据目标地图对清洁设备进行控制。Among them, the acquisition module 501 is used to acquire the image of the area to be cleaned; the processing module 502 is used to perform image segmentation and edge processing on the image of the area to be cleaned; the construction module 504 is used to construct a target map of the area to be cleaned according to the edge processing results and the segmentation results, and the target map includes obstacle information; the control module 505 is used to control the cleaning equipment according to the target map.

在本公开的一些实施例中,利用预先训练好的分割模型对所述待清洁区域图像进行图像分割,分割模型的训练过程包括:采集得到多个工作场景图像;将多个工作场景图像划分为训练集和测试集,并分别对训练集和测试集中的工作场景图像中的待检目标进行标注,其中,待检目标包括背景、地面,以及地面上的可越过障碍物;构建分割模型,并利用训练集及其对应的标注信息对分割模型进行预训练,以及利用测试集及其对应的标注信息对预训练好的分割模型进行测试,得到最终训练好的分割模型。In some embodiments of the present disclosure, a pre-trained segmentation model is used to perform image segmentation on the image of the area to be cleaned, and the training process of the segmentation model includes: acquiring a plurality of work scene images; dividing the plurality of work scene images into a training set and a test set, and respectively annotating the targets to be inspected in the work scene images in the training set and the test set, wherein the targets to be inspected include the background, the ground, and traversable obstacles on the ground; constructing a segmentation model, and pre-training the segmentation model using the training set and its corresponding annotation information, and testing the pre-trained segmentation model using the test set and its corresponding annotation information to obtain a final trained segmentation model.

在本公开的一些实施例中,根据边缘处理结果和分割结果,构建待清洁区域的目标地图,包括:对分割结果中的地面区域进行腐蚀处理,得到第一图像;根据边缘处理结果和第一图像,得到待清洁区域图像中背景与地面接触的边缘线;根据边缘线和分割结果中地面上的可越过障碍物,构建目标地图。In some embodiments of the present disclosure, a target map of the area to be cleaned is constructed based on edge processing results and segmentation results, including: performing corrosion processing on the ground area in the segmentation results to obtain a first image; obtaining edge lines where the background and the ground are in contact in the image of the area to be cleaned based on the edge processing results and the first image; and constructing a target map based on the edge lines and traversable obstacles on the ground in the segmentation results.
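The erosion of the ground region can be illustrated with a minimal 3×3 binary erosion. This is a simplified stand-in for the morphological operation an image library would normally provide; the function name and the sample mask are illustrative.

```python
def erode(mask):
    """3x3 binary erosion: a ground cell survives only if its entire 3x3
    neighbourhood is ground, so the ground region shrinks by one cell on
    every side (suppressing thin segmentation noise at the boundary)."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ok = True
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w) or mask[ny][nx] == 0:
                        ok = False
            out[y][x] = 1 if ok else 0
    return out

# A 5x5 all-ground mask erodes to its 3x3 interior.
eroded = erode([[1] * 5 for _ in range(5)])
```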

在本公开的一些实施例中,待清洁区域图像是利用安装于清洁设备上的图像采集元件采集得到的,根据边缘处理结果和第一图像,得到待清洁区域图像中背景与地面接触的边缘线,包括:根据边缘处理结果中背景与地面的边缘,按照图像采集元件的视线由近及远在第一图像中确定出边缘线。In some embodiments of the present disclosure, the image of the area to be cleaned is acquired by using an image acquisition element installed on the cleaning equipment, and the edge line where the background and the ground meet in the image of the area to be cleaned is obtained based on the edge processing result and the first image, including: based on the edge of the background and the ground in the edge processing result, the edge line is determined in the first image from near to far according to the line of sight of the image acquisition element.
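The near-to-far scan can be sketched as follows. In an image from a robot-mounted camera the bottom rows are nearest, so each column is walked upward from the bottom until the first non-ground pixel, giving the background/ground contact edge. This is a minimal sketch with illustrative names, not the disclosure's implementation.

```python
def edge_line(ground_mask):
    """Walk each column from the bottom row (nearest the camera) upward;
    the first non-ground row is the background/ground contact edge."""
    h, w = len(ground_mask), len(ground_mask[0])
    edge = []
    for x in range(w):
        y = h - 1
        while y >= 0 and ground_mask[y][x] == 1:
            y -= 1
        edge.append(y)  # -1 would mean ground fills the whole column
    return edge

# Rows 2-3 are ground (near the camera); rows 0-1 are background.
mask = [[0, 0, 0],
        [0, 0, 0],
        [1, 1, 1],
        [1, 1, 1]]
```

Everything below the returned edge rows forms the passable region used when building the target map.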

在本公开的一些实施例中,根据边缘线和分割结果中地面上的可越过障碍物,构建目标地图,包括:根据边缘线确定可通行区域,并根据可通行区域和分割结果中地面上的可越过障碍物,得到可通行区域中可越过障碍物的类别和位置;根据可通行区域中可越过障碍物的类别和位置,构建目标地图。In some embodiments of the present disclosure, a target map is constructed based on edge lines and traversable obstacles on the ground in segmentation results, including: determining a traversable area based on the edge lines, and obtaining the categories and positions of the traversable obstacles in the traversable area based on the traversable area and the traversable obstacles on the ground in the segmentation results; and constructing a target map based on the categories and positions of the traversable obstacles in the traversable area.

在本公开的一些实施例中,根据目标地图对清洁设备进行控制,包括:根据目标地图更新全局地图;根据更新后的全局地图确定目标清洁区域和目标清洁策略;按照目标清洁策略控制清洁设备对目标清洁区域进行清洁。In some embodiments of the present disclosure, controlling a cleaning device according to a target map includes: updating a global map according to the target map; determining a target cleaning area and a target cleaning strategy according to the updated global map; and controlling the cleaning device to clean the target cleaning area according to the target cleaning strategy.

需要说明的是,本公开实施例的清洁设备的控制装置500的其他具体实施方式,可参见本公开上述实施例的清洁设备的控制方法的具体实施方式。It should be noted that, for other specific implementations of the control device 500 of the cleaning equipment in the embodiment of the present disclosure, reference may be made to the specific implementations of the control method of the cleaning equipment in the above-mentioned embodiment of the present disclosure.

基于上述实施例的清洁设备的控制方法,本公开还提出了一种计算机可读存储介质。Based on the control method of the cleaning device in the above embodiment, the present disclosure also proposes a computer-readable storage medium.

在该实施例中,其上存储有计算机程序,计算机程序被处理器执行时,实现上述的清洁设备的控制方法。In this embodiment, a computer program is stored thereon, and when the computer program is executed by the processor, the above-mentioned control method of the cleaning device is implemented.

基于上述实施例的清洁设备的控制方法,本公开还提出了一种控制器。Based on the control method of the cleaning device in the above embodiment, the present disclosure also proposes a controller.

图14是本公开实施例的控制器的结构框图。FIG. 14 is a structural block diagram of a controller according to an embodiment of the present disclosure.

如图14所示,控制器600包括处理器601、存储器603和存储在存储器上的计算机程序,计算机程序被处理器执行时,实现上述的清洁设备的控制方法。As shown in FIG. 14, the controller 600 includes a processor 601, a memory 603 and a computer program stored in the memory; when the computer program is executed by the processor, the control method of the cleaning device described above is implemented.

处理器601可以是CPU(Central Processing Unit,中央处理器),通用处理器,DSP(Digital Signal Processor,数字信号处理器),ASIC(Application Specific Integrated Circuit,专用集成电路),FPGA(Field Programmable Gate Array,现场可编程门阵列)或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本公开公开内容所描述的各种示例性的逻辑方框、模块和电路。处理器601也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,DSP和微处理器的组合等。Processor 601 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic devices, transistor logic devices, hardware components or any combination thereof. It may implement or execute various exemplary logic blocks, modules and circuits described in conjunction with the disclosure of the present invention. Processor 601 may also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of a DSP and a microprocessor, etc.

总线602可包括一通路,在上述组件之间传送信息。总线602可以是PCI(Peripheral Component Interconnect,外设部件互连标准)总线或EISA(Extended Industry Standard Architecture,扩展工业标准结构)总线等。总线602可以分为地址总线、数据总线、控制总线等。为便于表示,图14中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。 The bus 602 may include a path for transmitting information between the above components. The bus 602 may be a PCI (Peripheral Component Interconnect) bus or an EISA (Extended Industry Standard Architecture) bus, etc. The bus 602 may be divided into an address bus, a data bus, a control bus, etc. For ease of representation, FIG. 14 only uses one thick line, but does not mean that there is only one bus or one type of bus.

存储器603用于存储与本公开上述实施例的清洁设备的控制方法对应的计算机程序,该计算机程序由处理器601来控制执行。处理器601用于执行存储器603中存储的计算机程序,以实现前述方法实施例所示的内容。The memory 603 is used to store a computer program corresponding to the control method of the cleaning device of the above embodiment of the present disclosure, and the computer program is controlled and executed by the processor 601. The processor 601 is used to execute the computer program stored in the memory 603 to implement the contents shown in the above method embodiment.

The controller 600 includes, but is not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), and PADs (tablet computers), as well as fixed terminals such as desktop computers. The controller 600 shown in FIG. 14 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.

FIG. 15 is a structural block diagram of a cleaning device according to an embodiment of the present disclosure.

As shown in FIG. 15, a cleaning device 700 includes the controller 600 of the above embodiments.

In summary, the method, apparatus, storage medium, controller, and device for controlling a cleaning device of the embodiments of the present disclosure perform image segmentation and edge processing on an image of the area to be cleaned acquired by a monocular image acquisition element, detect open ground areas and key obstacles (such as stains and carpets) in the working scene by combining the segmentation result and the edge processing result, and construct a target map, so that the cleaning device can be controlled according to the target map. As a result, the working efficiency and accuracy of the automatic cleaning device in complex environments are improved.
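As an illustration only, the map-building steps summarized above (segment the scene, erode the floor region, search for the background/floor edge line from near to far, then mark traversable cells and traversable obstacles) can be sketched on a small label grid. The class labels, the 4-neighbour erosion, and the column-wise bottom-up scan are assumptions of this sketch, not the patented implementation.

```python
# Illustrative sketch of the target-map pipeline. Labels, the 4-neighbour
# erosion, and the bottom-up (near-to-far) edge scan are assumptions of
# this sketch, not the patented implementation.

BACKGROUND, FLOOR, CARPET = 0, 1, 2  # assumed segmentation labels

def erode_floor(mask):
    """Erode FLOOR cells that have a BACKGROUND 4-neighbour."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for y in range(h):
        for x in range(w):
            if mask[y][x] != FLOOR:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] == BACKGROUND:
                    out[y][x] = BACKGROUND  # floor touching background is edge
                    break
    return out

def edge_line(mask):
    """Per column, first BACKGROUND row seen from the bottom (near) upward (far)."""
    h, w = len(mask), len(mask[0])
    line = []
    for x in range(w):
        y = h - 1
        while y >= 0 and mask[y][x] != BACKGROUND:
            y -= 1
        line.append(y)  # row index of the background/floor boundary
    return line

def build_target_map(mask):
    """'#' blocked, '.' traversable floor, 'c' traversable obstacle (carpet)."""
    eroded = erode_floor(mask)
    line = edge_line(eroded)
    h, w = len(mask), len(mask[0])
    grid = [['#'] * w for _ in range(h)]
    for x in range(w):
        for y in range(line[x] + 1, h):
            grid[y][x] = 'c' if mask[y][x] == CARPET else '.'
    return grid
```

Everything below each column's edge line is treated as traversable, with carpet cells kept as a distinct "traversable obstacle" class so that the downstream cleaning strategy can handle them differently.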

It should be noted that the logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered a sequenced list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.

It should be understood that the various parts of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one of the following technologies known in the art, or a combination thereof: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.

In the description of this specification, a description with reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.

In the description of the present disclosure, it should be understood that terms indicating orientations or positional relationships, such as "center", "longitudinal", "lateral", "length", "width", "thickness", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial", and "circumferential", are based on the orientations or positional relationships shown in the accompanying drawings, and are used only for convenience of describing the present disclosure and simplifying the description, rather than indicating or implying that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation; they therefore should not be understood as limiting the present disclosure.

In addition, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "plurality" means at least two, for example two or three, unless otherwise clearly and specifically defined.

In the present disclosure, unless otherwise clearly specified and limited, terms such as "mounted", "connected", "coupled", and "fixed" should be understood in a broad sense; for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection or an indirect connection through an intermediate medium; and it may be internal communication between two elements or an interaction between two elements, unless otherwise clearly defined. For a person of ordinary skill in the art, the specific meanings of the above terms in the present disclosure can be understood according to the specific circumstances.

In the present disclosure, unless otherwise clearly specified and limited, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that the first and second features are in indirect contact through an intermediate medium. Moreover, a first feature being "on", "above", or "on top of" a second feature may mean that the first feature is directly above or obliquely above the second feature, or simply that the first feature is at a higher level than the second feature. A first feature being "under", "below", or "beneath" a second feature may mean that the first feature is directly below or obliquely below the second feature, or simply that the first feature is at a lower level than the second feature.

Although embodiments of the present disclosure have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present disclosure; a person of ordinary skill in the art may make changes, modifications, replacements, and variations to the above embodiments within the scope of the present disclosure.

Claims (27)

1. A method for controlling a cleaning device, comprising:
obtaining data of an area to be cleaned;
processing the data of the area to be cleaned;
constructing, according to a data processing result, a target map of the area to be cleaned, the target map comprising obstacle information; and
controlling the cleaning device according to the target map.

2. The method according to claim 1, wherein the data of the area to be cleaned comprises an image of the area to be cleaned, and processing the data of the area to be cleaned comprises:
performing image segmentation and edge processing on the data of the area to be cleaned, so as to construct the target map of the area to be cleaned according to an edge processing result and a segmentation result.

3. The method according to claim 2, wherein the image of the area to be cleaned is segmented using a pre-trained segmentation model, and a training process of the segmentation model comprises:
acquiring a plurality of working scene images;
dividing the plurality of working scene images into a training set and a test set, and respectively annotating targets to be detected in the working scene images of the training set and the test set, wherein the targets to be detected comprise a background, a ground surface, and traversable obstacles on the ground; and
constructing a segmentation model, pre-training the segmentation model using the training set and its corresponding annotation information, and testing the pre-trained segmentation model using the test set and its corresponding annotation information to obtain a finally trained segmentation model.

4. The method according to claim 3, wherein constructing the target map of the area to be cleaned according to the segmentation result and the edge processing result comprises:
performing erosion processing on a ground area in the segmentation result to obtain a first image;
obtaining, according to the edge processing result and the first image, an edge line where the background and the ground are in contact in the image of the area to be cleaned; and
constructing the target map according to the edge line and the traversable obstacles on the ground in the segmentation result.

5. The method according to claim 4, wherein the image of the area to be cleaned is acquired by an image acquisition element mounted on the cleaning device, and obtaining, according to the edge processing result and the first image, the edge line where the background and the ground are in contact in the image of the area to be cleaned comprises:
removing, according to the first image, edges located in the ground area from the edge processing result; and
determining the edge line in the edge processing result after the removal, from near to far along the line of sight of the image acquisition element.

6. The method according to claim 4, wherein constructing the target map according to the edge line and the traversable obstacles on the ground in the segmentation result comprises:
determining a traversable area according to the edge line, and obtaining categories and positions of traversable obstacles in the traversable area according to the traversable area and the traversable obstacles on the ground in the segmentation result; and
constructing the target map according to the categories and positions of the traversable obstacles in the traversable area.

7. The method according to claim 1, wherein controlling the cleaning device according to the target map comprises:
updating a global map according to the target map;
determining a target cleaning area and a target cleaning strategy according to the updated global map; and
controlling the cleaning device to clean the target cleaning area according to the target cleaning strategy.

8. The method according to claim 1, wherein the cleaning device comprises a mobile device, and controlling the cleaning device according to the target map comprises:
determining a first obstacle avoidance trajectory of the mobile device;
if the first obstacle avoidance trajectory satisfies an update condition, updating the first obstacle avoidance trajectory to obtain a second obstacle avoidance trajectory, wherein a second obstacle included in the second obstacle avoidance trajectory has object features different from those of a first obstacle included in the first obstacle avoidance trajectory; and
controlling the mobile device to move along the second obstacle avoidance trajectory, so that the mobile device avoids the first obstacle and the second obstacle during movement.

9. The method according to claim 8, wherein updating the first obstacle avoidance trajectory to obtain the second obstacle avoidance trajectory if the first obstacle avoidance trajectory satisfies the update condition comprises:
controlling the mobile device to move along the first obstacle avoidance trajectory so that the mobile device avoids the first obstacle during movement; and
if a second obstacle exists while the mobile device moves along the first obstacle avoidance trajectory, determining that the first obstacle avoidance trajectory satisfies the update condition, and updating the first obstacle avoidance trajectory based on the second obstacle to obtain the second obstacle avoidance trajectory.

10. The method according to claim 8, wherein the first obstacle comprises an obstacle determined based on first sensing data of a first sensor, and/or the second obstacle comprises an obstacle determined based on second sensing data of a second sensor.

11. The method according to claim 9, wherein, when controlling the mobile device to move along the first obstacle avoidance trajectory, the method further comprises:
obtaining obstacle avoidance data of the mobile device while the mobile device moves along the first obstacle avoidance trajectory; and
if the obstacle avoidance data satisfies an obstacle avoidance condition, determining that an obstacle avoidance object corresponding to the obstacle avoidance data is the second obstacle.

12. The method according to claim 11, wherein the obstacle avoidance data comprises pose data, and the obstacle avoidance data satisfying the obstacle avoidance condition comprises:
determining, based on the pose data, a yaw trajectory of the mobile device while moving along the first obstacle avoidance trajectory; and
if the yaw trajectory satisfies a trajectory feature, determining that the obstacle avoidance data satisfies the obstacle avoidance condition.

13. The method according to claim 11, wherein the obstacle avoidance data comprises collision data, and the obstacle avoidance data satisfying the obstacle avoidance condition comprises:
determining, based on the collision data, a collision angle and a collision intensity of the mobile device while moving along the first obstacle avoidance trajectory; and
if the collision angle satisfies a preset angle and the collision intensity satisfies a preset intensity, determining that the obstacle avoidance data satisfies the obstacle avoidance condition.

14. The method according to claim 11, wherein the obstacle avoidance data comprises an area and a contour corresponding to an obstacle, and the obstacle avoidance data satisfying the obstacle avoidance condition comprises:
if the area corresponding to the obstacle satisfies a preset area and the contour satisfies a preset shape, determining that the obstacle avoidance data satisfies the obstacle avoidance condition.

15. The method according to claim 14, wherein, when controlling the mobile device to move along the first obstacle avoidance trajectory, the method further comprises:
obtaining an area corresponding to an obstacle avoided by the mobile device while moving along the first obstacle avoidance trajectory; and
if the area is smaller than a preset threshold, determining that a second obstacle exists.

16. The method according to claim 8, wherein determining the first obstacle avoidance trajectory of the mobile device comprises:
obtaining first obstacle information; and
constructing an environment map based on the first obstacle information to generate the first obstacle avoidance trajectory.

17. The method according to claim 8, wherein updating the first obstacle avoidance trajectory to obtain the second obstacle avoidance trajectory if the first obstacle avoidance trajectory satisfies the update condition comprises:
displaying the first obstacle avoidance trajectory on a human-computer interaction interface;
while the mobile device moves along the first obstacle avoidance trajectory, receiving, on the human-computer interaction interface, the second obstacle marked with respect to the first obstacle avoidance trajectory, and determining that the first obstacle avoidance trajectory satisfies the update condition; and
updating the first obstacle avoidance trajectory based on the second obstacle to obtain the second obstacle avoidance trajectory, and displaying the second obstacle avoidance trajectory on the human-computer interaction interface.

18. The method according to claim 8, further comprising:
obtaining obstacle information while the mobile device moves according to a first map, wherein the first map is the target map; and
updating obstacles on the first map according to the obstacle information, and displaying the updated obstacles.

19. The method according to claim 18, wherein the updated obstacles comprise the first obstacle before updating and a newly added second obstacle, the second obstacle having object features different from those of the first obstacle.

20. The method according to claim 18, wherein updating the obstacles on the first map according to the obstacle information comprises:
if the obstacle information indicates that a second obstacle exists while the mobile device moves according to the first map, updating the obstacles on the first map.

21. The method according to claim 19, wherein the first obstacle comprises an obstacle determined based on first sensing data of a first sensor, and/or the second obstacle comprises an obstacle determined based on second sensing data of a second sensor.

22. The method according to claim 18, wherein obtaining the obstacle information while the mobile device moves according to the first map comprises:
displaying the first obstacle on a human-computer interaction interface; and
while the mobile device moves according to the first map, receiving a marked second obstacle on the human-computer interaction interface to obtain the obstacle information.

23. The method according to any one of claims 19 to 22, wherein, after updating the obstacles on the first map according to the obstacle information, the method comprises:
controlling the mobile device to avoid the first obstacle and the second obstacle during movement.

24. A control apparatus for a cleaning device, comprising:
an obtaining module, configured to obtain data of an area to be cleaned;
a processing module, configured to process the data of the area to be cleaned;
a construction module, configured to construct, according to a data processing result, a target map of the area to be cleaned, the target map comprising obstacle information; and
a control module, configured to control the cleaning device according to the target map.

25. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method for controlling a cleaning device according to any one of claims 1 to 23.

26. A controller, comprising a memory, a processor, and a computer program stored on the memory, wherein the computer program, when executed by the processor, implements the method for controlling a cleaning device according to any one of claims 1 to 23.

27. A cleaning device, comprising the controller according to claim 26.
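Purely as an illustration of the obstacle-avoidance update conditions claimed above (a yaw trajectory deviating from the plan, a collision whose angle and intensity meet presets, or an avoided area below a preset threshold), the decision of whether a second, unmapped obstacle exists can be sketched as a predicate. All field names and thresholds here are hypothetical, not values from the disclosure.

```python
# Hypothetical sketch: infer a "second obstacle" from avoidance data per
# the claimed conditions. Field names and thresholds are assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AvoidanceData:
    yaw_deviation_deg: float = 0.0               # deviation from planned heading
    collision_angle_deg: Optional[float] = None  # angle of a bump event
    collision_strength: Optional[float] = None   # normalized bump intensity
    avoided_area_m2: Optional[float] = None      # footprint of the avoided object

def is_second_obstacle(d: AvoidanceData,
                       max_yaw_deg: float = 15.0,
                       angle_range=(30.0, 150.0),
                       min_strength: float = 0.5,
                       min_area_m2: float = 0.01) -> bool:
    # Condition 1: the yaw trajectory deviates beyond tolerance (claim 12).
    if abs(d.yaw_deviation_deg) > max_yaw_deg:
        return True
    # Condition 2: a collision with preset angle and intensity (claim 13).
    if d.collision_angle_deg is not None and d.collision_strength is not None:
        lo, hi = angle_range
        if lo <= d.collision_angle_deg <= hi and d.collision_strength >= min_strength:
            return True
    # Condition 3: the avoided area is smaller than a preset threshold,
    # suggesting an object the first sensor did not fully map (claim 15).
    if d.avoided_area_m2 is not None and d.avoided_area_m2 < min_area_m2:
        return True
    return False
```

When the predicate fires, the first obstacle avoidance trajectory would be re-planned around the inferred second obstacle, as recited in claims 8 and 9.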
PCT/CN2024/121422 2023-09-28 2024-09-26 Cleaning device control method and apparatus, storage medium, controller, and device Pending WO2025067332A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202311278568.5 2023-09-28
CN202311278568.5A CN119758985A (en) 2023-09-28 2023-09-28 A method, device, equipment and storage medium for setting obstacle avoidance trajectory
CN202410322675.1 2024-03-20
CN202410322675.1A CN118924195A (en) 2024-03-20 2024-03-20 Control method and device of cleaning equipment, storage medium, controller and equipment

Publications (1)

Publication Number Publication Date
WO2025067332A1 (en)

Family

ID=95204367

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/121422 Pending WO2025067332A1 (en) 2023-09-28 2024-09-26 Cleaning device control method and apparatus, storage medium, controller, and device

Country Status (1)

Country Link
WO (1) WO2025067332A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170097643A1 (en) * 2014-11-26 2017-04-06 Irobot Corporation Systems and Methods for Performing Simultaneous Localization and Mapping using Machine Vision Systems
CN108344414A (en) * 2017-12-29 2018-07-31 中兴通讯股份有限公司 A kind of map structuring, air navigation aid and device, system
CN110522359A (en) * 2019-09-03 2019-12-03 深圳飞科机器人有限公司 Cleaning robot and control method of cleaning robot
CN110968083A (en) * 2018-09-30 2020-04-07 科沃斯机器人股份有限公司 Construction method, obstacle avoidance method, equipment and medium of grid map
CN114445440A (en) * 2020-11-03 2022-05-06 苏州科瓴精密机械科技有限公司 Obstacle identification method applied to self-walking equipment and self-walking equipment
CN114903384A (en) * 2022-06-13 2022-08-16 苏州澜途科技有限公司 Method and device for segmentation of working scene map area of cleaning robot
CN115381354A (en) * 2022-07-28 2022-11-25 广州宝乐软件科技有限公司 Obstacle avoidance method and obstacle avoidance device for cleaning robot, storage medium and equipment


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24870872

Country of ref document: EP

Kind code of ref document: A1