CN116000924B - Robot control method, device, robot and computer-readable storage medium - Google Patents
- Publication number: CN116000924B (application CN202211679840.6A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
- Classification: Y02P90/02 (Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS])
Abstract
The application relates to the technical field of robots and provides a robot control method, a robot control device, a robot, and a computer-readable storage medium. When the robot is in a trapped state, a radar installed on the robot acquires a local point cloud map of the current trapped region; edge navigation is performed according to the local point cloud map to control the robot to move along edges within the trapped region; and if an escape path is detected during the edge navigation, the robot is controlled to move out of the trapped region along the escape path. By this method, the escape efficiency and success rate of the robot can be effectively improved, and the robot's degree of intelligence is increased.
Description
Technical Field
The application belongs to the technical field of robots, and particularly relates to a robot control method, a robot control device, a robot and a computer readable storage medium.
Background
With the development of intelligent control technology, robots are used increasingly widely. Robots that execute area-coverage tasks, such as sweeping robots, disinfection robots, and mowing robots, can become trapped in a small region during task execution and be unable to leave. For example, a home environment contains many narrow areas, such as the gap between the four legs of a stool or the opening under a tea table. When a robot enters such a narrow area because of factors such as accumulated localization error or the low resolution of its collision and infrared sensors, it is likely, when trying to leave through the original entrance, to collide with corners and remain trapped in the narrow area.
Currently, when a robot encounters this situation, it usually sends an alarm to prompt the user to free it. This escape mode has a low degree of intelligence and gives a poor user experience.
Disclosure of Invention
The embodiments of the present application provide a robot control method, a robot control device, a robot, and a computer-readable storage medium, which can effectively improve the degree of intelligence of the robot.
In a first aspect, an embodiment of the present application provides a robot control method, including:
when the robot is in a trapped state, acquiring a local point cloud map of the current trapped region through a radar installed on the robot;
performing edge navigation according to the local point cloud map to control the robot to move along edges within the trapped region; and
if an escape path is detected during the edge navigation, controlling the robot to move out of the trapped region along the escape path.
According to the embodiments of the application, when the robot is in a trapped state, edge navigation is performed according to the local point cloud map of the trapped region. Because the local point cloud map is acquired through the radar installed on the robot, it reflects the environment of the trapped region more clearly and in more detail than the global map used during normal operation, providing a reliable data basis for subsequent control. By controlling the robot to move along edges within the trapped region through edge navigation, the robot can avoid obstacles more easily and find an escape path more quickly. In this way, the escape efficiency and success rate of the robot are effectively improved, and its degree of intelligence is greatly increased.
In a possible implementation manner of the first aspect, the performing edge navigation according to the local point cloud map includes:
for each control process of the edge navigation, generating a polar coordinate map according to the local point cloud map, wherein the polar coordinate map comprises a plurality of polar coordinate points and its polar coordinate system takes the current position of the robot as the origin;
searching for a first control point in the polar coordinate map;
determining a first target point according to the first control point and the current pose of the robot; and
controlling the robot to move to the first target point.
In a possible implementation manner of the first aspect, the searching for the first control point in the polar coordinate map includes:
determining a first search range in the polar coordinate map according to the current pose of the robot;
sequentially traversing the polar coordinate points in the first search range;
if two polar coordinate points meeting a preset condition exist in the first search range, determining the later-traversed of the two as the first control point, wherein the preset condition is that the distance between the two polar coordinate points is larger than a first preset value or the included angle between them is larger than a first preset angle;
if no two polar coordinate points in the first search range meet the preset condition, and the maximum distance value among the polar coordinate points in the first search range is larger than a second preset value, determining the polar coordinate point corresponding to that maximum distance value as the first control point; and
if no two polar coordinate points in the first search range meet the preset condition and the maximum distance value among the polar coordinate points in the first search range is smaller than the second preset value, determining the last-traversed polar coordinate point in the first search range as the first control point.
According to this edge navigation method, the point cloud data detected by the radar are converted into polar coordinate points, and the local control points and local target points are found step by step by traversing the polar coordinate points within a search range, thereby realizing edge control of the robot. In addition, because the sampling points for edge control are radar point clouds, the accuracy of edge control can be effectively improved.
In a possible implementation manner of the first aspect, the method further includes:
if an obstacle is detected in the process of the edge navigation, acquiring point cloud data of the obstacle;
and adding the point cloud data to the local point cloud map.
In a possible implementation manner of the first aspect, the method further includes:
every preset period, calculating the average position of the robot within a first preset time period before the current moment;
calculating the farthest distance between the robot's positions within the first preset time period and the average position; and
if the farthest distance is smaller than a third preset value, determining that the robot is in a trapped state.
In a possible implementation manner of the first aspect, the method further includes:
stopping the edge navigation if a first escape condition is detected to be met during the edge navigation, wherein the first escape condition is that the distance between the robot's current position and the average position is larger than the farthest distance.
In a possible implementation manner of the first aspect, the method further includes:
when the robot is in a trapped state, determining the trapped region according to the track of the robot within a second preset time period before the current moment; and
stopping the edge navigation if a second escape condition is detected to be met during the edge navigation, wherein the second escape condition is that the current position of the robot is outside the trapped region.
In a possible implementation manner of the first aspect, the track of the robot includes a plurality of track points;
and when the robot is in a trapped state, determining the trapped region according to the track of the robot within a second preset time period before the current moment includes:
when the robot is in a trapped state, calculating the track center point of the robot within the second preset time period before the current moment;
traversing the plurality of track points of the robot within the second preset time period;
for any track point, deleting the track point if both the distance between its previous traversal point and the track center point and the distance between its next traversal point and the track center point are larger than the distance between the track point itself and the track center point; and
determining the trapped region according to the track points remaining after the traversal.
In a second aspect, an embodiment of the present application provides a robot control device, including:
an acquisition unit, configured to acquire a local point cloud map of the current trapped region through a radar installed on the robot when the robot is in a trapped state;
a navigation unit, configured to perform edge navigation according to the local point cloud map so as to control the robot to move along edges within the trapped region; and
a control unit, configured to control the robot to move out of the trapped region along an escape path if the escape path is detected during the edge navigation.
In a third aspect, an embodiment of the present application provides a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the robot control method according to any one of the first aspects when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program which, when executed by a processor, implements a robot control method as in any one of the first aspects above.
In a fifth aspect, an embodiment of the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to perform the robot control method according to any one of the first aspects described above.
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and that a person skilled in the art could obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a convex polygon provided by an embodiment of the present application;
Fig. 2 is a schematic flow chart of a robot control method provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of searching for a control point provided by an embodiment of the present application;
Fig. 4 is a structural block diagram of a robot control device provided by an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a robot provided by an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may be interpreted, depending on the context, as "when" or "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, to mean "upon determining" or "in response to determining" or "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise.
With the development of intelligent control technology, robots are used increasingly widely. Robots that execute area-coverage tasks, such as sweeping robots, disinfection robots, and mowing robots, can become trapped in a small region during task execution and be unable to leave. For example, a home environment contains many narrow areas, such as the gap between the four legs of a stool or the opening under a tea table. When a robot enters such a narrow area because of factors such as accumulated localization error or the low resolution of its collision and infrared sensors, it is likely, when trying to leave through the original entrance, to collide with corners and remain trapped in the narrow area.
Currently, when a robot encounters this situation, it usually sends an alarm to prompt the user to free it. This escape mode has a low degree of intelligence and gives a poor user experience.
In order to solve the above problems, embodiments of the present application provide a robot control method, apparatus, robot, and computer-readable storage medium. In the embodiments of the application, whether the robot is in a trapped state is detected in real time, and when the robot is detected to be trapped, it is controlled to move along edges according to a local point cloud map of the trapped region acquired by a radar installed on the robot. By the method provided by the embodiments of the application, the escape efficiency and success rate of the robot can be effectively improved, and its degree of intelligence is greatly increased.
The robot control method provided by the embodiments of the application comprises two parts: trapped-state detection and escape control. First, the methods for detecting that the robot is trapped are described.
In the first case, the trapped-state detection method may include:
during task execution, calculating, every preset period, the average position of the robot within a first preset time period before the current moment, and calculating the farthest distance between the robot's positions within that time period and the average position. If the farthest distance is smaller than a third preset value, the robot is determined to be in a trapped state; if the farthest distance is larger than or equal to the third preset value, the robot is determined not to be in a trapped state.
Illustratively, during task execution, the positioning coordinates of the robot in the global map are recorded every 2 s (the sampling period), generating a position list L2. Every 20 s (the preset period), the average position of the robot within the 1 min (the first preset time period) before the current time is calculated. For example, when the current time is 10:00, all positioning coordinates in the period 09:59-10:00 are obtained from L2 and averaged to give the average position; the distances between all positioning coordinates in that period and the average position are then calculated to find the farthest distance.
It should be noted that the data in the above examples are only examples and are not limiting. In practical applications, the sampling period, the preset period, the first preset time period, etc. can be adjusted as required.
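The average-position and farthest-distance check described above can be sketched as follows; the function name `is_trapped` and the 0.3 m threshold (standing in for the third preset value) are illustrative assumptions, not values from the patent:

```python
import math

def is_trapped(positions, threshold=0.3):
    """Trapped-state check sketch.

    positions: list of (x, y) fixes recorded over the first preset
    time period (e.g. the last 1 min of the position list).
    threshold: stand-in for the third preset value, in metres.
    """
    if not positions:
        return False
    # Average position over the window.
    ax = sum(p[0] for p in positions) / len(positions)
    ay = sum(p[1] for p in positions) / len(positions)
    # Farthest distance between any recorded position and the average.
    farthest = max(math.hypot(x - ax, y - ay) for x, y in positions)
    # The robot barely moved away from its mean position: likely trapped.
    return farthest < threshold
```

The same `farthest` value can then serve the first escape condition: once the robot's distance from the average position exceeds it, edge navigation stops.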
For the first case, the escape-condition detection method may include:
stopping the edge navigation if a first escape condition is detected to be met during the edge navigation, and continuing the edge navigation if it is not met, wherein the first escape condition is that the distance between the robot's current position and the average position is larger than the farthest distance.
In the second case, the robot performs edge navigation according to the global map while executing its task. In this case, trapped-state detection may be performed by determining that the robot is in a trapped state if the navigation path of the edge navigation forms a closed area.
For the second case, the escape detection method may include:
stopping the edge navigation if a second escape condition is detected to be met during the edge navigation, wherein the second escape condition is that the current position of the robot is outside the trapped region.
In the embodiment of the application, the premise of performing escape detection for the second case is that, when the robot is in a trapped state, the trapped region is determined according to the track of the robot within a second preset time period before the current moment. The robot's track includes a plurality of track points.
In some embodiments, the trapped region may be described by a convex polygon. The method for determining the trapped region includes: when the robot is in a trapped state, calculating the track center point of the robot within the second preset time period before the current moment; traversing the plurality of track points of the robot within the second preset time period; for any track point, deleting the track point if both the distance between its previous traversal point and the track center point and the distance between its next traversal point and the track center point are larger than the distance between the track point itself and the track center point; and determining the trapped region according to the track points remaining after the traversal.
In the embodiment of the application, the track center point may be an average value of a plurality of track points.
Optionally, after the track center point is calculated, a screening process may also be performed on the track points. In one implementation, the polar coordinates of the track points, comprising the distance and the angle relative to the track center point, are calculated. For any two adjacent track points, if the difference between the angles in their polar coordinates is smaller than a preset angle value (such as 20 degrees), the track point with the later sampling time is deleted. After the screening process, the remaining track points are traversed.
For example, the relationship between the point cloud coordinates (x, y) of a track point and its polar coordinates (r, θ) is x = r·cos θ, y = r·sin θ, where r represents the distance between the track point and the track center point and θ represents the angle of the track point.
For example, referring to fig. 1, a schematic diagram of a convex polygon according to an embodiment of the present application: as shown in fig. 1(a), after the screening process there are 9 track points n1 to n9, and the track center point is o1. The traversal proceeds from n1 to n9. For point n9, the distance d1 between its previous traversal point n8 and o1 is larger than the distance d2 between n9 and o1, and the distance d3 between its next traversal point n1 and o1 is also larger than d2, so n9 is a concave point and is deleted. Note that the next traversal point of the last traversal point n9 is the first traversal point n1. As shown in fig. 1(b), the remaining traversal points form a convex polygon, and the region occupied by the convex polygon is taken as the trapped region.
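The concave-point deletion step can be sketched as follows. The name `prune_concave_points` is hypothetical, and the single pass with wrap-around neighbours mirrors the n1-n9 example, where the neighbour of the last point is the first point:

```python
import math

def prune_concave_points(points, center):
    """Single-pass concave-point removal sketch.

    A track point whose two traversal neighbours (circular) are both
    farther from the track center point than the point itself is
    treated as a concave point and dropped.
    """
    def dist(p):
        return math.hypot(p[0] - center[0], p[1] - center[1])

    kept = []
    n = len(points)
    for i, p in enumerate(points):
        prev_p = points[(i - 1) % n]  # previous traversal point (wraps around)
        next_p = points[(i + 1) % n]  # next traversal point (wraps around)
        if dist(prev_p) > dist(p) and dist(next_p) > dist(p):
            continue  # concave point: delete it
        kept.append(p)
    return kept
```

For the fig. 1 situation, only the one point strictly closer to o1 than both of its neighbours is dropped; the remaining points form the convex polygon.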
Optionally, after the convex polygon is obtained, its range may be further expanded, and the expanded range determined as the trapped region. Specifically, each track point on the convex polygon is moved outward by a preset distance along the angle of its polar coordinate. For example, for n1 in fig. 1(b), whose polar coordinate angle is th1, the point is moved 0.2 m along the th1 direction, i.e. the line segment o1n1 is extended by 0.2 m.
By this method, the trapped region is appropriately enlarged, which prevents the trapped range from being inaccurate due to calculation errors and detection data errors, ensures that the determined trapped region contains the actual trapped range as far as possible, and improves the detection accuracy of the trapped region.
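A minimal sketch of the expansion step, with the track center point as the polar origin; `inflate_polygon` is a hypothetical name and the 0.2 m default mirrors the example in the text:

```python
import math

def inflate_polygon(points, center, margin=0.2):
    """Expands each vertex of the convex polygon outward by `margin`
    metres along its polar angle from the track center point, so the
    trapped region covers the actual trapped range despite calculation
    and detection errors."""
    inflated = []
    for x, y in points:
        # Polar angle of the vertex relative to the track center point.
        theta = math.atan2(y - center[1], x - center[0])
        # Extend the center-to-vertex segment by `margin`.
        inflated.append((x + margin * math.cos(theta),
                         y + margin * math.sin(theta)))
    return inflated
```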
Based on the above two trapped-state detection methods, the method for controlling the robot to escape is described below. Referring to fig. 2, a schematic flow chart of a robot control method according to an embodiment of the present application, by way of example and not limitation, the robot control method may include the following steps:
S101, when the robot is in a trapped state, acquiring a local point cloud map of a current trapped region through a radar installed on the robot.
The radar may be a lidar, a line radar, or the like. It will be appreciated that if other sensors, such as a depth camera or an RGB camera, are mounted on the robot, the sensor data acquired by those sensors may also be projected into the local point cloud map.
In one mode, the robot rotates at a certain speed (for example, through 180 degrees), acquires one frame of radar data at certain angular intervals (for example, every 45 degrees), and then superimposes the acquired radar data to generate the local point cloud map.
Each point in the radar data represents the angle and distance of a point on the detected target relative to the radar center. During superposition, the point data in the radar data are converted into the map coordinate system to obtain the points of the local point cloud map. It should be noted that the conversion relationship between the radar data and the map coordinate system may be calibrated in advance.
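The conversion of one radar frame into map-frame points can be sketched as below. The pose layout and the assumption that the radar frame coincides with the robot frame are simplifications for illustration; a real system would first apply the pre-calibrated radar-to-robot extrinsic mentioned above:

```python
import math

def radar_to_map(scan, robot_pose):
    """Converts one frame of radar returns into map-frame points.

    scan: list of (angle, range) pairs in the radar frame.
    robot_pose: (x, y, yaw) of the robot in the map frame.
    Sketch only: assumes the radar frame coincides with the robot frame.
    """
    rx, ry, yaw = robot_pose
    points = []
    for angle, rng in scan:
        a = yaw + angle  # beam direction expressed in the map frame
        points.append((rx + rng * math.cos(a), ry + rng * math.sin(a)))
    return points
```

Superimposing several such frames (e.g. one every 45 degrees of rotation) yields the local point cloud map of the trapped region.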
S102, performing edge navigation according to the local point cloud map so as to control the robot to move along edges within the trapped region.
Optionally, if an obstacle is detected in the process of the edge navigation, acquiring point cloud data of the obstacle, and adding the point cloud data into the local point cloud map.
In the embodiment of the application, the point cloud data of the obstacle can be acquired through a radar on the robot or through other sensors (such as a camera and the like) on the robot.
The obstacle may be detected by a sensor on the robot, by the robot colliding with the obstacle, or when the elevation angle of the robot's chassis is larger than a certain angle threshold.
Adding the point cloud data to the local point cloud map specifically involves acquiring the conversion relationship between the sensor coordinate system corresponding to the point cloud data and the map coordinate system, and projecting the point cloud data into the map coordinate system according to that conversion relationship. For example, when the point cloud data are acquired by a camera, they are projected into the map coordinate system according to the conversion relationship between the camera coordinate system and the map coordinate system; when the point cloud data are acquired by the radar, they are projected according to the conversion relationship between the radar coordinate system and the map coordinate system.
In this way, the local point cloud map can be updated in real time according to the actual road conditions around the robot, providing a reliable data basis for the edge navigation.
Optionally, during the edge navigation, one frame of radar data is acquired every third preset time period and superimposed onto the local point cloud map. The superposition is performed in the same manner as adding the point cloud data to the local point cloud map, and is not described again here.
In this way, the local point cloud map can be updated in real time according to the actual road conditions around the robot, providing a reliable data basis for the edge navigation and thereby facilitating quick discovery of an escape path.
S103, if an escape path is detected during the edge navigation, controlling the robot to move out of the trapped region along the escape path.
In the embodiment of the application, during the edge navigation, whether an escape path exists may be detected according to the local point cloud map once every fourth preset time period. An escape path is a path whose width is larger than the width of the robot and which connects the trapped region to the non-trapped region.
In one mode, when an escape path is detected, the robot is controlled to back up a preset distance; sensor data for the road ahead of the robot (such as a captured image or radar data) are then acquired through a sensor on the robot, and the escape path is re-identified according to the sensor data. If the re-identification result shows that the escape path is still detected, the robot is controlled to move out of the trapped region along the escape path.
Re-identifying the escape path in this way can effectively avoid false identification of the escape path, reducing the robot's escape failure rate and improving its escape efficiency.
In some embodiments, escape detection may be performed once every fifth preset time period; if the result indicates that the robot meets the escape condition, the edge navigation is stopped, and otherwise the edge navigation continues.
The two escape-condition detection methods are described in the above embodiments and are not repeated here.
Through the embodiments of S101-S103, when the robot is in a trapped state, edge navigation is performed according to the local point cloud map of the trapped region. Because the local point cloud map is acquired through the radar installed on the robot, it reflects the environment of the trapped region more clearly and in more detail than the global map used during normal operation, providing a reliable data basis for subsequent control. By controlling the robot to move along edges within the trapped region through edge navigation, the robot can avoid obstacles more easily and find an escape path more quickly. In this way, the escape efficiency and success rate of the robot are effectively improved, and its degree of intelligence is greatly increased.
In some embodiments, the edge navigation process in S102 may include:
for each control process of the edge navigation, generating a polar coordinate map according to the local point cloud map, wherein the polar coordinate map comprises a plurality of polar coordinate points and its polar coordinate system takes the current position of the robot as the origin;
searching for a first control point in the polar coordinate map;
determining a first target point according to the first control point and the current pose of the robot; and
controlling the robot to move to the first target point.
In the embodiment of the application, one control process of the edge navigation is completed when the robot moves to the first target point. Each time the robot reaches a target point, the next control process begins, and the polar coordinate map is re-established with the robot's current position as the origin.
Optionally, if the number of control processes of the edge navigation reaches a preset number while no escape path has been detected, or the robot is still in a trapped state, the edge navigation is stopped and an alarm message is sent.
By limiting the number of attempts in this way, the robot can be effectively prevented from executing the edge navigation task indefinitely when it cannot escape by its own capability in an extreme environment.
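The overall control loop with the attempt limit might look like the following sketch. Here `robot` is a hypothetical interface, and every method name on it is an assumption used only for illustration:

```python
def edge_navigation(robot, max_attempts=50):
    """Edge-navigation loop sketch with a preset attempt limit.

    `robot` is a hypothetical interface exposing the operations
    described in the text; `max_attempts` stands in for the preset
    number of control processes.
    """
    for _ in range(max_attempts):
        polar_map = robot.build_polar_map()              # origin at current position
        control_point = robot.search_control_point(polar_map)
        target = robot.compute_target(control_point)     # from control point + pose
        robot.move_to(target)                            # one control process
        if robot.escape_path_detected():
            return True                                  # leave along the escape path
    robot.raise_alarm()  # attempt limit reached: stop and alert the user
    return False
```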
Optionally, searching for the first control point in the polar coordinate map may include the following steps:
determining a first search range in the polar coordinate map according to the current pose of the robot;
sequentially traversing the polar coordinate points in the first search range;
if two polar coordinate points meeting a preset condition exist in the first search range, determining the later-traversed of the two as the first control point, wherein the preset condition is that the distance between the two polar coordinate points is larger than a first preset value or the included angle between them is larger than a first preset angle;
if no two polar coordinate points in the first search range meet the preset condition, and the maximum distance value among the polar coordinate points in the first search range is larger than a second preset value, determining the polar coordinate point corresponding to that maximum distance value as the first control point; and
if no two polar coordinate points in the first search range meet the preset condition and the maximum distance value among the polar coordinate points in the first search range is smaller than the second preset value, determining the last-traversed polar coordinate point in the first search range as the first control point.
In this embodiment of the application, for the first control process of the edge navigation, the start point of the first search range is the polar coordinate point in the polar coordinate map whose angle differs least from the robot's current heading. For the n-th control process of the edge navigation (n > 1), the start point of the first search range is the control point determined in the (n-1)-th control process.
The first search range spans a preset angle, measured clockwise or counterclockwise from the polar angle corresponding to the start point.
For example, fig. 3 is a schematic diagram of searching for a control point according to an embodiment of the present application. As shown in fig. 3, the direction indicated by the solid arrow is the robot's current heading (i.e., the direction from o2 to o3). Assume that m9 is the polar coordinate point in the polar coordinate map whose angle differs least from the robot's current heading, and that the preset angle is 120 degrees. The polar angle of m1 is then obtained by rotating 120 degrees clockwise from the polar angle of m9, so the first search range contains 9 polar coordinate points in total, m1 to m9.
Assuming that the distance between m6 and m7 is greater than the first preset value, or that the included angle between them is greater than the first preset angle, m7 is determined as the first control point.
Assuming instead that the distance between every two adjacent points in m1-m9 is smaller than the first preset value and every included angle is smaller than the first preset angle, and that m6 has the largest polar radius: if the polar radius of m6 is larger than the second preset value, m6 is determined as the first control point, as shown in fig. 3.
If the distance between every two adjacent points in m1-m9 is smaller than the first preset value, every included angle is smaller than the first preset angle, m6 has the largest polar radius, and the polar radius of m6 is smaller than the second preset value, then m9 is determined as the first control point.
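The three selection rules illustrated above can be sketched as follows, assuming each polar coordinate point is an `(angle, radius)` pair listed in traversal order within the first search range; the threshold parameter names are placeholders for the first preset value, the first preset angle, and the second preset value:

```python
import math

def chord_distance(p, q):
    """Euclidean distance between two polar points given as (angle_rad, radius)."""
    ax, ay = p[1] * math.cos(p[0]), p[1] * math.sin(p[0])
    bx, by = q[1] * math.cos(q[0]), q[1] * math.sin(q[0])
    return math.hypot(ax - bx, ay - by)

def search_first_control_point(points, gap_thresh, angle_thresh, radius_thresh):
    """Apply the three rules to polar points listed in traversal order.

    Rule 1: the later of the first adjacent pair whose separation exceeds
            gap_thresh or whose included angle exceeds angle_thresh.
    Rule 2: otherwise, the point with the largest radius, if it exceeds
            radius_thresh.
    Rule 3: otherwise, the last traversed point.
    """
    for prev, curr in zip(points, points[1:]):
        gap = chord_distance(prev, curr)
        included_angle = abs(curr[0] - prev[0])
        if gap > gap_thresh or included_angle > angle_thresh:
            return curr                       # rule 1: later-traversed of the pair
    farthest = max(points, key=lambda p: p[1])
    if farthest[1] > radius_thresh:
        return farthest                       # rule 2: maximum radial distance
    return points[-1]                         # rule 3: last traversed point
```

Each call mirrors the m1-m9 example: a gap or angular jump between neighbours (an obstacle edge or opening) wins first, then a far-out point, then the end of the range.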
In this embodiment of the application, determining the first target point according to the first control point and the current pose of the robot means taking, as the first target point, the foot of the perpendicular from the first control point to the line along the robot's current heading, i.e., the point on that line closest to the first control point.
As shown in fig. 3, the line segment m6o3 is perpendicular to the line on which o2o3 lies, and the first target point is o3, where the direction from o2 to o3 is the robot's heading.
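The foot-of-perpendicular construction (finding o3 from m6 in fig. 3) is the orthogonal projection of the control point onto the robot's heading line; a minimal sketch in Cartesian coordinates, with the argument names assumed:

```python
import math

def project_onto_heading(control_xy, robot_xy, heading_rad):
    """Return the foot of the perpendicular from the control point to the line
    through the robot's position along its current heading, i.e. the point on
    that line closest to the control point (the first target point)."""
    ux, uy = math.cos(heading_rad), math.sin(heading_rad)      # unit heading vector
    dx, dy = control_xy[0] - robot_xy[0], control_xy[1] - robot_xy[1]
    t = dx * ux + dy * uy                                      # scalar projection
    return (robot_xy[0] + t * ux, robot_xy[1] + t * uy)
```

Moving to this projection rather than to the control point itself keeps the robot advancing along its heading while staying abreast of the boundary point.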
In this edge navigation method, the point cloud data detected by the radar are converted into polar coordinate points, and local control points and local target points are found step by step by traversing the polar coordinate points within a search range, thereby realizing edge-following control of the robot. In addition, because the sampling points used for edge-following control are radar point clouds, the accuracy of edge-following control can be effectively improved.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the robot control method described in the above embodiments, fig. 4 is a block diagram of the robot control device according to the embodiment of the present application, and for convenience of explanation, only the portions related to the embodiment of the present application are shown.
Referring to fig. 4, the apparatus includes:
an obtaining unit 41, configured to obtain, when the robot is in a trapped state, a local point cloud map of the current trapped area through a radar installed on the robot;
a navigation unit 42, configured to perform edge navigation according to the local point cloud map, so as to control the robot to move along edges within the trapped area; and
a control unit 43, configured to control the robot to move out of the trapped area along an escape path if the escape path is detected during the edge navigation.
Optionally, the navigation unit 42 is further configured to:
generating, for each control process of the edge navigation, a polar coordinate map from the local point cloud map, wherein the polar coordinate map comprises a plurality of polar coordinate points and the polar coordinate system of the polar coordinate map takes the current position of the robot as its origin;
searching for a first control point in the polar coordinate map;
determining a first target point according to the first control point and the current pose of the robot; and
controlling the robot to move to the first target point.
Optionally, the navigation unit 42 is further configured to:
determining a first search range in the polar coordinate map according to the current pose of the robot;
sequentially traversing the polar coordinate points in the first search range;
if two polar coordinate points meeting a preset condition exist in the first search range, determining the later-traversed of those two polar coordinate points as the first control point, wherein two polar coordinate points meet the preset condition when the distance between them is larger than a first preset value or the included angle between them is larger than a first preset angle;
if no two polar coordinate points in the first search range meet the preset condition, and the maximum distance value among the polar coordinate points in the first search range is larger than a second preset value, determining the polar coordinate point corresponding to that maximum distance value as the first control point; and
if no two polar coordinate points in the first search range meet the preset condition, and the maximum distance value among the polar coordinate points in the first search range is smaller than the second preset value, determining the last-traversed polar coordinate point in the first search range as the first control point.
Optionally, the obtaining unit 41 is further configured to:
if an obstacle is detected during the edge navigation, acquire point cloud data of the obstacle; and
add the point cloud data to the local point cloud map.
Optionally, the apparatus 4 further comprises:
The detecting unit 44 is configured to, every preset period, calculate the average position of the robot over a first preset time period before the current time, calculate the farthest distance between any position of the robot within that time period and the average position, and determine that the robot is in a trapped state if the farthest distance is smaller than a third preset value.
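The trapped-state test can be sketched directly from this description: average the sampled positions over the window, then compare the farthest excursion from that average with the third preset value. How positions are sampled, and the window itself, are assumed to be handled elsewhere:

```python
import math

def is_trapped(positions, third_preset_value):
    """positions: [(x, y), ...] sampled over the first preset time period.
    The robot is judged trapped when even its farthest sample stays within
    the threshold distance of the average position."""
    n = len(positions)
    avg_x = sum(x for x, _ in positions) / n
    avg_y = sum(y for _, y in positions) / n
    farthest = max(math.hypot(x - avg_x, y - avg_y) for x, y in positions)
    return farthest < third_preset_value
```

Intuitively, a trapped robot circles a small neighbourhood, so every sample stays close to the window's mean position.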
Optionally, the control unit 43 is further configured to:
stop the edge navigation if a first escape condition is detected to be met during the edge navigation, the first escape condition being that the distance between the robot's current position and the average position is larger than the farthest distance.
Optionally, the control unit 43 is further configured to:
when the robot is in a trapped state, determine the trapped area according to the track of the robot over a second preset time period before the current moment; and
stop the edge navigation if a second escape condition is detected to be met during the edge navigation, the second escape condition being that the robot's current position is outside the trapped area.
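Whether the current position lies outside the trapped area depends on how that area is represented; assuming it is the polygon formed by the remaining track points, a standard ray-casting point-in-polygon test gives one possible check:

```python
def outside_trapped_area(position, boundary):
    """Ray-casting test: cast a ray from `position` toward +x and count edge
    crossings of the closed polygon `boundary` ([(x, y), ...]); an even count
    means the position is outside the trapped area."""
    x, y = position
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        if (y1 > y) != (y2 > y):                         # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return not inside
```

The polygonal representation is an assumption for illustration; the disclosure only states that the trapped area is determined from the remaining track points.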
Optionally, the track of the robot includes a plurality of track points.
Accordingly, the control unit 43 is also configured to:
when the robot is in a trapped state, calculate the track center point of the robot over the second preset time period before the current moment;
traverse the plurality of track points of the robot within the second preset time period;
for any track point, delete the track point if both the distance from the preceding track point to the track center point and the distance from the following track point to the track center point are larger than the distance from that track point to the track center point; and
determine the trapped area according to the track points remaining after the traversal.
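The neighbour-comparison rule above prunes inner track points so that only the outer ones remain to bound the trapped area; a sketch, assuming the track points are ordered `(x, y)` pairs and the first and last points are retained:

```python
import math

def filter_track_points(points, center):
    """Delete any track point whose preceding and following points are both
    farther from the track center than the point itself; keep the rest."""
    def dist(p):
        return math.hypot(p[0] - center[0], p[1] - center[1])
    kept = [points[0]]                        # endpoints have no full neighbour pair
    for prev, curr, nxt in zip(points, points[1:], points[2:]):
        if not (dist(prev) > dist(curr) and dist(nxt) > dist(curr)):
            kept.append(curr)
    kept.append(points[-1])
    return kept
```

Keeping the endpoints is an assumption: the disclosure only specifies the deletion rule for points that have both a preceding and a following track point.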
It should be noted that, because the information interaction and execution processes between the above devices/units are based on the same concept as the method embodiments of the present application, their specific functions and technical effects may be found in the method embodiment section and will not be repeated here.
The robot control device shown in fig. 4 may be a software unit, a hardware unit, or a combined software and hardware unit built into an existing terminal device, may be integrated into the terminal device as an independent add-on, or may exist as an independent terminal device.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is given by way of example; in practical applications, the above functions may be distributed among different functional units and modules as needed, i.e., the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which will not be repeated here.
Fig. 5 is a schematic structural diagram of a robot according to an embodiment of the present application. As shown in fig. 5, the robot 5 of this embodiment comprises at least one processor 50 (only one is shown in fig. 5), a memory 51 and a computer program 52 stored in the memory 51 and executable on the at least one processor 50, which processor 50 implements the steps of any of the various robot control method embodiments described above when executing the computer program 52.
The robot may include, but is not limited to, a processor and a memory. It will be appreciated by those skilled in the art that fig. 5 is merely an example of the robot 5 and does not limit the robot 5, which may include more or fewer components than shown, combine certain components, or include different components, such as input-output devices, network access devices, etc.
The processor 50 may be a central processing unit (Central Processing Unit, CPU); the processor 50 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 51 may in some embodiments be an internal storage unit of the robot 5, such as a hard disk or a memory of the robot 5. In other embodiments, the memory 51 may be an external storage device of the robot 5, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) provided on the robot 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the robot 5. The memory 51 is used for storing an operating system, application programs, a boot loader (Boot Loader), data, and other programs, such as the program code of the computer program. The memory 51 may also be used to temporarily store data that has been output or is to be output.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, performs the steps of the respective method embodiments described above.
Embodiments of the present application also provide a computer program product which, when run on a robot, enables the robot to carry out the steps of the method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the apparatus/robot, a recording medium, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not described or detailed in a given embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/robot and method may be implemented in other ways. For example, the apparatus/robot embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The foregoing embodiments are merely illustrative of the technical solutions of the present application, and not restrictive, and although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that modifications may still be made to the technical solutions described in the foregoing embodiments or equivalent substitutions of some technical features thereof, and that such modifications or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (9)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211679840.6A CN116000924B (en) | 2022-12-26 | 2022-12-26 | Robot control method, device, robot and computer-readable storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN116000924A CN116000924A (en) | 2023-04-25 |
| CN116000924B true CN116000924B (en) | 2025-09-05 |
Family
ID=86031081
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211679840.6A Active CN116000924B (en) | 2022-12-26 | 2022-12-26 | Robot control method, device, robot and computer-readable storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN116000924B (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110448241A (en) * | 2019-07-18 | 2019-11-15 | 广东宝乐机器人股份有限公司 | The stranded detection of robot and method of getting rid of poverty |
| CN114625135A (en) * | 2022-03-09 | 2022-06-14 | 美智纵横科技有限责任公司 | Robot edgewise control method and device, robot and storage medium |
| CN115357016A (en) * | 2022-06-30 | 2022-11-18 | 深圳银星智能集团股份有限公司 | Obstacle-crossing and trapped-removing method for cleaning robot, cleaning robot and storage medium |
Family Cites Families (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2008013568A2 (en) * | 2005-12-30 | 2008-01-31 | Irobot Corporation | Autonomous mobile robot |
| US11449061B2 (en) * | 2016-02-29 | 2022-09-20 | AI Incorporated | Obstacle recognition method for autonomous robots |
| US11927965B2 (en) * | 2016-02-29 | 2024-03-12 | AI Incorporated | Obstacle recognition method for autonomous robots |
| CN112493924B (en) * | 2019-08-26 | 2023-03-10 | 苏州宝时得电动工具有限公司 | Cleaning robot and control method thereof |
| CN110908388B (en) * | 2019-12-17 | 2023-08-11 | 小狗电器互联网科技(北京)股份有限公司 | Robot trapped detection method and robot |
| CN111904346B (en) * | 2020-07-09 | 2021-08-24 | 深圳拓邦股份有限公司 | Method and device for getting rid of difficulties of sweeping robot, computer equipment and storage medium |
| CN112286191B (en) * | 2020-10-28 | 2024-08-02 | 深圳拓邦股份有限公司 | Robot getting rid of poverty control method and device and sweeping robot |
| CN114488185B (en) * | 2022-03-01 | 2024-08-16 | 山东大学 | Robot navigation system method and system based on multi-line laser radar |
| CN114721396B (en) * | 2022-04-15 | 2025-07-29 | 杭州萤石软件有限公司 | Mobile robot escaping processing method and device and mobile robot |
| CN115268443B (en) * | 2022-07-27 | 2024-11-29 | 珠海一微半导体股份有限公司 | Robot obstacle avoidance path planning method |
- 2022-12-26: application CN202211679840.6A filed in China; granted as patent CN116000924B, status Active.
Also Published As
| Publication number | Publication date |
|---|---|
| CN116000924A (en) | 2023-04-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111753609B (en) | A method, device and camera for target recognition | |
| CN110286389B (en) | Grid management method for obstacle identification | |
| CN111611853A (en) | A sensor information fusion method, device and storage medium | |
| CN110442120B (en) | Method for controlling robot to move in different scenes, robot and terminal equipment | |
| CN114740867B (en) | Intelligent obstacle avoidance method and device based on binocular vision, robot and medium | |
| CN113269811A (en) | Data fusion method and device and electronic equipment | |
| CN114663526A (en) | Obstacle detection method, obstacle detection device, robot and computer-readable storage medium | |
| JP7221381B2 (en) | Target tracking method and apparatus | |
| US11704815B2 (en) | Tracking device, tracking method, and tracking system | |
| CN110866544A (en) | Sensor data fusion method and device and storage medium | |
| CN112923938A (en) | Map optimization method, device, storage medium and system | |
| CN117103259A (en) | Target following method, device, removable following equipment and storage medium | |
| CN112686951B (en) | Method, device, terminal and storage medium for determining robot position | |
| CN117908536A (en) | Robot obstacle avoidance method, terminal device and computer readable storage medium | |
| CN116000924B (en) | Robot control method, device, robot and computer-readable storage medium | |
| CN113787516A (en) | Positioning method, device and robot | |
| WO2022252482A1 (en) | Robot, and environment map construction method and apparatus therefor | |
| CN118311549A (en) | A self-calibration method, device, equipment and medium for laser radar and camera | |
| Lindenmaier et al. | Semi-automatic bev ground truth generation for automotive perception systems | |
| CN118917841A (en) | Unmanned aerial vehicle recycling method, device, equipment and medium based on visual algorithm | |
| CN115393393B (en) | Multi-sensor fusion obstacle tracking method, device, equipment and medium | |
| CN114564018B (en) | Control method, control device, terminal equipment and computer readable storage medium | |
| CN117630905A (en) | Radar target tracking method, device, processing device and storage medium | |
| CN114037977B (en) | Road vanishing point detection method, device, equipment and storage medium | |
| CN117213463A (en) | Raster map update method, equipment and storage medium based on point cloud detection |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |