
CN113927584B - Robot control method and device, computer readable storage medium and robot - Google Patents

Robot control method and device, computer readable storage medium and robot

Info

Publication number
CN113927584B
CN113927584B (Application CN202111214282.1A)
Authority
CN
China
Prior art keywords
robot
capture point
error
step length
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111214282.1A
Other languages
Chinese (zh)
Other versions
CN113927584A (en)
Inventor
葛利刚
熊友军
刘益彰
陈春玉
罗秋月
周江琛
任俊琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ubtech Technology Co ltd
Original Assignee
Shenzhen Ubtech Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ubtech Technology Co ltd filed Critical Shenzhen Ubtech Technology Co ltd
Priority to CN202111214282.1A priority Critical patent/CN113927584B/en
Publication of CN113927584A publication Critical patent/CN113927584A/en
Application granted granted Critical
Publication of CN113927584B publication Critical patent/CN113927584B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/08 - Programme-controlled manipulators characterised by modular constructions
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1602 - Programme controls characterised by the control system, structure, architecture
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1656 - Programme controls characterised by programming, planning systems for manipulators
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1679 - Programme controls characterised by the tasks executed
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The application belongs to the technical field of robots, and particularly relates to a robot control method, a robot control device, a computer readable storage medium and a robot. The method comprises the following steps: determining a planned capture point and a measurement capture point of a robot, and calculating a capture point error of the robot according to the planned capture point and the measurement capture point; integrating the capture point error over a support phase of a support leg of the robot to obtain an accumulated capture point error of the robot; before the swing leg of the robot starts to swing, adjusting the step length of the robot according to the accumulated capture point error and the capture point error to obtain an adjusted step length; and controlling the swing leg of the robot to move according to the adjusted step length. In this method, the step length is adjusted by jointly considering the capture point error and the accumulated capture point error to counteract external impact forces, so that the robot can remain stable even under a large impact force.

Description

Robot control method and device, computer readable storage medium and robot
Technical Field
The application belongs to the technical field of robots, and particularly relates to a robot control method, a robot control device, a computer readable storage medium and a robot.
Background
Achieving stable walking for bipedal robots has long been a challenging research topic, especially when the robot is subjected to external impact disturbances during movement or operates in complex terrain. If the robot receives a large impact force while moving, it can easily become unstable and even fall. To address this problem, researchers have proposed various balance control methods, such as attitude control, zero moment point control, and ankle compliance control. These existing methods improve the stability of bipedal robots under external impacts to some extent, but under larger impact forces they still struggle to keep the robot in a stable state.
Disclosure of Invention
In view of this, the embodiments of the present application provide a robot control method, apparatus, computer readable storage medium, and robot, so as to solve the problem that existing robot control methods struggle to keep the robot stable under a large impact force.
A first aspect of an embodiment of the present application provides a robot control method, which may include:
determining a planned capture point and a measurement capture point of a robot, and calculating a capture point error of the robot according to the planned capture point and the measurement capture point;
integrating the capture point errors in a support stage of a support leg of the robot to obtain accumulated capture point errors of the robot;
before the swing legs of the robot start to swing, calculating a first step length adjustment amount of the robot according to the accumulated capture point errors, calculating a second step length adjustment amount of the robot according to the capture point errors, calculating the maximum value of the first step length adjustment amount and the second step length adjustment amount, determining the maximum value as the step length adjustment amount of the robot, and superposing the step length of the robot and the step length adjustment amount to obtain an adjusted step length;
in a specific implementation manner of the first aspect, the calculating a first step adjustment of the robot according to the accumulated capture point error may include:
if the accumulated capture point error is greater than a preset first threshold, calculating the first step adjustment according to the following formula:
Foot_adjust1 = k1 * CP_error_add_x

wherein CP_error_add_x is the accumulated capture point error, k1 is a preset first step adjustment coefficient, and Foot_adjust1 is the first step adjustment amount;
and if the accumulated capture point error is smaller than or equal to the first threshold value, setting the first step size adjustment amount to be 0.
In a specific implementation manner of the first aspect, the calculating the second step adjustment of the robot according to the capture point error may include:
if the capture point error is greater than a preset second threshold, calculating the second step adjustment according to the following formula:
Foot_adjust2 = k2 * CP_error_x

wherein CP_error_x is the capture point error, k2 is a preset second step adjustment coefficient, and Foot_adjust2 is the second step adjustment amount;
and if the capture point error is smaller than or equal to the second threshold value, setting the second step length adjustment amount to be 0.
In a specific implementation manner of the first aspect, the integrating the capture point error in the support stage of the support leg of the robot to obtain the accumulated capture point error of the robot may include:
calculating the accumulated capture point error according to:
CP_error_add_x = ∫_{stand_begin}^{stand_end} CP_error_x dt

wherein CP_error_x is the capture point error, stand_begin is the start time of the support phase, stand_end is the end time of the support phase, and CP_error_add_x is the accumulated capture point error.
In a specific implementation manner of the first aspect, the calculating a capture point error of the robot according to the planned capture point and the measurement capture point may include:
calculating the capture point error according to:
CP_error_x = ξ_plan_x - ξ_measure_x

wherein ξ_plan_x is the planned capture point, ξ_measure_x is the measurement capture point, and CP_error_x is the capture point error.
A second aspect of the embodiments of the present application provides a robot control device, which may include:
the capture point error calculation module is used for determining a planned capture point and a measurement capture point of the robot and calculating the capture point error of the robot according to the planned capture point and the measurement capture point;
the accumulated capture point error calculation module is used for integrating the capture point error in the support stage of the support leg of the robot to obtain the accumulated capture point error of the robot;
the step length adjusting module is used for calculating a first step length adjusting quantity of the robot according to the accumulated capture point error before the swing leg of the robot starts swinging, calculating a second step length adjusting quantity of the robot according to the capture point error, calculating the maximum value of the first step length adjusting quantity and the second step length adjusting quantity, determining the maximum value as the step length adjusting quantity of the robot, and superposing the step length of the robot and the step length adjusting quantity to obtain an adjusted step length;
and the motion control module is used for controlling the swing legs of the robot to move according to the adjusted step length.
In a specific implementation manner of the second aspect, the first step adjustment amount calculating unit may specifically be configured to:
if the accumulated capture point error is greater than a preset first threshold, calculating the first step adjustment according to the following formula:
Foot_adjust1 = k1 * CP_error_add_x

wherein CP_error_add_x is the accumulated capture point error, k1 is a preset first step adjustment coefficient, and Foot_adjust1 is the first step adjustment amount;
and if the accumulated capture point error is smaller than or equal to the first threshold value, setting the first step size adjustment amount to be 0.
In a specific implementation manner of the second aspect, the second step adjustment amount calculating unit may specifically be configured to:
if the capture point error is greater than a preset second threshold, calculating the second step adjustment according to the following formula:
Foot_adjust2 = k2 * CP_error_x

wherein CP_error_x is the capture point error, k2 is a preset second step adjustment coefficient, and Foot_adjust2 is the second step adjustment amount;
and if the capture point error is smaller than or equal to the second threshold value, setting the second step length adjustment amount to be 0.
In a specific implementation manner of the second aspect, the accumulated capture point error calculating module may specifically be configured to calculate the accumulated capture point error according to the following equation:
CP_error_add_x = ∫_{stand_begin}^{stand_end} CP_error_x dt

wherein CP_error_x is the capture point error, stand_begin is the start time of the support phase, stand_end is the end time of the support phase, and CP_error_add_x is the accumulated capture point error.
In a specific implementation manner of the second aspect, the capture point error calculating module may specifically be configured to calculate the capture point error according to the following formula:
CP_error_x = ξ_plan_x - ξ_measure_x

wherein ξ_plan_x is the planned capture point, ξ_measure_x is the measurement capture point, and CP_error_x is the capture point error.
A third aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any one of the robot control methods described above.
A fourth aspect of the embodiments of the present application provides a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of any one of the above-mentioned robot control methods when executing the computer program.
A fifth aspect of the embodiments of the present application provides a computer program product for, when run on a robot, causing the robot to perform the steps of any one of the robot control methods described above.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: according to the method, a planned capture point and a measurement capture point of the robot are determined, and a capture point error of the robot is calculated according to the planned capture point and the measurement capture point; the capture point error is integrated over the support phase of the support leg of the robot to obtain an accumulated capture point error of the robot; before the swing leg of the robot starts to swing, the step length of the robot is adjusted according to the accumulated capture point error and the capture point error to obtain an adjusted step length; and the swing leg of the robot is controlled to move according to the adjusted step length. In the embodiments of the present application, the influence of external forces on the state of the robot is reasonably estimated by integrating the capture point error, and the step length is adjusted by jointly considering the capture point error and the accumulated capture point error to counteract external impact forces, so that the robot can remain stable even under a large impact force.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of one embodiment of a method of controlling a robot in an embodiment of the present application;
FIG. 2 is a schematic diagram of a linear inverted pendulum model;
FIG. 3 is a schematic flow chart of adjusting a step size of a robot based on accumulated capture point errors and capture point errors;
FIG. 4 is a block diagram of one embodiment of a robotic control device according to an embodiment of the present application;
fig. 5 is a schematic block diagram of a robot in an embodiment of the present application.
Detailed Description
In order to make the objects, features and advantages of the present application more apparent and understandable, the technical solutions in the embodiments of the present application are clearly and completely described below with reference to the drawings in the embodiments of the present application. Obviously, the embodiments described below are only some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the present disclosure without inventive effort fall within the scope of protection of the present application.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted as "when", "once", "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
In addition, in the description of the present application, the terms "first," "second," "third," etc. are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
For simplicity of description, the robots mentioned in the embodiments of the present application are bipedal robots unless otherwise specified.
Referring to fig. 1, an embodiment of a robot control method in an embodiment of the present application may include:
step S101, determining a planned capture point and a measurement capture point of the robot, and calculating a capture point error of the robot according to the planned capture point and the measurement capture point.
In the planning and control of robots, a complex multi-rigid-body system is usually simplified into a reduced model, and the most classical simplified model is the linear inverted pendulum model (Linear Inverted Pendulum Model, LIPM) shown in fig. 2. Taking the sagittal plane as an example, the dynamics equation of the LIPM in the x-axis direction (i.e., the direction of travel of the robot) is as follows:
ẍ_c = ω² * (x_c - p_x)

wherein ẍ_c is the centroid acceleration of the robot, x_c is the centroid position of the robot, p_x is the zero moment point (Zero Moment Point, ZMP) of the robot, and ω is the natural frequency of the LIPM, given by ω = √(g / z_c), where g is the gravitational acceleration and z_c is the centroid height of the robot.
A Capture Point (CP) is an important concept in the LIPM; physically, it is the support point at which the inverted pendulum can be brought to a complete rest. That is, if the robot steps onto the capture point during movement, the centroid can be brought completely to rest.

Denoting the capture point by ξ_x, it follows from the definition of the capture point that it can be calculated as:

ξ_x = x_c + ẋ_c / ω

wherein ẋ_c is the centroid speed of the robot.
In the embodiment of the application, the centroid position and the centroid speed of the robot can be planned in advance; the specific planning method can be any planning method in the prior art, chosen according to the actual situation. The centroid position and centroid speed obtained by planning are recorded as the centroid planning position and the centroid planning speed, respectively. After the centroid planning position and the centroid planning speed are obtained, the capture point corresponding to them, namely the planned capture point, can be obtained according to the following formula:

ξ_plan_x = x_plan + ẋ_plan / ω

wherein x_plan is the centroid planning position, ẋ_plan is the centroid planning speed, and ξ_plan_x is the planned capture point.
In the embodiment of the application, the zero moment point of the robot in the world coordinate system can be calculated from the six-dimensional force data measured by a six-dimensional force sensor pre-installed on the robot. The robot is simplified into a linear inverted pendulum model with the ZMP as the supporting point, the centroid position of the robot in the world coordinate system is solved through the body pose rotation matrix, and the centroid speed of the robot in the world coordinate system is obtained by differencing. The estimated centroid position and centroid speed are recorded as the centroid measurement position and the centroid measurement speed, respectively. After the centroid measurement position and the centroid measurement speed are obtained, the capture point corresponding to them, namely the measurement capture point, can be obtained according to the following formula:

ξ_measure_x = x_measure + ẋ_measure / ω

wherein x_measure is the centroid measurement position, ẋ_measure is the centroid measurement speed, and ξ_measure_x is the measurement capture point.
After determining the planned capture point and the measured capture point, the capture point error may then be calculated according to the following equation:
CP_error_x = ξ_plan_x - ξ_measure_x

wherein CP_error_x is the capture point error.
The above is the analysis in the sagittal plane. By applying the same analysis to the coronal plane, the planned capture point (denoted as ξ_plan_y) and the measurement capture point (denoted as ξ_measure_y) in the y-axis direction (i.e., the leftward direction of the robot, obtained by rotating the travel direction 90 degrees counter-clockwise) can be calculated according to the above process, and the capture point error in the y-axis direction can then be calculated according to the following formula:

CP_error_y = ξ_plan_y - ξ_measure_y

wherein CP_error_y is the capture point error in the y-axis direction.
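To make the computation above concrete, the following Python sketch evaluates the planned and measurement capture points and the resulting capture point error for one axis. It is a minimal illustration under the formulas above; the function names, variable names and numerical values are assumptions and do not come from the patent.

```python
import math

def capture_point(com_pos, com_vel, com_height, g=9.81):
    """Capture point of the LIPM: xi = x_c + x_c_dot / omega, with omega = sqrt(g / z_c)."""
    omega = math.sqrt(g / com_height)  # natural frequency of the LIPM
    return com_pos + com_vel / omega

def capture_point_error(plan_pos, plan_vel, meas_pos, meas_vel, com_height):
    """CP_error = xi_plan - xi_measure, evaluated per axis (x or y)."""
    xi_plan = capture_point(plan_pos, plan_vel, com_height)
    xi_measure = capture_point(meas_pos, meas_vel, com_height)
    return xi_plan - xi_measure

# Illustrative numbers for the x direction (planned vs. estimated centroid state).
cp_error_x = capture_point_error(plan_pos=0.10, plan_vel=0.30,
                                 meas_pos=0.08, meas_vel=0.25,
                                 com_height=0.60)
print(f"CP_error_x = {cp_error_x:.4f} m")
```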
And step S102, integrating the capture point errors in the support stage of the support legs of the robot to obtain accumulated capture point errors of the robot.
Typically, there is some error between the planned capture point and the measurement capture point, i.e. the capture point error is usually not zero. Moreover, during the movement of the robot, the capture point error often jumps between positive and negative values, so it is not suitable to be used directly for step adjustment or for estimating the impact force.
In the embodiment of the present application, the influence of positive and negative jumps of the capture point error can be eliminated by integrating the capture point error, taking the x-axis direction as an example, the cumulative capture point error can be calculated according to the following formula:
CP_error_add_x = ∫_{stand_begin}^{stand_end} CP_error_x dt

wherein stand_begin is the start time of the support phase of the support leg, stand_end is the end time of the support phase of the support leg, and CP_error_add_x is the accumulated capture point error, which measures the total external force disturbance applied to the robot over the whole support phase.
Specifically, when the left leg is a supporting leg, there are:
CP_l_error_add_x = ∫_{l_stand_begin}^{l_stand_end} CP_error_x dt

When the right leg is the supporting leg:

CP_r_error_add_x = ∫_{r_stand_begin}^{r_stand_end} CP_error_x dt

wherein l_stand_begin is the start time of the support phase of the left leg, l_stand_end is the end time of the support phase of the left leg, CP_l_error_add_x is the accumulated capture point error over the support phase of the left leg, r_stand_begin is the start time of the support phase of the right leg, r_stand_end is the end time of the support phase of the right leg, and CP_r_error_add_x is the accumulated capture point error over the support phase of the right leg.
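In a discrete-time controller the integral over the support phase reduces to a running sum of the capture point error multiplied by the control period. The sketch below accumulates the error for the current support leg; the control period and the reset at the start of each support phase are assumptions made for illustration, not values given in the patent.

```python
class CapturePointErrorIntegrator:
    """Discrete approximation of the integral of CP_error_x over the support phase,
    i.e. from stand_begin to stand_end of the current support leg."""

    def __init__(self, dt=0.001):
        self.dt = dt             # control period in seconds (assumed value)
        self.cp_error_add = 0.0  # accumulated capture point error CP_error_add_x

    def reset(self):
        """Call at stand_begin, i.e. when a new support phase starts."""
        self.cp_error_add = 0.0

    def update(self, cp_error):
        """Call once per control cycle during the support phase."""
        self.cp_error_add += cp_error * self.dt
        return self.cp_error_add

# Usage: reset when the support leg changes, update every cycle until stand_end.
integrator = CapturePointErrorIntegrator(dt=0.001)
integrator.reset()
for cp_error in (0.010, 0.012, 0.015):  # illustrative per-cycle capture point errors
    accumulated = integrator.update(cp_error)
print(f"CP_error_add_x = {accumulated:.6f}")
```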
Step S103, before the swing legs of the robot start to swing, adjusting the step length of the robot according to the accumulated capture point errors and the capture point errors to obtain an adjusted step length.
In the embodiment of the present application, the step length may be adjusted at the moment just before the swing leg starts to swing (denoted as t_swing). As shown in fig. 3, step S103 may specifically include the following steps:
step S1031, calculating a first step length adjustment amount of the robot according to the accumulated capture point error.
In a specific implementation manner of the embodiment of the present application, the accumulated capture point error may first be compared with a preset first threshold (denoted as CP_error_add_x_set). If the accumulated capture point error is less than or equal to the first threshold, the first step adjustment amount is set to 0; if the accumulated capture point error is greater than the first threshold, the first step adjustment amount may be calculated according to:

Foot_adjust1 = k1 * CP_error_add_x

wherein k1 is a preset first step adjustment coefficient whose specific value may be set according to the actual situation and is not specifically limited in the embodiment of the present application, and Foot_adjust1 is the first step adjustment amount.
The specific value of the first threshold may be set according to practical situations, which is not specifically limited in the embodiment of the present application.
Step S1032, calculating a second step length adjustment amount of the robot according to the capture point error.
In a specific implementation manner of the embodiment of the application, the capture point error at time t_swing may first be compared with a preset second threshold (denoted as CP_error_x_set). If the capture point error is less than or equal to the second threshold, the second step adjustment amount is set to 0; if the capture point error is greater than the second threshold, the second step adjustment amount may be calculated according to:

Foot_adjust2 = k2 * CP_error_x

wherein k2 is a preset second step adjustment coefficient whose specific value may be set according to the actual situation and is not specifically limited in the embodiment of the present application, and Foot_adjust2 is the second step adjustment amount.
The specific value of the second threshold may be set according to practical situations, which is not specifically limited in the embodiment of the present application.
Step S1033, calculating the step adjustment amount of the robot according to the first step adjustment amount and the second step adjustment amount.
In a specific implementation manner of the embodiment of the present application, a larger value of the first step adjustment amount and the second step adjustment amount may be used as the step adjustment amount, that is, the step adjustment amount may be calculated according to the following formula:
Foot_adjust = max(Foot_adjust1, Foot_adjust2)

wherein Foot_adjust1 is the first step adjustment amount, Foot_adjust2 is the second step adjustment amount, max is the maximum function, and Foot_adjust is the step adjustment amount.
And step S1034, superposing the step length and the step length adjustment amount to obtain an adjusted step length.
In a specific implementation manner of the embodiment of the present application, the adjusted step size may be calculated according to the following formula:
Foot_step_new=Foot_step_old+Foot_adjust
wherein Foot_step_old is the original step length, Foot_adjust is the step adjustment amount, and Foot_step_new is the adjusted step length.
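Putting steps S1031 to S1034 together, the following sketch turns the accumulated capture point error and the capture point error at t_swing into an adjusted step length. The gains and thresholds are illustrative placeholders only; as noted above, their actual values would be set according to the specific robot.

```python
def adjust_step_length(step_old, cp_error_add, cp_error,
                       k1=0.5, k2=0.3,
                       cp_error_add_threshold=0.005,
                       cp_error_threshold=0.02):
    """Step-length adjustment before the swing leg starts to swing.

    step_old     : planned step length Foot_step_old
    cp_error_add : accumulated capture point error over the support phase
    cp_error     : capture point error at the moment before the swing begins
    k1, k2       : step adjustment coefficients (illustrative values)
    """
    # Step S1031: first step adjustment amount from the accumulated error.
    foot_adjust1 = k1 * cp_error_add if cp_error_add > cp_error_add_threshold else 0.0

    # Step S1032: second step adjustment amount from the instantaneous error.
    foot_adjust2 = k2 * cp_error if cp_error > cp_error_threshold else 0.0

    # Step S1033: take the larger of the two adjustments.
    foot_adjust = max(foot_adjust1, foot_adjust2)

    # Step S1034: superpose the adjustment on the original step length.
    return step_old + foot_adjust

# Example: a forward push yields positive errors, so the step is lengthened.
new_step = adjust_step_length(step_old=0.20, cp_error_add=0.012, cp_error=0.03)
print(f"Foot_step_new = {new_step:.4f} m")
```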
And step S104, controlling the swing legs of the robot to move according to the adjusted step length.
It should be noted that the above is only a single step adjustment. It is easy to understand that during the actual movement of the robot, the left leg and the right leg alternately serve as the swing leg, and each time a swing leg is about to start swinging, the step adjustment can be performed again according to the above process, so that the robot always maintains a stable state during movement.
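To make the alternation of the legs explicit, the sketch below shows one way the pieces above could be wired into a per-step control loop, with the left and right legs taking turns as the support leg. The robot interface used here (in_support_phase, get_capture_point_error, swing_leg_to) is hypothetical and only stands in for the actual planning and control stack.

```python
def walk(robot, n_steps, base_step=0.20, dt=0.001):
    """Illustrative gait loop: integrate the capture point error during each support
    phase, then adjust the step of the upcoming swing leg just before it swings."""
    integrator = CapturePointErrorIntegrator(dt=dt)
    support_leg = "left"

    for _ in range(n_steps):
        integrator.reset()                             # stand_begin of this support phase
        cp_error, cp_error_add = 0.0, 0.0
        while robot.in_support_phase(support_leg):     # hypothetical phase query
            cp_error = robot.get_capture_point_error() # hypothetical estimator call
            cp_error_add = integrator.update(cp_error)

        # Moment just before the swing leg starts to swing (t_swing).
        step_new = adjust_step_length(base_step, cp_error_add, cp_error)
        swing_leg = "right" if support_leg == "left" else "left"
        robot.swing_leg_to(swing_leg, step_new)        # hypothetical motion command
        support_leg = swing_leg                        # the legs alternate every step
```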
In summary, the embodiment of the application determines a planned capture point and a measurement capture point of the robot, and calculates a capture point error of the robot according to the planned capture point and the measurement capture point; integrates the capture point error over the support phase of the support leg of the robot to obtain an accumulated capture point error of the robot; before the swing leg of the robot starts to swing, adjusts the step length of the robot according to the accumulated capture point error and the capture point error to obtain an adjusted step length; and controls the swing leg of the robot to move according to the adjusted step length. In the embodiment of the application, the influence of external forces on the state of the robot is reasonably estimated by integrating the capture point error, and the step length is adjusted by jointly considering the capture point error and the accumulated capture point error to counteract external impact forces, so that the robot can remain stable even under a large impact force.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Fig. 4 shows a block diagram of an embodiment of a robot control device according to an embodiment of the present application, corresponding to a robot control method described in the above embodiment.
In this embodiment, a robot control device may include:
a capture point error calculation module 401, configured to determine a planned capture point and a measured capture point of a robot, and calculate a capture point error of the robot according to the planned capture point and the measured capture point;
an accumulated capture point error calculation module 402, configured to integrate the capture point error during a support phase of a support leg of the robot, to obtain an accumulated capture point error of the robot;
a step length adjustment module 403, configured to adjust a step length of the robot according to the accumulated capture point error and the capture point error before the swing leg of the robot begins to swing, so as to obtain an adjusted step length;
and the motion control module 404 is used for controlling the swing leg of the robot to move according to the adjusted step length.
In a specific implementation manner of the embodiment of the present application, the step adjustment module may specifically include:
a first step adjustment amount calculating unit configured to calculate a first step adjustment amount of the robot according to the accumulated capture point error;
a second step adjustment amount calculating unit for calculating a second step adjustment amount of the robot according to the capture point error;
a step adjustment amount calculating unit configured to calculate a step adjustment amount of the robot according to the first step adjustment amount and the second step adjustment amount;
and the step length adjusting unit is used for superposing the step length with the step length adjusting amount to obtain an adjusted step length.
In a specific implementation manner of the embodiment of the present application, the first step adjustment amount calculating unit may specifically be configured to:
if the accumulated capture point error is greater than a preset first threshold, calculating the first step adjustment according to the following formula:
Foot_adjust1 = k1 * CP_error_add_x

wherein CP_error_add_x is the accumulated capture point error, k1 is a preset first step adjustment coefficient, and Foot_adjust1 is the first step adjustment amount;
and if the accumulated capture point error is smaller than or equal to the first threshold value, setting the first step size adjustment amount to be 0.
In a specific implementation manner of the embodiment of the present application, the second step adjustment amount calculating unit may specifically be configured to:
if the capture point error is greater than a preset second threshold, calculating the second step adjustment according to the following formula:
Foot_adjust2 = k2 * CP_error_x

wherein CP_error_x is the capture point error, k2 is a preset second step adjustment coefficient, and Foot_adjust2 is the second step adjustment amount;
and if the capture point error is smaller than or equal to the second threshold value, setting the second step length adjustment amount to be 0.
In a specific implementation manner of the embodiment of the present application, the step adjustment amount calculating unit may specifically be configured to calculate the step adjustment amount of the robot according to the following formula:
Foot_adjust = max(Foot_adjust1, Foot_adjust2)

wherein Foot_adjust1 is the first step adjustment amount, Foot_adjust2 is the second step adjustment amount, max is the maximum function, and Foot_adjust is the step adjustment amount.
In a specific implementation manner of the embodiment of the present application, the accumulated capture point error calculating module may specifically be configured to calculate the accumulated capture point error according to the following formula:
CP_error_add_x = ∫_{stand_begin}^{stand_end} CP_error_x dt

wherein CP_error_x is the capture point error, stand_begin is the start time of the support phase, stand_end is the end time of the support phase, and CP_error_add_x is the accumulated capture point error.
In a specific implementation manner of the embodiment of the present application, the capture point error calculation module may specifically be configured to calculate the capture point error according to the following formula:
CP_error_x = ξ_plan_x - ξ_measure_x

wherein ξ_plan_x is the planned capture point, ξ_measure_x is the measurement capture point, and CP_error_x is the capture point error.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described apparatus, modules and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Fig. 5 shows a schematic block diagram of a robot provided in an embodiment of the present application, and for convenience of explanation, only a portion related to the embodiment of the present application is shown.
As shown in fig. 5, the robot 5 of this embodiment includes: a processor 50, a memory 51 and a computer program 52 stored in said memory 51 and executable on said processor 50. The processor 50, when executing the computer program 52, implements the steps of the respective robot control method embodiments described above, for example, steps S101 to S104 shown in fig. 1. Alternatively, the processor 50 may perform the functions of the modules/units of the apparatus embodiments described above, such as the functions of the modules 401 to 404 shown in fig. 4, when executing the computer program 52.
By way of example, the computer program 52 may be partitioned into one or more modules/units that are stored in the memory 51 and executed by the processor 50 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing a specific function for describing the execution of the computer program 52 in the robot 5.
It will be appreciated by those skilled in the art that fig. 5 is merely an example of the robot 5 and is not meant to be limiting of the robot 5, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., the robot 5 may also include input and output devices, network access devices, buses, etc.
The processor 50 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSPs), application specific integrated circuits (Application Specific Integrated Circuit, ASICs), field programmable gate arrays (Field-Programmable Gate Array, FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 51 may be an internal storage unit of the robot 5, such as a hard disk or a memory of the robot 5. The memory 51 may be an external storage device of the robot 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the robot 5. Further, the memory 51 may also include both an internal memory unit and an external memory device of the robot 5. The memory 51 is used for storing the computer program as well as other programs and data required by the robot 5. The memory 51 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/robot and method may be implemented in other ways. For example, the apparatus/robot embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing relevant hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable storage medium may be appropriately increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, the computer readable storage medium does not include electrical carrier signals and telecommunications signals.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (8)

1. A robot control method, comprising:
determining a planned capture point and a measurement capture point of a robot, and calculating a capture point error of the robot according to the planned capture point and the measurement capture point;
integrating the capture point errors in a support stage of a support leg of the robot to obtain accumulated capture point errors of the robot;
before the swing legs of the robot start to swing, calculating a first step length adjustment amount of the robot according to the accumulated capture point errors, calculating a second step length adjustment amount of the robot according to the capture point errors, calculating the maximum value of the first step length adjustment amount and the second step length adjustment amount, determining the maximum value as the step length adjustment amount of the robot, and superposing the step length of the robot and the step length adjustment amount to obtain an adjusted step length;
and controlling the swing legs of the robot to move according to the adjusted step length.
2. The robot control method according to claim 1, wherein the calculating the first step adjustment amount of the robot from the accumulated capture point error includes:
if the accumulated capture point error is greater than a preset first threshold, calculating the first step adjustment according to the following formula:
Foot_adjust1 = k1 * CP_error_add_x

wherein CP_error_add_x is the accumulated capture point error, k1 is a preset first step adjustment coefficient, and Foot_adjust1 is the first step adjustment amount;
and if the accumulated capture point error is smaller than or equal to the first threshold value, setting the first step size adjustment amount to be 0.
3. The robot control method according to claim 1, wherein the calculating the second step adjustment amount of the robot from the capture point error includes:
if the capture point error is greater than a preset second threshold, calculating the second step adjustment according to the following formula:
Foot_adjust2 = k2 * CP_error_x

wherein CP_error_x is the capture point error, k2 is a preset second step adjustment coefficient, and Foot_adjust2 is the second step adjustment amount;
and if the capture point error is smaller than or equal to the second threshold value, setting the second step length adjustment amount to be 0.
4. The robot control method according to claim 1, wherein integrating the capture point error in a support phase of a support leg of the robot results in a cumulative capture point error of the robot, comprising:
calculating the accumulated capture point error according to:
CP_error_add_x = ∫_{stand_begin}^{stand_end} CP_error_x dt

wherein CP_error_x is the capture point error, stand_begin is the start time of the support phase, stand_end is the end time of the support phase, and CP_error_add_x is the accumulated capture point error.
5. The robot control method according to any one of claims 1 to 4, characterized in that the calculating a capture point error of the robot from the planned capture point and the measured capture point comprises:
calculating the capture point error according to:
CP_error_x = ξ_plan_x - ξ_measure_x

wherein ξ_plan_x is the planned capture point, ξ_measure_x is the measurement capture point, and CP_error_x is the capture point error.
6. A robot control device, comprising:
the capture point error calculation module is used for determining a planned capture point and a measurement capture point of the robot and calculating the capture point error of the robot according to the planned capture point and the measurement capture point;
the accumulated capture point error calculation module is used for integrating the capture point error in the support stage of the support leg of the robot to obtain the accumulated capture point error of the robot;
the step length adjusting module is used for calculating a first step length adjusting quantity of the robot according to the accumulated capture point error before the swing leg of the robot starts swinging, calculating a second step length adjusting quantity of the robot according to the capture point error, calculating the maximum value of the first step length adjusting quantity and the second step length adjusting quantity, determining the maximum value as the step length adjusting quantity of the robot, and superposing the step length of the robot and the step length adjusting quantity to obtain an adjusted step length;
and the motion control module is used for controlling the swing legs of the robot to move according to the adjusted step length.
7. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the robot control method according to any one of claims 1 to 5.
8. A robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, realizes the steps of the robot control method according to any one of claims 1 to 5.
CN202111214282.1A 2021-10-19 2021-10-19 Robot control method and device, computer readable storage medium and robot Active CN113927584B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111214282.1A CN113927584B (en) 2021-10-19 2021-10-19 Robot control method and device, computer readable storage medium and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111214282.1A CN113927584B (en) 2021-10-19 2021-10-19 Robot control method and device, computer readable storage medium and robot

Publications (2)

Publication Number Publication Date
CN113927584A CN113927584A (en) 2022-01-14
CN113927584B true CN113927584B (en) 2023-04-21

Family

ID=79280261

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111214282.1A Active CN113927584B (en) 2021-10-19 2021-10-19 Robot control method and device, computer readable storage medium and robot

Country Status (1)

Country Link
CN (1) CN113927584B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108052783A (en) * 2018-01-29 2018-05-18 济南大学 A kind of unsaturated soil dynamic numerical calculation method based on adaptive step

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101953113B1 (en) * 2011-05-30 2019-03-05 삼성전자주식회사 Robot and control method thereof
CN106945034B (en) * 2016-01-07 2021-09-03 鸿富锦精密电子(郑州)有限公司 Robot point location adjusting method and system
CN108768447B (en) * 2018-04-25 2020-10-27 西安宇飞电子技术有限公司 Method and device for stably tracking code loop after quick acquisition
CN110948482A (en) * 2019-11-06 2020-04-03 江苏信息职业技术学院 Redundant robot trajectory planning method
CN111176283B (en) * 2019-12-31 2022-08-26 广东省智能制造研究所 Active compliance control method for foot type robot under complex terrain
CN111515929A (en) * 2020-04-15 2020-08-11 深圳航天科技创新研究院 Human motion state estimation method, device, terminal and computer readable storage medium
CN112731953B (en) * 2020-12-24 2024-07-19 深圳市优必选科技股份有限公司 Robot control method and device, computer readable storage medium and robot

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108052783A (en) * 2018-01-29 2018-05-18 济南大学 A kind of unsaturated soil dynamic numerical calculation method based on adaptive step

Also Published As

Publication number Publication date
CN113927584A (en) 2022-01-14

Similar Documents

Publication Publication Date Title
CN112731953B (en) Robot control method and device, computer readable storage medium and robot
CN112731952B (en) Robot centroid planning method and device, readable storage medium and robot
CN111015653B (en) Robot control method, device, computer readable storage medium and robot
CN112744313B (en) Robot state estimation method and device, readable storage medium and robot
CN112536796B (en) Robot control method and device, computer readable storage medium and robot
US9683865B2 (en) In-use automatic calibration methodology for sensors in mobile devices
US12076860B2 (en) Control method for robot, computer-readable storage medium and robot
CN111098300B (en) Robot balance control method and device, readable storage medium and robot
CN113283082B (en) Centroid track generation method, centroid track generation device, computer readable storage medium and robot
CN108534744A (en) A kind of attitude angle acquisition methods, device and handle
CN112486170B (en) Robot control method and device, computer readable storage medium and robot
CN109866217A (en) Robot mileage positioning method, device, terminal device and computer storage medium
CN115042205B (en) Force-bit hybrid control method and device, computer readable storage medium and robot
CN111330214B (en) Safety protection method and system of VR treadmill and readable storage medium
CN113927584B (en) Robot control method and device, computer readable storage medium and robot
WO2020126809A1 (en) Method for equine motion analysis
WO2025087350A1 (en) Method and apparatus for correcting target vehicle velocity, and device and storage medium
CN113927585B (en) Robot balance control method and device, readable storage medium and robot
CN111844013A (en) Robot gait planning method, device, robot and storage medium
CN113359791A (en) Robot control method, device, computer readable storage medium and robot
CN113246123B (en) Robot control method, device, computer readable storage medium and robot
CN113485100B (en) Robot leg length planning method and device, readable storage medium and robot
CN114545017B (en) Speed fusion method, device and computer equipment based on optical flow and accelerometer
WO2013143126A1 (en) Gyroscope calibration
CN112720447A (en) Zero moment point jitter processing method and device, robot and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant