
CN115502965B - Robot control method, system, robot and medium - Google Patents

Robot control method, system, robot and medium

Info

Publication number
CN115502965B
CN115502965B (application CN202110632729.0A)
Authority
CN
China
Prior art keywords
robot
target
motion
current
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110632729.0A
Other languages
Chinese (zh)
Other versions
CN115502965A (en)
Inventor
郑宇
迟万超
姜鑫洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110632729.0A priority Critical patent/CN115502965B/en
Publication of CN115502965A publication Critical patent/CN115502965A/en
Application granted granted Critical
Publication of CN115502965B publication Critical patent/CN115502965B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/1633Programme controls characterised by the control loop compliant, force, torque control, e.g. combined with position control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/161Hardware, e.g. neural networks, fuzzy logic, interfaces, processor

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract


Disclosed are a robot control method, system, robot and medium. The method comprises: obtaining current rotational motion information of the robot and target rotational motion reference information of the robot; determining, based on a linear function and according to the current rotational motion information, a target moment for controlling the target rotational motion of the robot; and determining a target torque for each joint of the robot based on the target moment, the current rotational motion information, and the target rotational motion reference information. The current rotational motion information comprises at least one of the current rotation angle of the robot and the current rotational angular velocity of the robot, and the target rotational motion reference information comprises at least one of the target rotation reference angle, the target rotation reference angular velocity, and the target rotation reference angular acceleration of the robot.

Description

Robot control method, system, robot and medium
Technical Field
The invention relates to the field of artificial intelligence and robotics, and in particular to a robot control method, a robot control system, a robot, and a medium.
Background
With the wide application of artificial intelligence and robot technology in civil and commercial fields, robots based on these technologies play an increasingly important role in areas such as intelligent transportation and smart homes, and also face increasingly high requirements.
At present, motion control of a robot typically proceeds as follows: target motion reference information of the robot is first generated from the motion planning information of a motion planner; a target motion estimate of the robot is then determined based on a centroid motion model of the robot; and an optimal target contact force is obtained from the target motion reference information and the target motion estimate, so that the actual motion of the robot accurately follows the planned target motion process. However, the rotational motion (in particular the angular motion) of the robot is nonlinear. If the nonlinear rotational motion parameter (the moment quantity) is applied directly, the amount of computation in the motion control process is large and real-time performance is poor. If instead the moment quantity is linearized by simply discarding its nonlinear terms and computing only with the linear terms, the motion control has large errors, the actual motion of the robot deviates substantially from the planned motion process, the control precision is poor, and the robustness is low.
Therefore, a method is needed that better extracts and retains the linear components of the nonlinear moment, thereby improving the accuracy of the moment and yielding an optimized linearized centroid motion model. Such a method can control the motion process of the robot, in particular in real time, flexibly and with high precision, with good accuracy, stability, and robustness.
Disclosure of Invention
The invention provides a robot control method, a robot control system, a robot, and a medium. In the provided method, the current rotational motion information includes at least one of the current rotation angle and the current rotational angular velocity of the robot, and the target rotational motion reference information includes at least one of the robot target rotation reference angle, the robot target rotation reference angular velocity, and the robot target rotation reference angular acceleration. The method can thereby perform good motion control of the robot, so that the robot accurately carries out the target motion process under flexible, high-precision control with good reliability, stability, and robustness.
According to one aspect of the disclosure, a robot control method is provided. The method comprises: obtaining current rotational motion information of a robot and target rotational motion reference information of the robot; determining, based on a linear function and according to the current rotational motion information, a target moment for controlling the target rotational motion of the robot; and determining a target torque for each joint of the robot based on the target moment, the current rotational motion information, and the target rotational motion reference information. The current rotational motion information comprises at least one of the current rotation angle and the current rotational angular velocity of the robot, and the target rotational motion reference information comprises at least one of the target rotation reference angle, the target rotation reference angular velocity, and the target rotation reference angular acceleration of the robot.
In some embodiments, the linear function comprises at least two of a robot target rotational angular acceleration parameter, a robot angular-velocity gain parameter, a robot angle gain parameter, and a robot current rotational angular velocity parameter, wherein the robot angular-velocity gain parameter is the difference between the robot target rotational angular velocity and the robot current rotational angular velocity, and the angle gain parameter is the difference between the robot target rotation angle and the robot current rotation angle.
In some embodiments, the linear function comprises all four of these parameters: the robot target rotational angular acceleration parameter, the robot angular-velocity gain parameter, the robot angle gain parameter, and the robot current rotational angular velocity parameter.
In some embodiments, the method further comprises the step of obtaining current translational motion information of the robot and target translational motion reference information of the robot, wherein the target translational motion reference information of the robot comprises a target motion reference position of the robot, a target motion reference speed of the robot and a target motion reference acceleration of the robot, and the current translational motion information of the robot comprises a current motion position of the robot, a current motion speed of the robot and a current motion acceleration of the robot.
In some embodiments, the linear function is:

τ_k = I_0 (ω̇_k + P·Δω_k + Q·Δθ_k) + [ω_0]_× I_0 ω_0

where τ_k is the target moment at the target time t_k; I_0 is the rotational-inertia value at the current time t_0; ω̇_k is the robot target rotational angular acceleration; P and Q are parameters selected according to actual needs; Δω_k is the robot angular-velocity gain parameter, Δω_k = ω_k − ω_0, with ω_k the robot target rotational angular velocity and ω_0 the current rotational angular velocity of the robot; Δθ_k is the robot angle gain parameter, Δθ_k = θ_k − θ_0, with θ_k the robot target rotation angle and θ_0 the current rotation angle of the robot; and [ω_0]_× denotes the skew-symmetric matrix formed from the current angular velocity of the robot.
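To make the linear function concrete, here is a minimal numerical sketch of the computation. The function name, the example inertia matrix, and all test values are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]x such that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def target_moment(I0, dw_ref, w_ref, w0, th_ref, th0, P, Q):
    """tau_k = I0 (dw_ref + P*(w_ref - w0) + Q*(th_ref - th0)) + [w0]x I0 w0."""
    d_omega = w_ref - w0    # angular-velocity gain parameter, Delta omega_k
    d_theta = th_ref - th0  # angle gain parameter, Delta theta_k
    return I0 @ (dw_ref + P * d_omega + Q * d_theta) + skew(w0) @ (I0 @ w0)
```

The final `skew(w0) @ (I0 @ w0)` term is the linear-in-force leftover of the gyroscopic quantity ω × Iω, evaluated at the known current angular velocity, which is what lets the expression stay linear in the decision variables.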
In some embodiments, determining the target torque of each joint of the robot based on the target moment, the current rotational motion information of the robot, and the target rotational motion reference information of the robot comprises: determining a target contact force of the robot based on the target moment, the current motion information of the robot, and the target motion reference information of the robot; and determining the target torque of each joint of the robot based on the target contact force, the skeletal structure of the robot, and the pose information of the robot. The current motion information of the robot includes the current rotational motion information and current translational motion information, and the target motion reference information of the robot includes the target rotational motion reference information and target translational motion reference information.
In some embodiments, determining the target contact force of the robot based on the target moment, the current motion information of the robot, and the target motion reference information of the robot comprises: generating a target motion estimate of the robot from the current motion information and the target moment based on a robot centroid motion model, wherein the target motion estimate is a function of the contact force of the robot; generating an error function between the target motion reference information and the target motion estimate; and determining the target contact force based on the error function. The target motion estimate includes at least one of a target motion position estimate, a target motion velocity estimate, a target motion acceleration estimate, a target rotational angular velocity estimate, and a target rotational angular acceleration estimate.
In some embodiments, determining the target contact force based on the error function includes optimizing the error function based on a quadratic optimization algorithm and determining the contact force that minimizes the error function as the target contact force.
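As an illustration of this step, the sketch below solves the unconstrained weighted least-squares special case of such a quadratic program. The matrix A (mapping stacked contact forces to the predicted motion estimate) and target vector b are assumed placeholders; a real controller of this kind would typically also impose friction-cone and force-limit constraints, which are omitted here:

```python
import numpy as np

def optimal_contact_force(A, b, W=None):
    """Minimize the quadratic error ||A f - b||^2_W over contact forces f.

    This is the unconstrained special case of the quadratic optimization
    described in the text: A predicts the motion estimate from stacked
    contact forces, b is the target motion reference.
    """
    if W is None:
        W = np.eye(len(b))
    # Solve the normal equations of the weighted least-squares problem.
    H = A.T @ W @ A
    g = A.T @ W @ b
    return np.linalg.solve(H, g)
```

The contact force returned is the minimizer of the error function, i.e. the "target contact force" of this embodiment under the stated simplifications.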
In some embodiments, determining the target torque for each joint of the robot based on the target contact force, the skeletal structure of the robot, and the pose information of the robot comprises: determining a main torque amount for each joint of the robot based on the target contact force and the skeletal structure of the robot; obtaining target pose reference information and current pose information for each joint of the robot; determining an additional torque amount for each joint of the robot based on the target pose reference information and the current pose information; and determining the target torque for each joint of the robot based on the main torque amount and the additional torque amount.
In some embodiments, acquiring the target pose reference information and the current pose information of each joint of the robot comprises acquiring motion planning information of the robot, generating the target pose reference information of the robot based on the motion planning information, wherein the target pose reference information comprises a joint target reference angle and a joint target reference angular velocity of each joint of the robot, and acquiring the current pose information of the robot, wherein the current pose information comprises a joint current angle and a joint current angular velocity of each joint.
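The "main torque plus additional torque" combination described above can be sketched as a PD-style correction tracking the joint pose reference on top of the main torque. The gain values kp and kd, and the function name, are illustrative assumptions:

```python
import numpy as np

def joint_torques(tau_main, q_ref, q, dq_ref, dq, kp=40.0, kd=2.0):
    """Target joint torque = main torque amount (from the target contact
    force and skeletal structure) + additional torque amount tracking the
    joint target reference angle q_ref and reference angular velocity dq_ref.

    kp, kd are hypothetical proportional/derivative gains.
    """
    tau_add = kp * (q_ref - q) + kd * (dq_ref - dq)
    return tau_main + tau_add
```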
According to another aspect of the disclosure, a robot control system is provided. The system comprises: a rotational motion information acquisition module configured to acquire current rotational motion information of the robot and target rotational motion reference information of the robot; a target moment generation module configured to determine, based on a linear function and according to the current rotational motion information, a target moment for controlling the target rotational motion of the robot; and a joint torque generation module configured to determine target torques of the respective joints of the robot based on the target moment, the current rotational motion information, and the target rotational motion reference information. The current rotational motion information includes at least one of the current rotation angle and the current rotational angular velocity of the robot, and the target rotational motion reference information includes at least one of the target rotation reference angle, the target rotation reference angular velocity, and the target rotation reference angular acceleration of the robot.
In some embodiments, the linear function comprises at least two of a robot target rotational angular acceleration parameter, a robot angular-velocity gain parameter, a robot angle gain parameter, and a robot current rotational angular velocity parameter, wherein the robot angular-velocity gain parameter is the difference between the robot target rotational angular velocity and the robot current rotational angular velocity, and the angle gain parameter is the difference between the robot target rotation angle and the robot current rotation angle.
In some embodiments, the linear function comprises all four of these parameters: the robot target rotational angular acceleration parameter, the robot angular-velocity gain parameter, the robot angle gain parameter, and the robot current rotational angular velocity parameter.
According to another aspect of the present disclosure, a robot is provided, which comprises the robot control system described above and implements motion control of the robot by means of the robot control method described above.
According to another aspect of the present disclosure, a computer-readable storage medium is presented, characterized in that it has stored thereon computer-readable instructions, which when executed by a computer perform the method as described before.
By using the robot control method, system, robot, and medium provided by the invention, the accuracy of the generated target moment can be improved because the current rotational motion information includes at least one of the current rotation angle and the current rotational angular velocity of the robot. Because the target rotational motion reference information includes at least one of the robot target rotation reference angle, the robot target rotation reference angular velocity, and the robot target rotation reference angular acceleration, generating the target torque of each joint of the robot based on this reference information is facilitated, and the accuracy of the generated joint torques is improved. Motion control of the robot is thus well realized, in particular with high control accuracy and good robustness.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present invention; other drawings may be obtained from these drawings by a person of ordinary skill in the art without inventive effort. The following drawings are not intended to be drawn to scale, emphasis instead being placed upon illustrating the principles of the invention.
FIG. 1 illustrates an exemplary flowchart of a robot control method 100 according to an embodiment of the present disclosure;
FIG. 2 illustrates an exemplary flowchart of a process S103 of determining target torque for each joint of the robot, according to an embodiment of the present disclosure;
FIG. 3 illustrates an exemplary flowchart of a process S1031 of determining a target contact force of the robot according to an embodiment of the present disclosure;
FIG. 4 illustrates an exemplary flowchart of a process S1032 of determining target torques of respective joints of the robot according to an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of a robot control process according to an embodiment of the present disclosure;
FIG. 6 shows an exemplary block diagram of a robot control system 600 according to an embodiment of the present disclosure.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort are also within the scope of the invention.
As used in the specification and in the claims, the terms "a," "an," and/or "the" do not denote the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may include other steps or elements.
Although the present application makes various references to certain modules in a system according to embodiments of the present application, any number of different modules may be used and run on a user terminal and/or server. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
A flowchart is used in the present application to describe the operations performed by a system according to embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed precisely in order. Rather, the various steps may be processed in reverse order or simultaneously, as desired. Also, other operations may be added to or removed from these processes.
Artificial Intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain optimal results. In other words, artificial intelligence is an integrated technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence is thus the study of the design principles and implementation methods of various intelligent machines, so that the machines can perceive, reason, and make decisions.
The application relates to the use of artificial intelligence in motion control, and in particular to a robot control method that represents the relevant nonlinear parameter quantity of the robot during rotational motion (the target moment parameter) in exponential coordinates. This makes it possible to extract and retain the linear components of the nonlinear quantity, so that the centroid motion model of the robot can be better linearized based on the target moment parameter, improving the accuracy and reliability of the motion control while retaining the real-time capability of a linearized model.
The robot provided by the application is a robot capable of realizing autonomous motion control, and can have various forms according to actual needs. The robot may be, for example, a scooter, a bicycle, or other type of robotic device. Embodiments of the present disclosure are not limited by the particular type of robot and its composition.
At present, motion control of a robot typically proceeds as follows: target motion reference information of the robot is first generated from the motion planning information of a motion planner; a target motion estimate of the robot is then determined based on a centroid motion model of the robot; and an optimal target contact force is obtained from the target motion reference information and the target motion estimate, so that the actual motion of the robot accurately follows the planned target motion process. However, the rotational motion (in particular the angular motion) of the robot is nonlinear. If the nonlinear rotational motion parameter (the moment quantity) is applied directly, the amount of computation in the motion control process is large and real-time performance is poor. If instead the moment quantity is linearized by simply discarding its nonlinear terms and computing only with the linear terms, the motion control has large errors, the actual motion of the robot deviates substantially from the planned motion process, the control precision is poor, and the robustness is low.
Based on the above, the application provides a robot control method. The method is suitable for realizing motion control of the robot, particularly real-time motion control, and can control the motion process of the robot flexibly and with high precision through an optimized linear centroid motion model, by further extracting the linear components of the nonlinear parameter. Fig. 1 shows an exemplary flowchart of a robot control method 100 according to an embodiment of the present disclosure.
Referring to fig. 1, first, in step S101, current rotational motion information of the robot and target rotational motion reference information of the robot are acquired.
The current rotational movement information refers to data information for representing a rotational movement state of the robot at a current time. In the application, the current rotation movement information comprises at least one of a current rotation angle of the robot and a current rotation angular velocity of the robot. However, it should be appreciated that the current rotational motion information may also include, for example, the current rotational angular acceleration of the robot, etc., as desired.
The target rotational motion reference information refers to data information characterizing the desired state of the rotational motion of the robot at a target time (which may be, for example, the time step following the current time, according to actual needs). In the application, the target rotational motion reference information comprises at least one of the robot target rotation reference angle, the robot target rotation reference angular velocity, and the robot target rotation reference angular acceleration. However, it should be appreciated that the target rotational motion reference information may also include other parameters as desired. Embodiments of the present disclosure are not limited by the specific composition of the target rotational motion reference information.
For example, current rotational motion information of the robot may be acquired via torque sensors provided at joints of the robot and vision sensors provided in the surrounding environment of the robot, and target rotational motion reference information of the robot may be obtained based on processing motion planning information generated by a motion planner of the robot.
However, it should be appreciated that the above is given as only one example of acquiring the target rotational motion reference information and the current rotational motion information of the robot, and embodiments of the present disclosure are not limited by the specific manner in which the target rotational motion reference information and the current rotational motion information are acquired.
Thereafter, in step S102, a target moment for controlling the target rotational motion of the robot is determined based on a linear function according to the current rotational motion information of the robot.
The linear function is, for example, a linear expression of the target moment. Specifically, the rotational motion process of the robot may be approximated in exponential coordinates, so that the nonlinear term in the existing nonlinear expression of the moment quantity is further expanded, and an optimized linear expression is obtained by extracting the linear components therein.
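For background, the exponential-coordinate representation mentioned here expresses a rotation as the matrix exponential of a skew-symmetric matrix; its first-order expansion is what yields linear terms in the rotation variables. The sketch below is standard robotics math (Rodrigues' formula), not code from the patent:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]x of a 3-vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(theta_vec):
    """Rodrigues' formula: R = I + (sin t / t)[w]x + ((1 - cos t)/t^2)[w]x^2,
    where t = ||theta_vec||. Near t = 0 the first-order (linearized)
    approximation R ~ I + [theta]x is used, which is the expansion the
    linearization in the text relies on."""
    t = np.linalg.norm(theta_vec)
    K = skew(theta_vec)
    if t < 1e-9:
        return np.eye(3) + K
    return np.eye(3) + (np.sin(t) / t) * K + ((1.0 - np.cos(t)) / t**2) * (K @ K)
```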
After the target moment is obtained, in step S103, the target torque of each joint of the robot is determined based on the target moment, the current rotational motion information of the robot, and the target rotational motion reference information of the robot.
Specifically, the target contact force of the robot may be determined first, for example, based on the target moment, the current rotational motion information of the robot, and the target rotational motion reference information of the robot. Thereafter, a target torque of each joint of the robot is determined based on, for example, the target contact force, the skeletal structure of the robot, and the posture information of the robot.
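One common way to realize the contact-force-to-joint-torque step is through the transpose of the contact Jacobian. The patent only states that the mapping uses the robot's skeletal structure and pose information, so the sketch below is an assumption about a typical implementation, not the patent's own formula:

```python
import numpy as np

def main_torque_from_contact(J_contact, f_contact):
    """Map a target contact force into 'main' joint torques: tau = J^T f.

    J_contact is the contact Jacobian (3 x n for one contact point on an
    n-joint limb), encoding the skeletal structure and current pose.
    """
    return J_contact.T @ f_contact
```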
It should be appreciated that the above is given only as an exemplary method of achieving the target torque, and embodiments of the present disclosure are not limited by the particular process and manner in which the target torque is obtained.
Based on the above, in the application, after the current rotational motion information of the robot and the target rotational motion reference information of the robot are obtained, the target moment is computed from the current rotational motion information by applying the optimized linear function, and the target torque of each joint of the robot is determined from the obtained target moment. The robot can thus be accurately controlled to execute the target motion process while good real-time control is maintained, and the control accuracy and robustness of the robot are significantly improved without sacrificing real-time performance. Specifically, because the current rotational motion information comprises at least one of the current rotation angle and the current rotational angular velocity of the robot, the linear function can characterize the target moment well from multi-dimensional, multi-level current motion information, improving the accuracy of the generated target moment. In addition, because the target rotational motion reference information comprises at least one of the robot target rotation reference angle, the robot target rotation reference angular velocity, and the robot target rotation reference angular acceleration, the target rotational motion process can be described more comprehensively, which facilitates generating the target torque of each joint of the robot based on this reference information and improves the accuracy of the generated joint torques.
In some embodiments, the linear function includes at least two of a robot target rotational angular acceleration parameter, a robot angular velocity gain parameter, a robot angular gain parameter, a current rotational angular velocity parameter of the robot.
The angular gain parameter is a difference between a target rotation angle of the robot and a current rotation angle of the robot, and is described in detail below with reference to specific embodiments.
By including at least two of the robot target rotational angular acceleration parameter, the robot angular-velocity gain parameter, the robot angle gain parameter, and the robot current rotational angular velocity parameter in the linear function, more linear components reflecting the target moment are retained than in current methods, which use only the target rotational angular acceleration parameter when calculating the target moment. The linearized calculation thus improves computation speed and reduces the amount of computation while significantly improving the accuracy of the calculated target moment, which benefits subsequent motion control of the robot based on that moment.
In some embodiments, the linear function includes a robot target rotational angular acceleration parameter, a robot angular velocity gain parameter, a robot angular gain parameter, a robot current rotational angular velocity parameter.
The robot angular velocity gain parameter is the difference between the robot target rotational angular velocity and the current rotational angular velocity of the robot, and the angular gain parameter is the difference between the robot target rotation angle and the current rotation angle of the robot.
The linear function will be described in more detail below in connection with example expressions of the robot's centroid dynamics equations and of the linear function itself.
For example, for a course of motion of a robot, the centroid dynamics equations of the robot are known to be:

$$\textstyle\sum_{i=1}^{N} f_i + mg = m\ddot{p} \qquad 1a)$$

$$\textstyle\sum_{i=1}^{N} \left([r_i]_\times - [p]_\times\right) f_i = \dot{L} = \tau \qquad 1b)$$

Equation 1 a) characterizes the translational motion process of the robot at the current moment, and equation 1 b) is the Euler equation of the robot, used for characterizing the rotational motion process of the robot at the current moment.

Here $f_i$ is the contact force applied to the robot at the i-th contact point at that moment, $m$ is the mass of the robot, $g$ is the gravitational acceleration, $p$ is the position of the robot at that moment, $\dot{p}$ is the velocity of the robot at that moment, and $\ddot{p}$ is the acceleration of the robot at that moment. The index $i$ is a positive integer with $1 \le i \le N$, where $N$ is the total number of contact points of the robot. It should be understood that the contact points are, for example, the points of action at which the robot contacts the surrounding environment, predetermined according to the actual configuration of the robot and its manner of action; for example, when the robot is a quadruped robot interacting with the surrounding environment via its four feet, the total number of contact points is 4, and each contact point is the portion of a foot in contact with the surrounding environment. Furthermore, $[r_i]_\times$ is the skew-symmetric matrix formed from the position $r_i$ of the i-th contact point at that moment, $[p]_\times$ is the skew-symmetric matrix formed from the position of the robot's centroid at that moment, $L$ is the angular momentum of the robot, and $\tau$ is the moment of the robot.
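As an illustration, the two centroid equations above can be evaluated numerically. The following sketch is a hypothetical helper, not part of the patent: it builds the skew-symmetric matrix $[v]_\times$ and checks equations 1 a) and 1 b) for a quadruped standing still; the mass, positions and forces are made-up example values.

```python
import numpy as np

def skew(v):
    """Return the skew-symmetric matrix [v]x such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def centroid_dynamics(m, p, r_list, f_list, g=np.array([0.0, 0.0, -9.81])):
    """Evaluate both centroid equations for given contact forces.

    Returns (p_ddot, L_dot): the linear acceleration from equation 1a) and
    the rate of change of angular momentum about the centroid from 1b).
    """
    f_total = sum(f_list)
    p_ddot = f_total / m + g                      # 1a): m * p_ddot = sum(f_i) + m*g
    L_dot = sum((skew(r) - skew(p)) @ f           # 1b): sum ([r_i]x - [p]x) f_i
                for r, f in zip(r_list, f_list))
    return p_ddot, L_dot
```

For a symmetric stance with each foot carrying a quarter of the weight, both the linear acceleration and the centroidal moment evaluate to zero, as expected for standing still.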
The moment of the robot can further be expressed, for example, as:

$$\tau = I\dot{\omega} + [\omega]_\times I\omega \qquad 2)$$

where $\tau$ is the torque parameter of the robot, $\dot{\omega}$ is the angular acceleration vector of the robot at the current moment, $I$ is the moment of inertia of the robot, $\omega$ is the angular velocity of the robot, and $[\omega]_\times$ is the skew-symmetric matrix formed from the angular velocity of the robot, used to calculate the cross product of the angular velocity vector with $I\omega$.
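Equation 2) can be checked numerically in one line; the sketch below is a generic illustration using NumPy, with an arbitrary example inertia, not a value from the patent.

```python
import numpy as np

def euler_torque(I, omega, omega_dot):
    """Full nonlinear Euler torque of equation 2): tau = I @ omega_dot + omega x (I @ omega)."""
    return I @ omega_dot + np.cross(omega, I @ omega)
```

Note that even with zero angular acceleration the gyroscopic term $[\omega]_\times I\omega$ is generally nonzero whenever $\omega$ is not aligned with a principal axis of $I$, which is exactly the nonlinear contribution discussed below.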
Since the rotational motion of the robot is a nonlinear motion, the $I\dot{\omega}$ term and the $[\omega]_\times I\omega$ term in the above expression are both nonlinear quantities. In this case, when the target torque at the target time $t_k$ is to be obtained, the torque parameter at $t_k$ is given by:

$$\tau_k = I_k\dot{\omega}_k + [\omega_k]_\times I_k\omega_k \qquad 3)$$

where the subscript $k$ denotes the value of each parameter at time $t_k$, the meanings being as described above. In current motion control practice, the $I_k\dot{\omega}_k$ term of the torque parameter is usually approximated as $I_0\hat{\dot{\omega}}_k$, where $I_0$ is the moment-of-inertia value at the current time $t_0$ and $\hat{\dot{\omega}}_k$ is an estimate of the target angular acceleration at the target time $t_k$, while the $[\omega_k]_\times I_k\omega_k$ term is directly ignored. The expression of the target torque at $t_k$ calculated in this way contains only a linear term; but because the nonlinear quantity in the torque parameter is discarded, the accuracy of the calculated torque parameter is greatly reduced, which is unfavorable for realizing good motion control.
Based on this, in the present application the rotational motion process of the robot is approximated via exponential coordinates. Specifically, the angular motion process (rotational motion process) in the current robot motion process may be expanded, for example, via the following formula:

$$R_k = R_{k-1}\, e^{[\delta\theta]_\times} \qquad 4)$$

where $R_k$ characterizes the angular orientation information of the robot at the target time $t_k$ (i.e. its Euler-angle information, e.g. including roll angle, pitch angle and yaw angle), $R_{k-1}$ characterizes the angular orientation information of the robot at time $t_{k-1}$, and $\delta\theta$ characterizes the exponential coordinates of the amount of rotation between time $t_{k-1}$ and time $t_k$.
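Equation 4) uses the matrix exponential of a skew-symmetric matrix, which has a closed form (the Rodrigues formula). The sketch below is an illustrative implementation, not the patent's code; it propagates an orientation one step as in equation 4).

```python
import numpy as np

def exp_so3(phi):
    """Rodrigues formula: map exponential coordinates phi (a 3-vector)
    to a rotation matrix R = exp([phi]x)."""
    theta = np.linalg.norm(phi)
    K = np.array([[0.0, -phi[2], phi[1]],
                  [phi[2], 0.0, -phi[0]],
                  [-phi[1], phi[0], 0.0]])
    if theta < 1e-9:
        return np.eye(3) + K          # first-order (linearized) approximation
    K = K / theta                     # unit-axis skew matrix
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def step_orientation(R_prev, omega, dt):
    """One step of equation 4): R_k = R_{k-1} exp([omega*dt]x)."""
    return R_prev @ exp_so3(omega * dt)
```

The Taylor expansion mentioned below corresponds to keeping only the $\mathrm{I} + [\phi]_\times$ part of this exponential for small rotations.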
Based on the above, by approximating the angular orientation information at the target time $t_k$ via exponential coordinates to obtain equation 4), then performing a Taylor expansion of that expression, ignoring the higher-order terms and retaining the linear components, the linear function of the present application can be obtained, for example:

$$\tau_k = I_0\dot{\omega}_k + Q\,\Delta\omega_k + P\,\Delta\theta_k + [\omega_0]_\times I_0\omega_0 \qquad 5)$$

where the subscript $k$ denotes the value of a parameter at the target time $t_k$ and the subscript $0$ its value at the current time $t_0$. Here $\tau_k$ is the target torque, $I_0$ is the moment-of-inertia value at the current time $t_0$, $\dot{\omega}_k$ is the robot target rotational angular acceleration, $P$ and $Q$ are parameters selected according to actual needs, $\Delta\omega_k$ is the robot angular velocity gain parameter with $\Delta\omega_k = \omega_k - \omega_0$, where $\omega_k$ is the robot target rotational angular velocity and $\omega_0$ is the current rotational angular velocity of the robot, $\Delta\theta_k$ is the robot angular gain parameter with $\Delta\theta_k = \theta_k - \theta_0$, where $\theta_k$ is the robot target rotation angle and $\theta_0$ is the current rotation angle of the robot, and $[\omega_0]_\times$ is the skew-symmetric matrix formed from the current angular velocity of the robot, used to calculate the cross product of the angular velocity vector with $I_0\omega_0$.

Here the $I_0\dot{\omega}_k$ term is the robot target rotational angular acceleration parameter term, the $Q\Delta\omega_k$ term is the robot angular velocity gain parameter term, the $P\Delta\theta_k$ term is the robot angular gain parameter term, and the $[\omega_0]_\times I_0\omega_0$ term is the robot current rotational angular velocity parameter term.
Based on the above, in the present application the linear function is set to include the robot target rotational angular acceleration parameter, the robot angular velocity gain parameter, the robot angular gain parameter and the robot current rotational angular velocity parameter. Compared with the prior art, which includes only the target rotational angular acceleration parameter and directly discards the nonlinear term (as described above), optimizing the linear function via the exponential-coordinate approximation extracts and retains more of the linear components contained in the nonlinear term. This is conducive to generating the target moment more accurately, reflecting the target motion state of the robot more comprehensively and precisely, and improving the accuracy of motion control while maintaining good real-time performance of the motion control.
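To make the structure of the four-term linear function concrete, the following sketch evaluates a linearized target torque of the form described above. The diagonal gain matrices P and Q and all numeric values are illustrative assumptions, not values from the patent.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]x with skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def linearized_target_torque(I0, omega_dot_k, omega_k, omega_0, theta_k, theta_0, P, Q):
    """Formula 5): tau_k = I0 w_dot_k + Q (w_k - w_0) + P (th_k - th_0) + [w_0]x I0 w_0."""
    return (I0 @ omega_dot_k
            + Q @ (omega_k - omega_0)          # angular velocity gain term
            + P @ (theta_k - theta_0)          # angular gain term
            + skew(omega_0) @ I0 @ omega_0)    # current angular velocity term
```

When the robot is already at its reference angle and angular velocity and is momentarily at rest, only the feedforward term $I_0\dot{\omega}_k$ survives, which matches the role of the target rotational angular acceleration parameter term.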
In some embodiments, the robot control method further comprises obtaining current translational motion information of the robot and target translational motion reference information of the robot.
The current translational motion information of the robot refers to data information for representing the motion state of the robot at the current moment, and for example, the current translational motion information of the robot comprises the current motion position of the robot, the current motion speed of the robot and the current motion acceleration of the robot.
The target translational motion reference information of the robot refers to data information for characterizing a desired state of translational motion of the robot at a target moment (which may be, for example, a moment next to a current moment according to actual needs). According to actual needs, the target translational motion reference information of the robot comprises a robot target motion reference position, a robot target motion reference speed and a robot target motion reference acceleration.
For example, the current translational motion information of the robot may be acquired by processing based on data in a motion sensor provided inside the robot or a vision sensor provided in the surrounding of the robot, and the target translational motion reference information of the robot may be obtained by processing based on motion planning information generated by a motion planner of the robot. It should be appreciated that embodiments of the present disclosure are not limited by the particular sources and manner of acquisition of the target translational motion reference information and the current translational motion information.
Based on the above, in the present application, on the basis of acquiring the rotational motion information of the robot (including the current rotational motion information and the target rotational motion reference information), the translational motion information of the robot (including the current translational motion information and the target translational motion reference information) is further acquired, so that the current state and the desired state at the target moment of both the translational and rotational motion processes of the robot can be obtained. This facilitates the subsequent determination of the contact force between the robot and the environment based on the robot centroid motion model, and the motion control of the robot.
In some embodiments, step S103 described above may be described in more detail, for example. Fig. 2 shows an exemplary flowchart of a process S103 of determining target torques of respective joints of the robot according to an embodiment of the present disclosure.
Referring to fig. 2, first, in step S1031, a target contact force of the robot is determined based on the target moment, current motion information of the robot, and target motion reference information of the robot.
The current motion information of the robot comprises current rotation motion information and current translation motion information, and the target motion reference information of the robot comprises target rotation motion reference information and target translation motion reference information. The specific meaning of each information is as described above, and will not be described here again.
The target contact force refers to the interaction force between the robot and the surrounding environment when the robot is regarded as a rigid body at the target motion moment. The target contact force is associated with the centroid motion state of the robot at the target moment, so that the torque of each joint can be further determined and motion control of the robot realized based on the determined target contact force. It will be appreciated that the target contact forces correspond one-to-one to the contact points of the robot with the surrounding environment, as described above in connection with equation 1 b).
For example, depending on the actual situation, the target contact force may be a single contact force: when the robot has a single-wheel (unicycle-type) structure, it has only one contact point with the surroundings and hence only one contact force. Alternatively, the target contact force may comprise a plurality of contact forces: when the robot is a quadruped robot contacting the surrounding environment (here, for example, a table top) via its four feet, it has four contact points with the surrounding environment and, for each contact point, a contact force corresponding to that point. Embodiments of the present disclosure are not limited by the specific number of target contact forces.
For example, in some embodiments, in the process of obtaining the target contact force, a target motion estimate of the robot is generated based on a robot centroid motion model from the current motion information of the robot and the target moment, wherein the target motion estimate is a function of the contact force of the robot; an error function of the target motion reference information and the target motion estimate is then generated based on the two; and the target contact force is determined based on the error function.
It should be appreciated that the above is given as only one specific example of determining the target contact force, and embodiments of the present disclosure are not limited by the specific manner in which the target contact force is determined.
Thereafter, in step S1032, the target torque of each joint of the robot is determined based on the target contact force, the bone structure of the robot, and the posture information of the robot.
For example, the torque of each joint of the robot may be generated from the target contact force directly based on the skeletal structure of the robot and the current posture information of the robot. Or the target posture reference information (the target posture reference information refers to data information used for representing the expected state of each joint posture of the robot at the target moment) can be comprehensively considered on the basis of the above, the target posture reference information, the current posture information and the skeleton structure of the robot are integrated, and the target torque of each joint of the robot is generated based on the target contact force by using a preset algorithm.
Based on the above, the application determines the target contact force of the robot based on the target moment, the current motion information of the robot and the target motion reference information of the robot, and then determines the target torque of each joint of the robot based on the target contact force, the skeleton structure of the robot and the posture information of the robot, so that the target torque of each joint of the robot can be simply and efficiently generated based on the calculated target moment, thereby being beneficial to realizing real-time and high-precision motion control of the robot.
In some embodiments, the above-described process S1031 of determining the target contact force of the robot according to the target moment, the current motion information of the robot, and the target motion reference information of the robot may be described in more detail, for example. An exemplary flowchart of a process S1031 of determining a target contact force for the robot according to an embodiment of the disclosure is shown in fig. 3.
Referring to fig. 3, first, in step S1031-1, a target motion estimate of the robot is generated based on a robot centroid motion model based on current motion information of the robot and the target moment, wherein the target motion estimate is a function of a contact force of the robot.
The target motion estimate includes at least one of a target motion position estimator, a target motion velocity estimator, a target motion acceleration estimator, a target rotation angle estimator, a target rotational angular velocity estimator and a target rotational angular acceleration estimator.
For example, the centroid dynamics equation set of the robot is as shown in formulas 1 a) and 1 b); for the target time $t_k$, the centroid motion of the robot should satisfy the following centroid motion equations:

$$\textstyle\sum_{i=1}^{N} f_i + mg = m\ddot{p}_k$$

$$\textstyle\sum_{i=1}^{N}\left([r_{i,k}]_\times - [p_k]_\times\right) f_i = \dot{L}_k = \tau_k$$

where the subscript $k$ denotes the value of each parameter at the target time $t_k$, and the meanings of the other parameters are as described above. The two equations of the set are combined to obtain the centroid motion model of formula 6):
where $[\ddot{p}_k]_\times$ is the skew-symmetric matrix formed from the acceleration of the robot at the target time $t_k$, $\Delta p_k$ is the position gain amount, $\Delta p_k = p_k - p_0$, where $p_k$ is the position of the robot at the target time $t_k$ and $p_0$ is the position of the robot at the current time $t_0$, and $[g]_\times$ is the skew-symmetric matrix formed from the gravitational acceleration. The other parameters have the meanings described above.
If the expression of the target torque calculated via the linear function is as shown in formula 5), substituting formula 5) into the motion model of formula 6) and rearranging yields the expression of the target rotational angular acceleration:

$$\dot{\omega}_k = A_{\dot{\omega},k} f + b_{\dot{\omega},k} \qquad 7)$$

where $A_{\dot{\omega},k}$ and $b_{\dot{\omega},k}$ are constant terms calculated from the current motion information of the robot; the target rotational angular acceleration is thus seen to be a function of the contact force $f$. Accordingly, when the current motion information is substituted into the expression, the estimated target rotational angular acceleration $\hat{\dot{\omega}}_k$ of the robot at the target time $t_k$ can be obtained.
Through the relationship between the rotational angular acceleration and the rotation angle and angular velocity, the expressions of the target rotational angular velocity $\omega_k$ and the target rotation angle $\Delta\theta_k$ can further be obtained:

$$\omega_k = A_{\omega,k} f + b_{\omega,k} \qquad 8)$$

$$\Delta\theta_k = A_{\theta,k} f + b_{\theta,k} \qquad 9)$$

where $A_{\omega,k}$, $b_{\omega,k}$, $A_{\theta,k}$ and $b_{\theta,k}$ are likewise constant terms calculated from the current motion information of the robot, and the target rotational angular velocity and the target rotation angle are therefore also functions of the contact force $f$. Accordingly, when the current motion information is substituted into these expressions, the estimated target rotational angular velocity $\hat{\omega}_k$ and the estimated target rotation angle $\hat{\Delta\theta}_k$ of the robot at the target time $t_k$ can be obtained.
Based on the same solving mode, the target motion position estimator, the target motion speed estimator and the target motion acceleration estimator can be obtained through solving, and are not described in detail herein.
Based on the above, a target motion estimate of the robot is generated based on the robot centroid motion model and the target moment; each estimated quantity in the target motion estimate is an expression in the contact force $f$, and all parameter quantities other than the unknown contact force $f$ can be calculated from the current motion information.
Thereafter, in step S1031-2, an error function of the target motion reference information and the target motion estimate is generated based on the target motion reference information and the target motion estimate.
For example, the error function $e_k$ may take the following form:

$$e_k = \begin{bmatrix} p_k^{\mathrm{ref}} - \hat{p}_k \\ \dot{p}_k^{\mathrm{ref}} - \hat{\dot{p}}_k \\ \theta_k^{\mathrm{ref}} - \hat{\theta}_k \\ \omega_k^{\mathrm{ref}} - \hat{\omega}_k \end{bmatrix}$$

where $p_k^{\mathrm{ref}} - \hat{p}_k$ characterizes the difference between the target motion reference position and the target motion position estimate, $\dot{p}_k^{\mathrm{ref}} - \hat{\dot{p}}_k$ the difference between the target motion reference velocity and the target motion velocity estimate, $\theta_k^{\mathrm{ref}} - \hat{\theta}_k$ the difference between the target rotation reference angle and the target rotation angle estimate, and $\omega_k^{\mathrm{ref}} - \hat{\omega}_k$ the difference between the target rotation reference angular velocity and the target rotational angular velocity estimate.
After the error function is obtained, in step S1031-3, the target contact force is determined based on the error function.
For example, the error function may be processed based on an optimization algorithm to determine an optimal contact force value and direction, and a target contact force based on the optimal contact force value and direction. Or may generate the target contact force based on other means. Embodiments of the present disclosure are not limited by the particular manner in which the target contact force is generated.
Based on the above, the application generates the target motion estimation of the robot based on the robot centroid motion model according to the current motion information of the robot and the target moment, and then generates the error function of the target motion reference information and the target motion estimation and determines the target contact force based on the target motion reference information and the target motion estimation, so that the centroid motion model of the robot, the current and the motion state of the target are comprehensively considered in the process of generating the target contact force, thereby being beneficial to generating the high-precision target contact force, enabling the robot to well execute the expected motion, and improving the reliability and precision of the motion control.
For example, in some embodiments, determining the target contact force based on the error function includes optimizing the error function based on a quadratic optimization algorithm and determining the contact force that minimizes the error function as the target contact force. By adjusting the contact force such that the error function takes a minimum value (i.e. such that the target motion reference information has a minimum error value with the target motion estimation), an optimal contact force for achieving motion control of the robot is determined as a target contact force.
It should be appreciated that the quadratic optimization algorithm may be selected based on actual needs, and embodiments of the present disclosure are not limited by the particular type of quadratic optimization algorithm used.
Based on the above, in the application, the optimal solution of the error function is solved by applying the quadratic optimization algorithm, and the contact force for enabling the error function to obtain the optimal solution is determined as the target contact force, so that the solution of the target contact force can be conveniently realized through the optimization process, the obtained target contact force can enable the robot to well execute the expected motion process, and the flexibility and the robustness of motion control are improved.
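Since the estimates of equations 7) to 9) are all affine in the contact force $f$, stacking them makes the squared error a quadratic in $f$, and the unconstrained minimizer can be found by linear least squares. The sketch below is a minimal illustration of this quadratic optimization step under that assumption; a real controller would typically add friction-cone and unilateral contact constraints, which are not shown.

```python
import numpy as np

def solve_target_contact_force(A, b, ref):
    """Minimize ||ref - (A f + b)||^2 over the stacked contact-force vector f.

    A and b stack the affine estimators (equations 7)-9) and their
    translational counterparts); ref stacks the corresponding reference
    values. Without inequality constraints this reduces to least squares.
    """
    f, *_ = np.linalg.lstsq(A, ref - b, rcond=None)
    return f
```

When the stacked system is consistent, the recovered force makes the error function exactly zero, i.e. the target motion estimate matches the target motion reference information.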
In some embodiments, when there are multiple contact points between the robot and the surrounding environment, i.e. the robot is subject to multiple contact forces, and multiple groups of contact force combinations all make the error function attain its minimum value, each group of contact forces is further weighted and summed to calculate the total contact force corresponding to that combination, and the combination with the minimum total contact force is determined as the target contact force combination.
In the case that a plurality of contact points exist, and therefore a plurality of contact forces exist, if the plurality of groups of contact force combinations can realize the optimal solution of the error function, the execution of the expected motion process and the minimization of the total contact force can be considered by further comparing the total contact force of the plurality of groups of contact force combinations, so that the motion control process of the robot is further optimized.
In some embodiments, the process S1032 of determining the target torque of each joint of the robot based on the target contact force, the skeletal structure of the robot, and the pose information of the robot can be more specifically described, for example. Fig. 4 shows an exemplary flowchart of a process S1032 of determining target torques of respective joints of the robot according to an embodiment of the present disclosure.
Referring to fig. 4, first, in step S1032-1, the main torque amounts of the joints of the robot are determined based on the target contact force and the bone structure of the robot.
The main torque amount is the torque of each joint of the robot corresponding to the target contact force, determined based on the target contact force and the skeletal structure of the robot. This torque amount can be used to realize the target contact force of the robot, i.e. to realize the desired motion process.
Thereafter, in step S1032-2, the target posture reference information and the current posture information of each joint of the robot are acquired.
The target posture reference information refers to data information for representing a desired posture of each joint of the robot at a target time, and is, for example, a joint reference angle of each joint of the robot at the target time.
The current gesture information refers to data information for representing the gesture of each joint of the robot at the current moment, and is, for example, the current angle of each joint of the robot at the current moment.
It will be appreciated that it is possible to obtain, for example, motion planning information of the robot and generate target pose reference information of the robot based on the motion planning information, and obtain current pose information of the robot via torque sensors provided at respective joints of the robot.
However, the above only gives an exemplary manner of obtaining the target pose reference information and the current pose information of each joint of the robot, and other manners may be selected to obtain the information according to actual needs, and the embodiments of the present disclosure are not limited by the specific manner of obtaining the target pose reference information and the current pose information of each joint of the robot.
Thereafter, in step S1032-3, an additional torque amount of each joint of the robot is determined based on the target pose reference information of the robot and the current pose information of the robot.
The additional torque amount is used to adjust the main torque amount such that the joints of the robot can also have a desired joint pose while achieving a desired course of motion.
For example, the additional torque amount may be generated based on the target pose information and the current pose information via a preset algorithm, or the target pose reference information, the current pose information of the robot, the skeleton structure information of the robot, and the main torque amount of each joint of the robot may be input into a preset algorithm, and each information is comprehensively considered to generate the additional torque amount.
After the main torque amount and the additional torque amount are obtained, in step S1032-4, the target torque of each joint of the robot is determined based on the main torque amount and the additional torque amount.
For example, for each joint, the main torque amount it has may be superimposed with the additional torque amount to generate the target torque of that joint. This process is illustrated, for example, by the following formula:

$$\tau_m = \tau_m^{ff} + \tau_m^{fb} \qquad 10)$$

where $\tau_m$ is the target torque of the m-th joint of the robot, $\tau_m^{ff}$ is the main torque amount of the m-th joint, and $\tau_m^{fb}$ is the additional torque amount of the m-th joint. Here $m$ is a positive integer greater than 0 and less than or equal to $M$, where $M$ is the total number of joints of the robot.
Based on the above, by comprehensively considering the overall expected motion of the robot and the expected joint posture of each joint of the robot in the process of generating the target torque of each joint, the generated target torque of each joint can be further used for enabling each joint of the robot to be at an expected joint posture angle in the overall motion process on the basis of enabling the robot to well execute the expected motion, so that more comprehensive and accurate motion control is realized.
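One common concrete choice consistent with formula 10), though not spelled out in the patent, is to take the main torque as the contact Jacobian transpose applied to the target contact force and the additional torque as a PD feedback law on the joint pose references. The sketch below makes that assumption explicit; the Jacobian, gains and all numbers are illustrative.

```python
import numpy as np

def joint_target_torques(J_c, f_target, q, dq, q_ref, dq_ref, Kp, Kd):
    """Formula 10): tau = tau_ff + tau_fb for all joints at once.

    tau_ff = J_c^T @ f_target realizes the target contact force (main torque);
    tau_fb = Kp (q_ref - q) + Kd (dq_ref - dq) drives each joint toward its
    target pose reference (additional torque).
    """
    tau_ff = J_c.T @ f_target
    tau_fb = Kp * (q_ref - q) + Kd * (dq_ref - dq)
    return tau_ff + tau_fb
```

When every joint is already at its reference angle and angular velocity, the feedback term vanishes and the commanded torque reduces to the feedforward term alone.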
In some embodiments, acquiring the target pose reference information and the current pose information of each joint of the robot comprises: acquiring motion planning information of the robot, and generating the target pose reference information of the robot based on the motion planning information, wherein the target pose reference information comprises a joint target reference angle and a joint target reference angular velocity of each joint of the robot at the target moment; and acquiring the current pose information of the robot, wherein the current pose information comprises the current joint angle and the current joint angular velocity of each joint of the robot at the current moment.
The motion planning information is global planning information of the overall motion of the robot, and may include, for example, a starting position and a stopping position of the motion of the robot, a starting angle and a stopping angle, a total time of the motion process of the robot, an average speed of the motion process of the robot, and the like. By processing the motion planning information, expected motion information of the robot at each motion time can be obtained, for example, expected joint angles and expected joint angular velocities of each joint of the robot at each motion time, namely target attitude reference information can be obtained.
Based on the above, the motion planning information of the robot is utilized to generate the target gesture reference information, and the target gesture reference information is further set to include the joint reference angles of all joints of the robot at the target time, so that the joint reference angles of all joints and the joint target reference angular velocity can be determined by comprehensively considering global motion planning, and accurate and real-time motion control on multiple layers of global motion, local motion and the like is facilitated.
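One hypothetical way to turn such a global plan (start and stop joint angles plus the total motion time) into per-instant joint reference angles and angular velocities is a smooth cubic blend; the profile below is an illustrative choice, not the patent's planner.

```python
import numpy as np

def pose_reference(t, t_total, q_start, q_stop):
    """Cubic blend between the plan's start and stop joint angles.

    Returns (q_ref, dq_ref) at time t; the smooth-step blend gives zero
    reference velocity at both ends of the motion.
    """
    s = np.clip(t / t_total, 0.0, 1.0)
    blend = 3 * s**2 - 2 * s**3            # smooth step in [0, 1]
    dblend = (6 * s - 6 * s**2) / t_total  # time derivative of the blend
    q_ref = q_start + blend * (q_stop - q_start)
    dq_ref = dblend * (q_stop - q_start)
    return q_ref, dq_ref
```

Evaluating this profile at each control instant yields the joint target reference angle and joint target reference angular velocity described above.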
The foregoing robot control method will be described in more detail with reference to specific application scenarios. Fig. 5 shows a schematic diagram of a robot control process according to an embodiment of the present disclosure.
For example, when a four-legged robot is used to achieve linear acceleration movements in a plane, the robot has, for example, four feet for contact with the ground, and accordingly has four corresponding contact forces acting on the four feet, respectively.
The robot is provided with, for example, a target planner, a motion controller and a detector. The target planner is used for generating the motion planning information of the robot based on user input and preset information of the system. As previously mentioned, this is, for example, a global motion plan of the robot, e.g. comprising global position planning information $p^{\mathrm{ref}}$, global velocity planning information $\dot{p}^{\mathrm{ref}}$ and global acceleration planning information $\ddot{p}^{\mathrm{ref}}$.
The detector comprises, for example, torque sensors arranged at the joints of the robot, a speed sensor and a displacement sensor arranged on the robot, a vision sensor arranged in the surrounding environment of the robot, and the like, and is used for detecting the current motion information (current rotational motion information and current translational motion information) of the robot, for example the current position $p_0$ and current velocity $\dot{p}_0$, the current rotation Euler-angle data $R_0$ (from which the angle data $\theta_0$ can be calculated), the current rotational angular velocity $\omega_0$, and the current robot-to-environment contact point data $r_1,\dots,r_I$, where $I$ is the total number of contact points of the robot; for the quadruped robot, $I$ has the value 4.
The motion controller, for example, acquires the motion planning information of the robot and, via a target motion reference information generation module, generates target motion reference information (target rotational motion reference information and target translational motion reference information) of the robot based on the motion planning information, including, for example, a target motion reference position $p_k^{\mathrm{ref}}$, a target motion reference velocity $\dot{p}_k^{\mathrm{ref}}$, a target rotation reference angle $\theta_k^{\mathrm{ref}}$, a target rotation reference angular velocity $\omega_k^{\mathrm{ref}}$, etc.
The motion controller further obtains the aforementioned current motion information from the detector, for example via a current motion information acquisition module, generates the target torque $\tau_k$ based on the current motion information and the linear function, and generates the target motion estimate of the robot based on the target moment and the centroid motion model of the robot, wherein the estimated value of each item of target motion information is a function of the contact force. Thereafter, an error function is generated based on the target motion estimate and the aforementioned target motion reference information, and the target contact force is determined by solving for the optimal solution of the error function using a quadratic optimization algorithm.
The motion controller also receives target pose reference information from a target planner, including, for example, the joint target reference angle and joint target reference angular velocity of each joint of the robot at the target time, where M is the total number of joints of the robot. The motion controller also receives current pose information from the detector, including the current joint angle q_{0,m} and current joint angular velocity of each joint of the robot at the current time (m = 1, …, M).
Finally, in a torque generation module, based on the target contact force, the target pose reference information, and the current pose information, and taking the skeletal structure of the robot into account, the motion controller generates the target torque used to control each joint of the robot to perform the desired motion.
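A minimal sketch of this final step combines a contact-force term mapped through stacked contact Jacobians (carrying the skeletal-structure information) with a PD term on the pose references. The signature and gain values are illustrative assumptions, not values from the patent:

```python
import numpy as np

def joint_torques(J_contact, f_target, q_ref, dq_ref, q0, dq0, kp=40.0, kd=2.0):
    """Target joint torques: a contact-force term mapped through the stacked
    contact Jacobians plus a PD posture term.

    J_contact: (3I, M) stacked contact Jacobians; f_target: (3I,) target
    contact forces; q_ref/dq_ref: joint target reference angles/velocities;
    q0/dq0: current joint angles/velocities. Gains kp, kd are assumptions."""
    tau_main = -J_contact.T @ f_target                    # main torque quantity
    tau_posture = kp * (q_ref - q0) + kd * (dq_ref - dq0)  # additional torque
    return tau_main + tau_posture
```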
Based on the above, in the present application, after the current rotational motion information of the robot and the target rotational motion reference information of the robot are obtained, the target moment is obtained from the current rotational motion information by applying an optimized linear function, and the target torque of each joint of the robot is determined from the obtained target moment. The robot can thus be accurately controlled to execute the target motion while good real-time control is maintained, and control accuracy and robustness are significantly improved without sacrificing real-time performance.
According to another aspect of the present disclosure, a robot control system is presented. Fig. 6 shows an exemplary block diagram of a robot control system 600 according to an embodiment of the present disclosure.
The robot control system 600 shown in Fig. 6 includes a rotational motion information acquisition module 610, a target moment generation module 620, and a joint torque generation module 630.
The rotational motion information obtaining module 610 is configured to perform the process of step S101 in fig. 1, and obtain current rotational motion information of the robot and target rotational motion reference information of the robot.
The current rotational movement information refers to data information for representing a rotational movement state of the robot at a current time. In the application, the current rotation movement information comprises at least one of a current rotation angle of the robot and a current rotation angular velocity of the robot. However, it should be appreciated that the current rotational motion information may also include, for example, the current rotational angular acceleration of the robot, etc., as desired.
The target rotational motion reference information refers to data information characterizing the desired rotational motion state of the robot at a target time (which may be, for example, the time immediately following the current time, according to actual needs). In the present application, the target rotational motion reference information comprises at least one of a robot target rotation reference angle, a robot target rotation reference angular velocity, and a robot target rotation reference angular acceleration. However, it should be appreciated that the target rotational motion reference information may also include other parameters as desired. Embodiments of the present disclosure are not limited by the specific composition of the target rotational motion reference information.
For example, current rotational motion information of the robot may be acquired via torque sensors provided at joints of the robot and vision sensors provided in the surrounding environment of the robot, and target rotational motion reference information of the robot may be obtained based on processing motion planning information generated by a motion planner of the robot. Embodiments of the present disclosure are not limited by the particular manner in which the target rotational motion reference information and the current rotational motion information are obtained.
The target moment generation module 620 is configured to perform the process of step S102 in Fig. 1, determining, based on a linear function and according to the current rotational motion information of the robot, a target moment for controlling the target rotational motion of the robot.
The linear function is, for example, a linear expression of the target moment. Specifically, the rotational motion of the robot may be approximated via exponential coordinates, so that the nonlinear term in the existing nonlinear expression of the moment is further expanded, and an optimized linear expression is obtained by extracting the linear components therein.
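The exponential-coordinate approximation rests on the skew-symmetric (hat) map and Rodrigues' formula. The sketch below shows this standard construction and the first-order linearization exp([θ]×) ≈ I + [θ]×, which is what allows linear components to be extracted for small rotations:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]x such that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rot_exp(theta):
    """Rotation matrix from exponential coordinates theta (Rodrigues' formula)."""
    a = np.linalg.norm(theta)
    if a < 1e-9:
        return np.eye(3) + skew(theta)   # first-order (linear) approximation
    K = skew(theta / a)
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)

# For small rotations the exact map and its linearisation agree closely,
# which is what keeping only linear components relies on.
theta = np.array([0.01, -0.02, 0.005])
assert np.allclose(rot_exp(theta), np.eye(3) + skew(theta), atol=1e-3)
```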
The joint torque generation module 630 is configured to perform the process of step S103 in Fig. 1, determining a target torque for each joint of the robot based on the target moment, the current rotational motion information of the robot, and the target rotational motion reference information of the robot.
Based on the above, in the present application, after the current rotational motion information of the robot and the target rotational motion reference information of the robot are obtained, the target moment is obtained from the current rotational motion information by applying the optimized linear function, and the target torque of each joint of the robot is determined from the obtained target moment, so that the robot can be accurately controlled to execute the target motion while good real-time control is maintained, significantly improving control accuracy and robustness. Specifically, because the current rotational motion information includes at least one of the current rotation angle and the current rotational angular velocity of the robot, the linear function can represent the target moment well from multi-dimensional, multi-level motion information, improving the accuracy of the generated target moment. In addition, because the target rotational motion reference information includes at least one of the robot target rotation reference angle, the robot target rotation reference angular velocity, and the robot target rotation reference angular acceleration, the target rotational motion can be described more comprehensively, which facilitates generating the target torque of each joint of the robot from that information and improves the accuracy of the generated target torque.
In some embodiments, the linear function includes at least two of a robot target rotational angular acceleration parameter, a robot angular velocity gain parameter, a robot angular gain parameter, a current rotational angular velocity parameter of the robot.
The robot angular velocity gain parameter is the difference between the target rotational angular velocity of the robot and the current rotational angular velocity of the robot; the angle gain parameter is the difference between the target rotation angle of the robot and the current rotation angle of the robot. Both are described in detail below with reference to specific embodiments.
By including at least two of the robot target rotational angular acceleration parameter, the robot angular velocity gain parameter, the robot angle gain parameter, and the robot current rotational angular velocity parameter in the linear function, more linear components reflecting the target moment are retained than in current approaches, which use only the target rotational angular acceleration parameter when calculating the target moment. The linearized calculation thus remains fast and light in computation, while the accuracy of the calculated target moment is significantly improved, which benefits subsequent motion control of the robot based on the target moment.
In some embodiments, the linear function includes a robot target rotational angular acceleration parameter, a robot angular velocity gain parameter, a robot angular gain parameter, a robot current rotational angular velocity parameter.
The robot angular velocity gain parameter is the difference between the robot target rotational angular velocity and the current rotational angular velocity of the robot, and the angle gain parameter is the difference between the robot target rotation angle and the current rotation angle of the robot.
Based on the above, in the present application, by setting the linear function to include the robot target rotational angular acceleration parameter, the robot angular velocity gain parameter, the robot angle gain parameter, and the robot current rotational angular velocity parameter, more linear components of the nonlinear term are extracted and retained by optimizing the linear function (via the exponential-coordinate approximation), whereas the prior art includes only the target rotational angular acceleration parameter and simply discards the nonlinear term. This helps generate the target moment more accurately, reflects the target motion state of the robot more comprehensively, and improves the accuracy of motion control while maintaining good real-time performance.
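Under these definitions, one consistent reading of the optimized linear function is the rigid-body moment equation with feedback terms on the angular-velocity and angle gains, plus the gyroscopic term written with the skew-symmetric matrix of the current angular velocity. The expression below is a hedged reconstruction from the variable definitions in the text, not quoted from the patent:

```python
import numpy as np

def target_moment(I0, domega_ref, omega_k, omega_0, theta_k, theta_0, P, Q):
    """Target moment tau_k = I0 (domega_ref + P*dOmega + Q*dTheta) + w0^ I0 w0.

    I0: (3, 3) moment of inertia at the current time; domega_ref: target
    rotational angular acceleration; omega_k/omega_0: target/current angular
    velocity; theta_k/theta_0: target/current rotation angle; P, Q: gain
    parameters chosen as needed. The exact form is a reconstruction."""
    d_omega = omega_k - omega_0           # angular-velocity gain parameter
    d_theta = theta_k - theta_0           # angle gain parameter
    w_hat = np.array([[0.0, -omega_0[2], omega_0[1]],   # skew-symmetric of w0
                      [omega_0[2], 0.0, -omega_0[0]],
                      [-omega_0[1], omega_0[0], 0.0]])
    return I0 @ (domega_ref + P * d_omega + Q * d_theta) + w_hat @ (I0 @ omega_0)
```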
In some embodiments, the robotic control system is capable of performing the methods as described above, with the functions as described above.
According to another aspect of the present disclosure, a robot is presented. The robot has the robot control system described above, can execute the robot control method described above, and realizes the robot control functions described above.
In addition, the robot may further include a bus, a memory, a sensor assembly, a controller, a communication module, an input-output device, and the like.
A bus may be a circuit that interconnects the components of the robot and conveys communication information (e.g., control messages or data) among the components.
The sensor assembly may be used to sense the physical world, including for example cameras, infrared sensors, ultrasonic sensors, and the like. The sensor assembly may further comprise means for measuring the current operation and movement state of the robot, such as hall sensors, laser position sensors, or strain sensors.
The controller is used to control the operation of the robot, for example in an artificial intelligence control manner.
The controller comprises, for example, a processing means. The processing means may include a microprocessor, digital signal processor ("DSP"), application specific integrated circuit ("ASIC"), field programmable gate array, state machine, or other processing device for processing electrical signals received from the sensor lines. Such processing devices may include programmable electronics, such as PLCs, programmable interrupt controllers ("PICs"), programmable logic devices ("PLDs"), programmable read-only memories ("PROMs"), electronically programmable read-only memories, and the like.
The communication module may be connected to a network, for example, by wire or wirelessly, to facilitate communication with the physical world (e.g., a server). The communication module may be wireless and may include a wireless interface, such as an IEEE 802.11, Bluetooth, or wireless local area network ("WLAN") transceiver, or a radio interface for accessing a cellular telephone network (e.g., a transceiver/antenna for accessing a CDMA, GSM, UMTS, or other mobile communication network). In another example, the communication module may be wired and may include an interface such as Ethernet, USB, or IEEE 1394.
The input-output means may transfer, for example, commands or data input from a user or any other external device to one or more other components of the robot, or may output commands or data received from one or more other components of the robot to the user or other external device.
Multiple robots may be grouped into a robotic system to cooperatively accomplish a task, the multiple robots being communicatively connected to a server and receiving cooperative robot instructions from the server.
According to another aspect of the present invention there is also provided a non-volatile computer readable storage medium having stored thereon computer readable instructions which when executed by a computer can perform a method as described above.
Program portions of the technology may be considered "products" or "articles of manufacture" in the form of executable code and/or associated data, embodied in or carried on a computer-readable medium. A tangible, persistent storage medium may include any memory or storage used by a computer, processor, or similar device or related module, such as various semiconductor memories, tape drives, or disk drives capable of providing storage for the software.
All or a portion of the software may sometimes be communicated over a network, such as the Internet or another communication network. Such communication may load the software from one computer device or processor into another. Thus another medium capable of carrying the software elements, such as optical, electrical, or electromagnetic waves propagating through cables, optical fiber, or air, may also be used as the physical connection between local devices. Physical media used for such carrier waves, whether electrical, wireless, or optical, may likewise be considered media bearing the software. Unless limited to a tangible "storage" medium, other terms used herein for a computer- or machine-"readable medium" mean any medium that participates in providing instructions to a processor for execution.
The application uses specific words to describe embodiments of the application. Reference to "a first/second embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is associated with at least one embodiment of the application. Thus, it should be emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various positions in this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the application may be combined as suitable.
Furthermore, those skilled in the art will appreciate that the various aspects of the application are illustrated and described in the context of a number of patentable categories or circumstances, including any novel and useful procedures, machines, products, or materials, or any novel and useful modifications thereof. Accordingly, aspects of the application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.) or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," module, "" engine, "" unit, "" component, "or" system. Furthermore, aspects of the application may take the form of a computer product, comprising computer-readable program code, embodied in one or more computer-readable media.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing is illustrative of the present invention and is not to be construed as limiting thereof. Although a few exemplary embodiments of this invention have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this invention. Accordingly, all such modifications are intended to be included within the scope of this invention as defined in the following claims. It is to be understood that the foregoing is illustrative of the present invention and is not to be construed as limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The invention is defined by the claims and their equivalents.

Claims (15)

1. A robot control method, the method comprising:
obtaining current rotational motion information of the robot and target rotational motion reference information of the robot;
according to the current rotational motion information of the robot, approximating, by a linear function, the nonlinear quantity in an ideal moment for controlling a target rotational motion of the robot, to obtain a target moment for controlling the target rotational motion of the robot; and
determining a target torque of each joint of the robot based on the target moment, the current rotational motion information of the robot, and the target rotational motion reference information of the robot;
wherein the current rotational motion information comprises at least one of a current rotation angle of the robot and a current rotational angular velocity of the robot; and the target rotational motion reference information comprises at least one of a robot target rotation reference angle, a robot target rotation reference angular velocity, and a robot target rotation reference angular acceleration.

2. The robot control method according to claim 1, wherein the linear function comprises at least two of: a robot target rotational angular acceleration parameter, a robot angular velocity gain parameter, a robot angle gain parameter, and a robot current rotational angular velocity parameter;
wherein the robot angular velocity gain parameter is the difference between the robot target rotational angular velocity and the robot current rotational angular velocity, and the angle gain parameter is the difference between the robot target rotation angle and the robot current rotation angle.

3. The robot control method according to claim 1 or 2, wherein the linear function comprises: the robot target rotational angular acceleration parameter, the robot angular velocity gain parameter, the robot angle gain parameter, and the robot current rotational angular velocity parameter.

4. The robot control method according to claim 1, further comprising:
obtaining current translational motion information of the robot and target translational motion reference information of the robot;
wherein the target translational motion reference information of the robot comprises: a robot target motion reference position, a robot target motion reference speed, and a robot target motion reference acceleration; and the current translational motion information of the robot comprises: a robot current motion position, a robot current motion speed, and a robot current motion acceleration.

5. The robot control method according to claim 3, wherein the linear function is:
τ_k = I_0(ω̇_k + P·Δω_k + Q·Δθ_k) + ω̂_0 I_0 ω_0,
wherein τ_k is the target moment at the target time t_k; I_0 is the moment-of-inertia value at the current time t_0; ω̇_k is the robot target rotational angular acceleration; P and Q are parameter quantities selected according to actual needs; Δω_k is the robot angular velocity gain parameter, where Δω_k = ω_k − ω_0, with ω_k the robot target rotational angular velocity and ω_0 the robot current rotational angular velocity; Δθ_k is the robot angle gain parameter, where Δθ_k = θ_k − θ_0, with θ_k the robot target rotation angle and θ_0 the robot current rotation angle; and ω̂_0 denotes the conversion of the current angular velocity of the robot into a skew-symmetric matrix.

6. The robot control method according to claim 4, wherein determining the target torque of each joint of the robot based on the target moment, the current rotational motion information of the robot, and the target rotational motion reference information of the robot comprises:
determining a target contact force of the robot according to the target moment, current motion information of the robot, and target motion reference information of the robot; and
determining the target torque of each joint of the robot based on the target contact force, the skeletal structure of the robot, and posture information of the robot;
wherein the current motion information of the robot comprises the current rotational motion information and the current translational motion information, and the target motion reference information of the robot comprises the target rotational motion reference information and the target translational motion reference information.

7. The robot control method according to claim 6, wherein determining the target contact force of the robot according to the target moment, the current motion information of the robot, and the target motion reference information of the robot comprises:
generating a target motion estimate of the robot based on a robot center-of-mass motion model according to the current motion information of the robot and the target moment, wherein the target motion estimate is a function of the contact force of the robot;
generating, based on the target motion reference information and the target motion estimate, an error function between the target motion reference information and the target motion estimate; and
determining the target contact force based on the error function;
wherein the target motion estimate comprises at least a part of: a target motion position estimate, a target motion speed estimate, a target motion acceleration estimate, a target rotation angle estimate, a target rotational angular velocity estimate, and a target rotational angular acceleration estimate.

8. The robot control method according to claim 7, wherein determining the target contact force based on the error function comprises: optimizing the error function based on a quadratic optimization algorithm, and determining the contact force that minimizes the error function as the target contact force.

9. The robot control method according to claim 6, wherein determining the target torque of each joint of the robot based on the target contact force, the skeletal structure of the robot, and the posture information of the robot comprises:
determining a main torque quantity of each joint of the robot based on the target contact force and the skeletal structure of the robot;
obtaining target posture reference information and current posture information of each joint of the robot;
determining an additional torque quantity of each joint of the robot based on the target posture reference information of the robot and the current posture information of the robot; and
determining the target torque of each joint of the robot based on the main torque quantity and the additional torque quantity.

10. The robot control method according to claim 9, wherein obtaining the target posture reference information and the current posture information of each joint of the robot comprises:
acquiring motion planning information of the robot, and generating the target posture reference information of the robot based on the motion planning information, the target posture reference information comprising a joint target reference angle and a joint target reference angular velocity of each joint of the robot; and
acquiring the current posture information of the robot, the current posture information comprising a current joint angle and a current joint angular velocity of each joint.

11. A robot control system, the system comprising:
a rotational motion information acquisition module configured to obtain current rotational motion information of the robot and target rotational motion reference information of the robot;
a target moment generation module configured to approximate, by a linear function and according to the current rotational motion information of the robot, the nonlinear quantity in an ideal moment for controlling a target rotational motion of the robot, to obtain a target moment for controlling the target rotational motion of the robot; and
a joint torque generation module configured to determine a target torque of each joint of the robot based on the target moment, the current rotational motion information of the robot, and the target rotational motion reference information of the robot;
wherein the current rotational motion information comprises at least one of a current rotation angle of the robot and a current rotational angular velocity of the robot; and the target rotational motion reference information comprises at least one of a robot target rotation reference angle, a robot target rotation reference angular velocity, and a robot target rotation reference angular acceleration.

12. The robot control system according to claim 11, wherein the linear function comprises at least two of: a robot target rotational angular acceleration parameter, a robot angular velocity gain parameter, a robot angle gain parameter, and a robot current rotational angular velocity parameter;
wherein the robot angular velocity gain parameter is the difference between the robot target rotational angular velocity and the robot current rotational angular velocity, and the angle gain parameter is the difference between the robot target rotation angle and the robot current rotation angle.

13. The robot control system according to claim 11, wherein the linear function comprises: the robot target rotational angular acceleration parameter, the robot angular velocity gain parameter, the robot angle gain parameter, and the robot current rotational angular velocity parameter;
wherein the robot angular velocity gain parameter is the difference between the robot target rotational angular velocity and the robot current rotational angular velocity, and the angle gain parameter is the difference between the robot target rotation angle and the robot current rotation angle.

14. A robot, comprising the robot control system according to any one of claims 11-13, and realizing motion control of the robot by the robot control method according to any one of claims 1-10.

15. A computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a computer, perform the method according to any one of claims 1-10.
CN202110632729.0A 2021-06-07 2021-06-07 Robot control method, system, robot and medium Active CN115502965B (en)

Publications (2)

Publication Number Publication Date
CN115502965A CN115502965A (en) 2022-12-23
CN115502965B true CN115502965B (en) 2024-12-06



Also Published As

Publication number Publication date
CN115502965A (en) 2022-12-23

Similar Documents

Publication Publication Date Title
CN107618678B (en) Joint Estimation Method of Attitude Control Information under Satellite Attitude Angle Deviation
CN109724597B (en) An inertial navigation solution method and system based on function iterative integration
CN114641375A (en) dynamic programming controller
CN111547176B (en) Self-balancing robot control method and system, self-balancing robot and medium
JP6781101B2 (en) Non-linear system control method, biped robot control device, biped robot control method and its program
Guo et al. A small opening workspace control strategy for redundant manipulator based on RCM method
Goodarzi et al. Global formulation of an extended Kalman filter on SE (3) for geometric control of a quadrotor UAV
CN115533915B (en) An active contact detection and control method for aerial working robots in uncertain environments
CN109655059B (en) Vision-inertia fusion navigation system and method based on theta-increment learning
Miao et al. Geometric formation tracking of quadrotor UAVs using pose-only measurements
Li et al. Dynamic visual servoing of a 6-RSS parallel robot based on optical CMM
CN110967017A (en) A Co-location Method for Rigid-body Cooperative Handling of Dual Mobile Robots
WO2024021744A1 (en) Method and apparatus for controlling legged robot, electronic device, computer-readable storage medium, computer program product and legged robot
WO2024021767A1 (en) Method, apparatus and device for controlling legged robot, legged robot, computer-readable storage medium and computer program product
Fahimi et al. An alternative closed-loop vision-based control approach for Unmanned Aircraft Systems with application to a quadrotor
Gu et al. Geometry-based adaptive tracking control for an underactuated small-size unmanned helicopter
CN118493398B (en) Self-adaptive fixed time control method for mechanical arm
CN114355959B (en) Attitude output feedback control method, device, medium and equipment for aerial robot
CN115502965B (en) Robot control method, system, robot and medium
CN113110107B (en) Unmanned aerial vehicle flight control simulation system, device and storage medium
CN118859980A (en) A spacecraft formation attitude and orbit control method and device
Tong et al. Cascade-LSTM-based visual-inertial navigation for magnetic levitation haptic interaction
CN115963858A (en) Unmanned aerial vehicle flight control method, device, equipment and storage medium
CN115562299A (en) Navigation method, device, mobile robot and medium of a mobile robot
CN117095310A (en) Method for acquiring visual servo model, visual servo method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant