Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without making any inventive effort also fall within the scope of the invention.
As used in the specification and in the claims, the terms "a," "an," and/or "the" do not specifically denote the singular, but may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, as a method or apparatus may also include other steps or elements.
Although the present application makes various references to certain modules in a system according to embodiments of the present application, any number of different modules may be used and run on a user terminal and/or server. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
A flowchart is used in the present application to describe the operations performed by a system according to embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed in order precisely. Rather, the various steps may be processed in reverse order or simultaneously, as desired. Also, other operations may be added to or removed from these processes.
Artificial intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate and extend human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain optimal results. In other words, artificial intelligence is an integrated technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence involves research on the design principles and implementation methods of various intelligent machines, enabling the machines to have the functions of sensing, reasoning and decision-making.
The application relates to application of artificial intelligence in motion control, in particular to a robot control method based on artificial intelligence, which utilizes an exponential scaling quantity to represent related nonlinear parameter quantity (target moment parameter) of a robot in the process of rotating motion, so that linear components in the nonlinear parameter quantity can be further obtained and reserved, and then linearization of a mass center motion model of the robot can be better realized based on the target moment parameter, and accuracy and reliability of the robot motion control are improved on the basis of realizing real-time control based on the linearization model.
The robot provided by the application is a robot capable of realizing autonomous motion control, and can have various forms according to actual needs. The robot may be, for example, a scooter, a bicycle, or other type of robotic device. Embodiments of the present disclosure are not limited by the particular type of robot and its composition.
At present, when motion control of a robot is performed, target motion reference information of the robot is first generated from the motion planning information of a motion planner; then a target motion estimation of the robot is determined based on a mass center motion model of the robot, and an optimal target contact force is obtained from the target motion reference information and the target motion estimation so as to control the motion of the robot, so that the actual motion of the robot accurately follows the planned target motion process. However, the rotational motion (in particular the angular motion) of the robot is nonlinear motion. If the nonlinear rotational motion parameter (the moment quantity) is applied directly, the amount of calculation in the motion control process is large and the real-time performance of the motion control is poor. And if, in the process of linearizing the moment quantity, the nonlinear quantity in the moment quantity is simply omitted and only the linear quantity is used in the motion control process, the motion control of the robot has a large error, the actual motion of the robot deviates considerably from the planned motion process, the motion control precision is poor, and the robustness is low.
Based on the above, the application provides a robot control method. The method is suitable for realizing the motion control of the robot, particularly the real-time motion control of the robot, and can flexibly and highly accurately control the motion process of the robot through an optimized linear centroid motion model by further extracting the linear component in the nonlinear parameter. Fig. 1 shows an exemplary flowchart of a robot control method 100 according to an embodiment of the present disclosure.
Referring to fig. 1, first, in step S101, current rotational motion information of the robot and target rotational motion reference information of the robot are acquired.
The current rotational movement information refers to data information for representing a rotational movement state of the robot at a current time. In the application, the current rotation movement information comprises at least one of a current rotation angle of the robot and a current rotation angular velocity of the robot. However, it should be appreciated that the current rotational motion information may also include, for example, the current rotational angular acceleration of the robot, etc., as desired.
The target rotational movement reference information refers to data information for characterizing a desired state of the rotational movement of the robot at a target moment (which may be, for example, the moment next to the current moment, according to actual needs). In the present application, the target rotational movement reference information comprises at least one of a robot target rotation reference angle, a robot target rotation reference angular velocity and a robot target rotation reference angular acceleration. However, it should be appreciated that the target rotational movement reference information may also include other parameters as desired. Embodiments of the present disclosure are not limited by the specific composition of the target rotational movement reference information.
For example, current rotational motion information of the robot may be acquired via torque sensors provided at joints of the robot and vision sensors provided in the surrounding environment of the robot, and target rotational motion reference information of the robot may be obtained based on processing motion planning information generated by a motion planner of the robot.
However, it should be appreciated that the above is given as only one example of acquiring the target rotational motion reference information and the current rotational motion information of the robot, and embodiments of the present disclosure are not limited by the specific manner in which the target rotational motion reference information and the current rotational motion information are acquired.
Thereafter, in step S102, a target moment for controlling a target rotational movement of the robot is determined based on a linear function according to the current rotational movement information of the robot.
The linear function is, for example, a linear expression of the target moment. Specifically, the rotational motion process of the robot may be approximated via exponential coordinates, so that the nonlinear terms in the existing nonlinear expression of the torque parameter are further expanded, and an optimized linear expression is obtained by extracting the linear components therein.
After the target moment is obtained, in step S103, a target torque of each joint of the robot is determined based on the target moment, the current rotational movement information of the robot, and the target rotational movement reference information of the robot.
Specifically, the target contact force of the robot may be determined first, for example, based on the target moment, the current rotational motion information of the robot, and the target rotational motion reference information of the robot. Thereafter, a target torque of each joint of the robot is determined based on, for example, the target contact force, the skeletal structure of the robot, and the posture information of the robot.
It should be appreciated that the above is given only as an exemplary method of achieving the target torque, and embodiments of the present disclosure are not limited by the particular process and manner in which the target torque is obtained.
Based on the above, in the present application, on the basis of obtaining the current rotational movement information of the robot and the target rotational movement reference information of the robot, the target moment is obtained from the current rotational movement information by applying the optimized linear function, and the target torque of each joint of the robot is determined according to the obtained target moment. In this way, the robot can be accurately controlled to execute the target movement process while good real-time control of the robot is realized, and the control accuracy and robustness of the robot are significantly improved while real-time performance is taken into account. Specifically, the current rotational movement information comprises at least one of the current rotation angle and the current rotational angular velocity of the robot, so that when the target moment is calculated by applying the linear function, the target moment can be well represented based on multi-dimensional, multi-level current movement information, which improves the accuracy of the generated target moment. In addition, the target rotational movement reference information comprises at least one of the robot target rotation reference angle, the robot target rotation reference angular velocity and the robot target rotation reference angular acceleration, so that the target rotational movement process can be described more comprehensively, which facilitates generating the target torque of each joint of the robot based on the target rotational movement reference information and improves the accuracy of the generated target torque.
In some embodiments, the linear function includes at least two of a robot target rotational angular acceleration parameter, a robot angular velocity gain parameter, a robot angular gain parameter, a current rotational angular velocity parameter of the robot.
The angular gain parameter is a difference between a target rotation angle of the robot and a current rotation angle of the robot, and is described in detail below with reference to specific embodiments.
By including at least two of the robot target rotational angular acceleration parameter, the robot angular velocity gain parameter, the robot angle gain parameter and the robot current rotational angular velocity parameter in the linear function, more linear components capable of reflecting the target moment are retained than in the current approach, which uses only the robot target rotational angular acceleration parameter when calculating the target moment. The further expanded linear expression of the target moment thus keeps the computation linearized, improving the calculation speed and reducing the calculation amount, while significantly improving the accuracy of the calculated target moment, which is beneficial to realizing subsequent good motion control of the robot based on the target moment.
In some embodiments, the linear function includes a robot target rotational angular acceleration parameter, a robot angular velocity gain parameter, a robot angular gain parameter, a robot current rotational angular velocity parameter.
The robot angular velocity gain parameter is a difference value between a robot target rotation angular velocity and a current rotation angular velocity of the robot, and the angle gain parameter is a difference value between a robot target rotation angle and the current rotation angle of the robot.
For example, the linear function will be described in more detail below in connection with example expressions of the centroid dynamics equations and the linear function of the robot.
For example, for a course of motion of a robot, the centroid dynamics equations of the robot are known as follows:

$\sum_{i=1}^{N} f_i + mg = m\ddot{p}$    1a)

$\dot{L} = \tau = \sum_{i=1}^{N} \left([r_i]_\times - [p]_\times\right) f_i$    1b)
Equation 1 a) characterizes the translational motion process of the robot at the current moment, and equation 1 b) is the euler equation of the robot, which is used for characterizing the rotational motion process of the robot at the current moment.
And wherein $f_i$ is the contact force applied to the robot at the $i$-th contact point at that moment, $m$ is the mass of the robot, $g$ is the gravitational acceleration, $p$ is the position of the robot at that moment, $\dot{p}$ is the velocity of the robot at that moment, and $\ddot{p}$ is the acceleration of the robot at that moment. And wherein $i$ is a positive integer greater than 0 and not greater than $N$, $N$ being the total number of contact points of the robot. It should be understood that the contact points are, for example, points of action at which the robot contacts the surrounding environment, which are predetermined according to the actual configuration of the robot and its manner of action; for example, when the robot is a four-legged robot and it interacts with the surrounding environment via its four feet, the total number of contact points of the robot is, for example, 4, and each contact point is the portion of the robot's foot in contact with the surrounding environment. And wherein $[r_i]_\times$ converts the position of the $i$-th contact point of the robot at that moment into a skew-symmetric matrix, $[p]_\times$ converts the position of the centroid of the robot at that moment into a skew-symmetric matrix, $L$ is the angular momentum of the robot, and $\tau$ is the moment of the robot.
And wherein the moment of the robot can be further expressed, for example, as:

$\tau = I\dot{\omega} + [\omega]_\times I\omega$    2)

wherein $\tau$ is the torque parameter of the robot, $\dot{\omega}$ is the angular acceleration vector of the robot at the current moment, $I$ is the moment of inertia of the robot, $\omega$ is the angular velocity of the robot, and $[\omega]_\times$ converts the angular velocity of the robot into a skew-symmetric matrix for calculating the cross product of the angular velocity vector and the moment of inertia.
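For illustration only, the moment expression of equation 2) can be sketched in code; the helper names `skew` and `euler_torque` are hypothetical and not taken from this application:

```python
import numpy as np

def skew(v):
    """Return the skew-symmetric matrix [v]x such that skew(v) @ u == np.cross(v, u)."""
    return np.array([
        [0.0,   -v[2],  v[1]],
        [v[2],   0.0,  -v[0]],
        [-v[1],  v[0],  0.0],
    ])

def euler_torque(I, omega, omega_dot):
    """Euler's equation 2): tau = I @ omega_dot + [omega]x @ (I @ omega)."""
    return I @ omega_dot + skew(omega) @ (I @ omega)
```

The skew matrix is the same device used throughout the equations above to express cross products as matrix products.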
Since the rotational motion of the robot is a nonlinear motion, both the $I\dot{\omega}$ term and the $[\omega]_\times I\omega$ term in the expression of the torque parameter are nonlinear quantities. In this case, when the target torque at the target time $t_k$ is obtained, the formula of the torque parameter at the target time $t_k$ is as follows:

$\tau_k = I_k\dot{\omega}_k + [\omega_k]_\times I_k\omega_k$    3)

wherein the subscript $k$ characterizes the parameter amount at time $t_k$, and the meaning of each parameter amount is as previously described. In the current motion control process, the $I_k\dot{\omega}_k$ term of the torque parameter is usually approximated as $I_0\dot{\omega}_k^d$, wherein $I_0$ is the rotational inertia value at the current time $t_0$ and $\dot{\omega}_k^d$ is an estimate of the target angular acceleration at the target time $t_k$, and the $[\omega_k]_\times I_k\omega_k$ term is directly ignored. The expression of the target torque at the target time $t_k$ calculated in this way contains only $I_0\dot{\omega}_k^d$. Because the nonlinear quantity in the torque parameter is ignored, the accuracy of the calculated torque parameter is greatly reduced, which is unfavorable for realizing good motion control.
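A small numerical sketch (with hypothetical inertia and angular-velocity values, purely for illustration) shows the error introduced by dropping the gyroscopic term:

```python
import numpy as np

# Hypothetical values: the conventional scheme keeps only I0 @ omega_dot_d
# and drops the gyroscopic term cross(omega, I0 @ omega), which is what
# introduces the error discussed above.
I0 = np.diag([0.4, 0.5, 0.6])              # inertia at the current time t0
omega = np.array([0.5, -1.0, 0.8])         # angular velocity
omega_dot_d = np.array([0.2, 0.1, -0.3])   # estimated target angular acceleration

tau_full = I0 @ omega_dot_d + np.cross(omega, I0 @ omega)
tau_naive = I0 @ omega_dot_d               # nonlinear term ignored

error = np.linalg.norm(tau_full - tau_naive)  # equals |omega x (I0 @ omega)|
```

Whenever the angular velocity is not aligned with a principal axis, `error` is nonzero, so the naive linearization systematically mispredicts the torque.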
Based on this, in the present application, the rotational motion process of the robot is approximated by exponential coordinates. Specifically, the angular motion process (rotational motion process) in the current robot motion process may be developed, for example, via the following formula:

$R_k = R_{k-1}\exp\!\left([\delta\theta_k]_\times\right)$    4)

wherein $R_k$ characterizes the angular orientation information of the robot at the target time $t_k$ (i.e., its Euler angle information, e.g., including roll angle, pitch angle and yaw angle), $R_{k-1}$ characterizes the angular orientation information of the robot at time $t_{k-1}$, and $\delta\theta_k$ characterizes the exponential coordinates of the amount of rotation between time $t_{k-1}$ and time $t_k$.
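The orientation update of equation 4) can be sketched via Rodrigues' formula for the matrix exponential of a skew-symmetric matrix; the helper names `exp_so3` and `step_orientation` are hypothetical:

```python
import numpy as np

def exp_so3(delta_theta):
    """Rodrigues' formula: map exponential coordinates (a rotation vector)
    to a rotation matrix, R = exp([delta_theta]x)."""
    angle = np.linalg.norm(delta_theta)
    if angle < 1e-12:
        return np.eye(3)
    axis = delta_theta / angle
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def step_orientation(R_prev, delta_theta):
    """Orientation update as in equation 4): R_k = R_{k-1} @ exp([delta_theta_k]x)."""
    return R_prev @ exp_so3(delta_theta)
```

For small `delta_theta`, `exp_so3` is close to `I + [delta_theta]x`, which is exactly the first-order behavior that the Taylor expansion below exploits.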
Based on the above, by approximating the angular orientation information at the target time $t_k$ via the exponential coordinates, equation 4) is obtained; by further performing a Taylor expansion on this expression, ignoring the higher-order terms therein and retaining the linear components, the linear function of the present application can be obtained, which is, for example:

$\tau_k = I_0\dot{\omega}_k^d + Q\,\Delta\omega_k + P\,\Delta\theta_k + [\omega_0]_\times I_0\omega_0$    5)

wherein the subscript $k$ characterizes the parameter at the target time $t_k$ and the subscript $0$ characterizes the parameter at the current time $t_0$. And wherein $\tau_k$ is the target torque, $I_0$ is the rotational inertia value at the current time $t_0$, $\dot{\omega}_k^d$ is the robot target rotational angular acceleration, $P$ and $Q$ are parameters selected according to actual needs, $\Delta\omega_k$ is the robot angular velocity gain parameter, $\Delta\omega_k = \omega_k - \omega_0$, wherein $\omega_k$ is the robot target rotational angular velocity and $\omega_0$ is the current rotational angular velocity of the robot, $\Delta\theta_k$ is the robot angle gain parameter, $\Delta\theta_k = \theta_k - \theta_0$, wherein $\theta_k$ is the robot target rotation angle and $\theta_0$ is the current rotation angle of the robot, and $[\omega_0]_\times$ converts the current angular velocity of the robot into a skew-symmetric matrix for calculating the cross product of the angular velocity vector and the moment of inertia.

And wherein the $I_0\dot{\omega}_k^d$ term is the robot target rotational angular acceleration parameter term, the $Q\Delta\omega_k$ term is the robot angular velocity gain parameter term, the $P\Delta\theta_k$ term is the robot angle gain parameter term, and the $[\omega_0]_\times I_0\omega_0$ term is the current rotational angular velocity parameter term of the robot.
Based on the above, in the present application, the linear function is set to include the robot target rotational angular acceleration parameter, the robot angular velocity gain parameter, the robot angle gain parameter, and the current rotational angular velocity parameter of the robot. Compared with the prior art, which includes only the robot target rotational angular acceleration parameter and directly omits the nonlinear term (the $[\omega_k]_\times I_k\omega_k$ term described above), the optimized linear function (obtained via the exponential coordinate approximation) further extracts and retains more linear components of the nonlinear term. This is beneficial to generating the target moment more accurately, can more comprehensively and accurately reflect the target motion state of the robot, and improves the accuracy of motion control while maintaining good real-time performance.
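The linear function 5) can be sketched as a small routine; the function name `target_torque` is hypothetical, and `P` and `Q` are here taken as gain matrices (scalars would work equally well), which is an assumption of this sketch:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]x with skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def target_torque(I0, omega_dot_d, Q, P, omega_k, omega_0, theta_k, theta_0):
    """Linearized target torque of equation 5):
    tau_k = I0 @ omega_dot_d + Q @ (omega_k - omega_0)
          + P @ (theta_k - theta_0) + [omega_0]x @ (I0 @ omega_0)."""
    d_omega = omega_k - omega_0   # angular-velocity gain parameter
    d_theta = theta_k - theta_0   # angle gain parameter
    return (I0 @ omega_dot_d + Q @ d_omega + P @ d_theta
            + skew(omega_0) @ (I0 @ omega_0))
```

Every term is linear in the decision quantities, so the expression can be evaluated, and later inverted, cheaply inside a real-time control loop.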
In some embodiments, the robot control method further comprises obtaining current translational motion information of the robot and target translational motion reference information of the robot.
The current translational motion information of the robot refers to data information for representing the motion state of the robot at the current moment, and for example, the current translational motion information of the robot comprises the current motion position of the robot, the current motion speed of the robot and the current motion acceleration of the robot.
The target translational motion reference information of the robot refers to data information for characterizing a desired state of translational motion of the robot at a target moment (which may be, for example, a moment next to a current moment according to actual needs). According to actual needs, the target translational motion reference information of the robot comprises a robot target motion reference position, a robot target motion reference speed and a robot target motion reference acceleration.
For example, the current translational motion information of the robot may be acquired by processing based on data in a motion sensor provided inside the robot or a vision sensor provided in the surrounding of the robot, and the target translational motion reference information of the robot may be obtained by processing based on motion planning information generated by a motion planner of the robot. It should be appreciated that embodiments of the present disclosure are not limited by the particular sources and manner of acquisition of the target translational motion reference information and the current translational motion information.
Based on the above, in the present application, on the basis of acquiring the rotational motion information of the robot (including the current rotational motion information and the target rotational motion reference information), the translational motion information of the robot (including the current translational motion information and the target translational motion reference information) is further acquired, so that the current state of the translational and rotational motion processes of the robot and the desired state at the target moment can be obtained, which facilitates subsequently obtaining the contact force between the robot and the environment based on the robot centroid motion model and realizing the motion control of the robot.
In some embodiments, step S103 described above may be described in more detail, for example. Fig. 2 shows an exemplary flowchart of a process S103 of determining target torques of respective joints of the robot according to an embodiment of the present disclosure.
Referring to fig. 2, first, in step S1031, a target contact force of the robot is determined based on the target moment, current motion information of the robot, and target motion reference information of the robot.
The current motion information of the robot comprises current rotation motion information and current translation motion information, and the target motion reference information of the robot comprises target rotation motion reference information and target translation motion reference information. The specific meaning of each information is as described above, and will not be described here again.
The target contact force refers to the interaction force between the robot and the surrounding environment when the robot is regarded as a rigid body at the target motion moment. The target contact force is associated with the mass center motion state of the robot at the target moment, so that the torque of each joint can be further determined and motion control of the robot can be realized based on the determined target contact force. It will be appreciated that, as described above in connection with equation 1b), the target contact forces correspond one-to-one to the contact points of the robot with the surrounding environment.
For example, according to the actual situation, the target contact force may be a single contact force; for example, when the robot is of a single-wheel construction, the robot has only one contact point with the surroundings and only one contact force. Alternatively, the target contact force may comprise a plurality of contact forces; specifically, for example, when the robot is a four-legged robot and contacts the surrounding environment (here, for example, a table top) via its four feet, the robot has four contact points with the surrounding environment and has, for each contact point, a contact force corresponding to that contact point. Embodiments of the present disclosure are not limited by the specific number of target contact forces.
For example, in some embodiments, in the process of obtaining the target contact force, a target motion estimation of the robot may be generated based on a robot mass center motion model according to the current motion information of the robot and the target moment, wherein the target motion estimation is a function of the contact force of the robot; an error function of the target motion reference information and the target motion estimation is then generated based on the target motion reference information and the target motion estimation; and the target contact force is determined based on the error function.
It should be appreciated that the above is given as only one specific example of determining the target contact force, and embodiments of the present disclosure are not limited by the specific manner in which the target contact force is determined.
Thereafter, in step S1032, the target torque of each joint of the robot is determined based on the target contact force, the skeletal structure of the robot, and the posture information of the robot.
For example, the torque of each joint of the robot may be generated from the target contact force directly based on the skeletal structure of the robot and the current posture information of the robot. Or the target posture reference information (the target posture reference information refers to data information used for representing the expected state of each joint posture of the robot at the target moment) can be comprehensively considered on the basis of the above, the target posture reference information, the current posture information and the skeleton structure of the robot are integrated, and the target torque of each joint of the robot is generated based on the target contact force by using a preset algorithm.
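One common realization of the mapping described above is the Jacobian-transpose relation between contact forces and joint torques. This is a hedged sketch of one possible preset algorithm, not necessarily the one used in this application; the contact Jacobians encode the skeletal structure and current posture, and the sign convention is an assumption:

```python
import numpy as np

def joint_torques_from_contact(jacobians, contact_forces):
    """Map desired contact forces to joint torques via contact Jacobians:
    tau = -sum_i J_i^T @ f_i, where each J_i (3 x n_joints) depends on the
    robot's skeletal structure and current posture."""
    n_joints = jacobians[0].shape[1]
    tau = np.zeros(n_joints)
    for J, f in zip(jacobians, contact_forces):
        tau -= J.T @ f   # reaction of applying f at the contact point
    return tau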
Based on the above, the application determines the target contact force of the robot based on the target moment, the current motion information of the robot and the target motion reference information of the robot, and then determines the target torque of each joint of the robot based on the target contact force, the skeleton structure of the robot and the posture information of the robot, so that the target torque of each joint of the robot can be simply and efficiently generated based on the calculated target moment, thereby being beneficial to realizing real-time and high-precision motion control of the robot.
In some embodiments, the above-described process S1031 of determining the target contact force of the robot according to the target moment, the current motion information of the robot, and the target motion reference information of the robot may be described in more detail, for example. An exemplary flowchart of a process S1031 of determining a target contact force for the robot according to an embodiment of the disclosure is shown in fig. 3.
Referring to fig. 3, first, in step S1031-1, a target motion estimate of the robot is generated based on the robot centroid motion model according to the current motion information of the robot and the target moment, wherein the target motion estimate is a function of the contact force of the robot.
The target motion estimation includes at least one of a target motion position estimator, a target motion velocity estimator, a target motion acceleration estimator, a target rotation angle estimator, a target rotation angular velocity estimator, and a target rotation angular acceleration estimator.
For example, the centroid dynamics equation set of the robot is shown in formula 1a) and formula 1b); for the target time $t_k$, the centroid motion of the robot should satisfy the following centroid motion equations:

$\sum_{i=1}^{N} f_{i,k} + mg = m\ddot{p}_k$    6a)

$\dot{L}_k = \sum_{i=1}^{N}\left([r_{i,k}]_\times - [p_k]_\times\right) f_{i,k}$    6b)

wherein the subscript $k$ represents the parameter amount at the target time $t_k$, and the meanings of the other parameters are as described above and are not repeated here. Combining the two equations of the equation set yields the following centroid motion model:

$\dot{L}_k = \sum_{i=1}^{N}\left([r_{i,k}]_\times - [p_0]_\times\right) f_{i,k} + m\left([\ddot{p}_k]_\times - [g]_\times\right)\Delta p_k$    6)

wherein $[\ddot{p}_k]_\times$ converts the acceleration of the robot at the target time $t_k$ into a skew-symmetric matrix, $\Delta p_k$ is the position gain amount, $\Delta p_k = p_k - p_0$, wherein $p_k$ is the position of the robot at the target time $t_k$ and $p_0$ is the position of the robot at the current time $t_0$, and $[g]_\times$ converts the gravitational acceleration into a skew-symmetric matrix. The other parameters have the meanings described above.
If the expression of the target torque calculated via the linear function is as shown in formula 5), the expression of the target rotational angular acceleration can be obtained by substituting formula 5) into the motion expression 6):

$\dot{\omega}_k = A_{\dot{\omega},k} f + b_{\dot{\omega},k}$    7)

And wherein $A_{\dot{\omega},k}$ and $b_{\dot{\omega},k}$ are constant terms calculated based on the current motion information of the robot, so the target rotational angular acceleration is known to be a function of the contact force $f$. Accordingly, when the current motion information is substituted into the expression, the estimated target rotational angular acceleration $\hat{\dot{\omega}}_k$ of the robot at the target time $t_k$ can be obtained.
And through the relation between the rotational angular acceleration and the rotational angle and angular velocity, the expressions of the target rotational angular velocity $\omega_k$ and the target rotation angle gain $\Delta\theta_k$ can be further obtained:

$\omega_k = A_{\omega,k} f + b_{\omega,k}$    8)

$\Delta\theta_k = A_{\theta,k} f + b_{\theta,k}$    9)

And wherein $A_{\omega,k}$, $b_{\omega,k}$, $A_{\theta,k}$ and $b_{\theta,k}$ are likewise constant terms calculated based on the current motion information of the robot, and the target rotational angular velocity and the target rotation angle are also functions of the contact force $f$. Accordingly, when the current motion information is substituted into these expressions, the estimated target rotational angular velocity $\hat{\omega}_k$ and the estimated target rotation angle $\hat{\Delta\theta}_k$ of the robot at the target time $t_k$ can be obtained.
In the same manner, the target motion position estimator, the target motion velocity estimator and the target motion acceleration estimator can be obtained by solving, and are not described in detail herein.
Based on the above, a target motion estimation of the robot is generated based on the robot centroid motion model and the target moment; each estimator in the target motion estimation is an expression in the contact force $f$, and all parameter amounts other than the unknown contact force $f$ can be obtained through calculation from the current motion information.
Thereafter, in step S1031-2, an error function of the target motion reference information and the target motion estimate is generated based on the target motion reference information and the target motion estimate.
For example, the error function $e_k$ may have the following form:

$e_k = \begin{bmatrix} p_k^{ref} - \hat{p}_k \\ \dot{p}_k^{ref} - \hat{\dot{p}}_k \\ \theta_k^{ref} - \hat{\theta}_k \\ \omega_k^{ref} - \hat{\omega}_k \end{bmatrix}$    10)

wherein $p_k^{ref} - \hat{p}_k$ characterizes the difference between the target motion reference position $p_k^{ref}$ and the target motion position estimator $\hat{p}_k$, $\dot{p}_k^{ref} - \hat{\dot{p}}_k$ characterizes the difference between the target motion reference velocity $\dot{p}_k^{ref}$ and the target motion velocity estimator $\hat{\dot{p}}_k$, $\theta_k^{ref} - \hat{\theta}_k$ characterizes the difference between the target rotation reference angle $\theta_k^{ref}$ and the target rotation angle estimator $\hat{\theta}_k$, and $\omega_k^{ref} - \hat{\omega}_k$ characterizes the difference between the target rotation reference angular velocity $\omega_k^{ref}$ and the target rotation angular velocity estimator $\hat{\omega}_k$.
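The stacking of the error vector in equation 10) can be sketched as follows; the dictionary keys are hypothetical names for the reference and estimated quantities:

```python
import numpy as np

def error_function(ref, est):
    """Stack reference-vs-estimate differences (equation 10) into one error
    vector e_k; 'ref' and 'est' are dicts of 3-vectors keyed by quantity."""
    keys = ("position", "velocity", "angle", "angular_velocity")
    return np.concatenate([np.asarray(ref[k], dtype=float)
                           - np.asarray(est[k], dtype=float) for k in keys])
```

Because every estimate is affine in the contact force, this stacked error is itself affine in the contact force, which is what makes the subsequent optimization tractable.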
After the error function is obtained, in step S1031-3, the target contact force is determined based on the error function.
For example, the error function may be processed based on an optimization algorithm to determine an optimal contact force value and direction, and the target contact force may be generated based on the optimal contact force value and direction. Alternatively, the target contact force may be generated in other ways. Embodiments of the present disclosure are not limited by the particular manner in which the target contact force is generated.
Based on the above, the application generates the target motion estimation of the robot based on the robot centroid motion model according to the current motion information of the robot and the target moment, and then generates the error function of the target motion reference information and the target motion estimation and determines the target contact force based on the target motion reference information and the target motion estimation, so that the centroid motion model of the robot, the current and the motion state of the target are comprehensively considered in the process of generating the target contact force, thereby being beneficial to generating the high-precision target contact force, enabling the robot to well execute the expected motion, and improving the reliability and precision of the motion control.
For example, in some embodiments, determining the target contact force based on the error function includes optimizing the error function based on a quadratic optimization algorithm and determining the contact force that minimizes the error function as the target contact force. By adjusting the contact force such that the error function takes a minimum value (i.e. such that the target motion reference information has a minimum error value with the target motion estimation), an optimal contact force for achieving motion control of the robot is determined as a target contact force.
It should be appreciated that the quadratic optimization algorithm may be selected based on actual needs, and embodiments of the present disclosure are not limited by the particular algorithm type of the quadratic optimization algorithm used.
Based on the above, in the application, the optimal solution of the error function is solved by applying the quadratic optimization algorithm, and the contact force for enabling the error function to obtain the optimal solution is determined as the target contact force, so that the solution of the target contact force can be conveniently realized through the optimization process, the obtained target contact force can enable the robot to well execute the expected motion process, and the flexibility and the robustness of motion control are improved.
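As an illustrative sketch only (the patent does not fix a particular solver), if the target motion estimate depends linearly on the contact force f, the error function above becomes a quadratic form whose minimizer can be found by least squares. The matrix A and vector b below are hypothetical stand-ins for the linearized centroid dynamics and the stacked target motion reference:

```python
import numpy as np

def solve_target_contact_force(A, b):
    """Minimize the quadratic error e(f) = ||A f - b||^2 over the
    contact force vector f.  A maps the contact force to the stacked
    motion estimate (position, velocity, angle, angular velocity);
    b is the stacked target motion reference.  Both are assumed to
    come from a linearization of the centroid motion model."""
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

# Hypothetical 4-dimensional stacked error, 3-dimensional contact force.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([1.0, 2.0, 3.0, 3.0])
f_opt = solve_target_contact_force(A, b)
```

In practice a constrained quadratic programming solver would typically replace the unconstrained least-squares call, e.g. to enforce friction-cone limits on the contact force.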
In some embodiments, when there are multiple contact points between the robot and the surrounding environment, i.e., multiple contact forces act on the robot, and when multiple groups of contact force combinations make the error function take the minimum value, each group of contact force combinations is further weighted and summed to calculate the total contact force corresponding to that combination, and the contact force combination with the minimum calculated total contact force is determined as the target contact force combination.
In the case that a plurality of contact points exist, and therefore a plurality of contact forces exist, if the plurality of groups of contact force combinations can realize the optimal solution of the error function, the execution of the expected motion process and the minimization of the total contact force can be considered by further comparing the total contact force of the plurality of groups of contact force combinations, so that the motion control process of the robot is further optimized.
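The tie-breaking rule above can be sketched as follows; the candidate combinations and the uniform weights are hypothetical examples:

```python
import numpy as np

def pick_min_total_force(candidates, weights=None):
    """Among contact-force combinations that all minimize the error
    function, pick the one whose weighted sum of force magnitudes
    (the 'total contact force') is smallest."""
    totals = []
    for combo in candidates:
        w = weights if weights is not None else [1.0] * len(combo)
        total = sum(wi * np.linalg.norm(f) for wi, f in zip(w, combo))
        totals.append(total)
    return candidates[int(np.argmin(totals))]

# Two hypothetical optimal combinations for a four-contact robot.
combo_a = [np.array([0.0, 0.0, 10.0])] * 4   # total contact force 40
combo_b = [np.array([0.0, 0.0, 8.0])] * 4    # total contact force 32
best = pick_min_total_force([combo_a, combo_b])
```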
In some embodiments, the process S1032 of determining the target torque of each joint of the robot based on the target contact force, the skeletal structure of the robot, and the pose information of the robot can be more specifically described, for example. Fig. 4 shows an exemplary flowchart of a process S1032 of determining target torques of respective joints of the robot according to an embodiment of the present disclosure.
Referring to fig. 4, first, in step S1032-1, the main torque amounts of the joints of the robot are determined based on the target contact force and the skeletal structure of the robot.
The main torque amount is the torque of each joint of the robot that corresponds to the target contact force, determined based on the target contact force and the skeletal structure of the robot. This torque amount can be used to realize the target contact force of the robot, i.e., to achieve the desired course of motion.
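One common way to map a desired contact force to joint torques, offered here as an assumption since the patent does not name the mapping, is the Jacobian-transpose relation τ = Jᵀf, where J is the contact-point Jacobian implied by the skeletal structure:

```python
import numpy as np

def main_torque(J, f):
    """Main torque amount tau_ff = J^T f for one contact point.
    J (3 x n) is the contact-point Jacobian determined by the robot's
    skeletal structure and current configuration; f (3,) is the
    target contact force acting at that point."""
    return J.T @ f

# Hypothetical 3-joint leg: a Jacobian and a vertical target force.
J = np.array([[0.3, 0.2, 0.1],
              [0.0, 0.0, 0.0],
              [0.1, 0.2, 0.3]])
f = np.array([0.0, 0.0, 30.0])
tau_ff = main_torque(J, f)  # one torque per joint of this leg
```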
Thereafter, in step S1032-2, the target posture reference information and the current posture information of each joint of the robot are acquired.
The target posture reference information refers to data information for representing a desired posture of each joint of the robot at a target time, and is, for example, a joint reference angle of each joint of the robot at the target time.
The current pose information refers to data information for representing the pose of each joint of the robot at the current moment, and is, for example, the current angle of each joint of the robot at the current moment.
It will be appreciated that, for example, motion planning information of the robot may be obtained and the target pose reference information of the robot generated based on the motion planning information, and the current pose information of the robot may be obtained via torque sensors provided at the respective joints of the robot.
However, the above only gives an exemplary manner of obtaining the target pose reference information and the current pose information of each joint of the robot, and other manners may be selected to obtain the information according to actual needs, and the embodiments of the present disclosure are not limited by the specific manner of obtaining the target pose reference information and the current pose information of each joint of the robot.
Thereafter, in step S1032-3, an additional torque amount of each joint of the robot is determined based on the target pose reference information of the robot and the current pose information of the robot.
The additional torque amount is used to adjust the main torque amount such that the joints of the robot can also have a desired joint pose while achieving a desired course of motion.
For example, the additional torque amount may be generated from the target pose reference information and the current pose information via a preset algorithm. Alternatively, the target pose reference information, the current pose information of the robot, the skeletal structure information of the robot, and the main torque amount of each joint of the robot may all be input into a preset algorithm, with each item of information comprehensively considered to generate the additional torque amount.
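A simple candidate for such a preset algorithm, sketched here under the assumption of joint-space PD feedback (which the patent does not mandate), computes the additional torque from the pose errors:

```python
import numpy as np

def additional_torque(q_ref, dq_ref, q, dq, kp=50.0, kd=2.0):
    """Additional torque amount tau_fb from a PD law on joint angles:
    tau_fb = kp * (q_ref - q) + kd * (dq_ref - dq).
    q_ref / dq_ref: joint target reference angle / angular velocity;
    q / dq: joint current angle / angular velocity.
    The gains kp and kd here are placeholder values."""
    q_ref, dq_ref, q, dq = map(np.asarray, (q_ref, dq_ref, q, dq))
    return kp * (q_ref - q) + kd * (dq_ref - dq)

tau_fb = additional_torque(q_ref=[0.5], dq_ref=[0.0], q=[0.4], dq=[0.1])
# The target torque of the joint then superimposes the main and
# additional amounts, as in formula (10): tau = tau_ff + tau_fb.
```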
After the main torque amount and the additional torque amount are obtained, in step S1032-4, the target torque of each joint of the robot is determined based on the main torque amount and the additional torque amount.
For example, for each joint, the amount of main torque it has may be superimposed with the amount of additional torque to generate the target torque for that joint. This process is illustrated, for example, by the following formula:
τ_m = τ_m^ff + τ_m^fb (10)

wherein τ_m is the target torque of the m-th joint of the robot, τ_m^ff is the main torque amount of the m-th joint, and τ_m^fb is the additional torque amount of the m-th joint. m is a positive integer greater than 0 and less than or equal to M, where M is the total number of joints of the robot.
Based on the above, by comprehensively considering the overall expected motion of the robot and the expected joint posture of each joint of the robot in the process of generating the target torque of each joint, the generated target torque of each joint can be further used for enabling each joint of the robot to be at an expected joint posture angle in the overall motion process on the basis of enabling the robot to well execute the expected motion, so that more comprehensive and accurate motion control is realized.
In some embodiments, acquiring the target pose reference information and the current pose information of each joint of the robot comprises acquiring motion planning information of the robot, generating the target pose reference information of the robot based on the motion planning information, wherein the target pose reference information comprises a joint target reference angle and a joint target reference angular velocity of each joint of the robot at a target moment, and acquiring the current pose information of the robot, wherein the current pose information comprises a joint current angle and a joint current angular velocity of each joint of the robot at the current moment.
The motion planning information is global planning information of the overall motion of the robot, and may include, for example, the starting and stopping positions of the motion of the robot, the starting and stopping angles, the total time of the motion process, the average speed of the motion process, and the like. By processing the motion planning information, expected motion information of the robot at each motion time can be obtained, for example, the expected joint angle and expected joint angular velocity of each joint of the robot at each motion time, i.e., the target pose reference information.
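For instance, and only as a sketch, since the patent leaves the planning-to-reference computation open, linear interpolation between the starting and stopping angles over the total motion time yields a joint reference angle and reference angular velocity at any target time:

```python
def reference_at(t, q_start, q_stop, t_total):
    """Derive a joint target reference angle and reference angular
    velocity at time t from global motion planning information
    (start angle, stop angle, total motion time), assuming a
    constant-velocity (linear) profile between the endpoints."""
    t = min(max(t, 0.0), t_total)          # clamp to the motion window
    dq_ref = (q_stop - q_start) / t_total  # constant reference velocity
    q_ref = q_start + dq_ref * t
    return q_ref, dq_ref

q_ref, dq_ref = reference_at(t=1.0, q_start=0.0, q_stop=2.0, t_total=4.0)
```

A real planner would more likely use a smooth (e.g. cubic or spline) profile so that the reference angular velocity is continuous at the endpoints.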
Based on the above, the motion planning information of the robot is utilized to generate the target pose reference information, which is further set to include the joint reference angle and the joint target reference angular velocity of each joint of the robot at the target time. These references can therefore be determined with global motion planning comprehensively considered, which is conducive to accurate and real-time motion control at multiple levels such as global motion and local motion.
The foregoing robot control method will be described in more detail with reference to specific application scenarios. Fig. 5 shows a schematic diagram of a robot control process according to an embodiment of the present disclosure.
For example, when a four-legged robot is used to achieve linear acceleration movements in a plane, the robot has, for example, four feet for contact with the ground, and accordingly has four corresponding contact forces acting on the four feet, respectively.
And the robot is provided with, for example, a target planner, a motion controller, and a detector. The target planner is used for generating motion planning information of the robot based on user input and preset information of the system. As previously mentioned, this is, for example, a global motion plan of the robot, comprising, e.g., global position planning information, global velocity planning information, and global acceleration planning information.
The detector comprises, for example, torque sensors arranged at each joint of the robot, a speed sensor and a displacement sensor arranged on the robot, a vision sensor arranged in the surrounding environment of the robot, and the like, and is used for detecting the current motion information (current rotational motion information and current translational motion information) of the robot, for example, the current position p_0, the current velocity, the current rotation Euler angle data R_0 (the angle data θ_0 can be calculated based on the Euler angle data), the current rotational angular velocity ω_0, and the current robot-to-environment contact point data R_I, where I is, for example, the total number of contact points of the robot; for the four-legged robot, I has a value of 4.
The motion controller, for example, acquires the motion planning information of the robot and generates target motion reference information (target rotational motion reference information and target translational motion reference information) of the robot based on the motion planning information via a target motion reference information generation module, including, for example, the target motion reference position, the target motion reference velocity, the target rotation reference angle, the target rotation reference angular velocity, etc.
And wherein the motion controller further obtains the aforementioned current motion information from the detector, for example via a current motion information obtaining module, generates the target moment based on the current motion information and a linear function, and generates the target motion estimate of the robot based on the target moment and the centroid motion model of the robot, wherein the estimate of each item of target motion information is a function of the contact force. Thereafter, an error function is generated based on the target motion estimate and the aforementioned target motion reference information, and the target contact force is determined by solving for the optimal solution of the error function using a quadratic optimization algorithm.
The motion controller also receives target pose reference information from the target planner, including, for example, the joint target reference angle and the joint target reference angular velocity of each joint of the robot at the target time, where M is the total number of joints of the robot. And the motion controller also receives current pose information from the detector, including the current joint angle q_0,M and the current joint angular velocity of each joint of the robot at the current time.
Finally, based on the target contact force, the target pose reference information, the current pose information, and comprehensively considering the skeletal architecture of the robot in a torque generation module, a motion controller will generate a target torque for controlling each joint of the robot to perform a desired motion.
Based on the above, in the application, on the basis of obtaining the current rotational movement information of the robot and the target rotational movement reference information of the robot, the target moment is obtained from the current rotational movement information by applying the optimized linear function, and the target torque of each joint of the robot is determined according to the obtained target moment. The robot can thus be accurately controlled to execute the target movement process while good real-time control of the robot is realized, and the control accuracy and robustness of the robot are significantly improved while real-time performance is maintained.
According to another aspect of the present disclosure, a robot control system is presented. Fig. 6 shows an exemplary block diagram of a robot control system 600 according to an embodiment of the invention.
The robot control system 600 shown in fig. 6 includes a rotational motion information acquisition module 610, a target torque generation module 620, and a joint torque generation module 630.
The rotational motion information obtaining module 610 is configured to perform the process of step S101 in fig. 1, and obtain current rotational motion information of the robot and target rotational motion reference information of the robot.
The current rotational movement information refers to data information for representing a rotational movement state of the robot at a current time. In the application, the current rotation movement information comprises at least one of a current rotation angle of the robot and a current rotation angular velocity of the robot. However, it should be appreciated that the current rotational motion information may also include, for example, the current rotational angular acceleration of the robot, etc., as desired.
The target rotational movement reference information refers to data information for characterizing a desired state of rotational movement of the robot at a target time (which may be, for example, the time next to the current time, according to actual needs). In the application, the target rotational movement reference information comprises at least one of a robot target rotation reference angle, a robot target rotation reference angular velocity, and a robot target rotation reference angular acceleration. However, it should be appreciated that the target rotational movement reference information may also include other parameters as desired. Embodiments of the present disclosure are not limited by the specific composition of the target rotational movement reference information.
For example, current rotational motion information of the robot may be acquired via torque sensors provided at joints of the robot and vision sensors provided in the surrounding environment of the robot, and target rotational motion reference information of the robot may be obtained based on processing motion planning information generated by a motion planner of the robot. Embodiments of the present disclosure are not limited by the particular manner in which the target rotational motion reference information and the current rotational motion information are obtained.
The target torque generation module 620 is configured to perform the process of step S102 in fig. 1, and determine, based on a linear function and according to the current rotational movement information of the robot, a target moment for controlling the target rotational movement of the robot.
The linear function is, for example, a linear expression of the target moment. Specifically, for example, the rotational motion process of the robot may be approximated via exponential coordinates, so that the nonlinear term in the existing nonlinear expression of the torque parameter is further expanded, and the linear components therein are extracted to obtain the optimized linear expression.
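One illustrative way to read this (a reconstruction under assumptions, not the patent's exact derivation): the rotation is parameterized in exponential coordinates, the dynamics are expanded about that parameterization, and only the linear components are kept, yielding a linear combination of the reference and current rotational quantities:

```latex
% Rotation expressed in exponential coordinates (assumed notation):
%   R = \exp([\theta]_\times), \qquad \theta = \log(R)^\vee
% Expanding the rotational dynamics about this parameterization and
% retaining only the linear components gives a linear expression such as
%   (A, B, C, D are coefficient matrices produced by the linearization;
%    all symbols in this block are assumptions):
\tau \;=\; A\,\dot{\omega}^{\mathrm{ref}}
      \;+\; B\,\bigl(\omega^{\mathrm{ref}} - \omega_0\bigr)
      \;+\; C\,\bigl(\theta^{\mathrm{ref}} - \theta_0\bigr)
      \;+\; D\,\omega_0
```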
The joint torque generation module 630 is configured to perform the process of step S103 in fig. 1, and determine the target torque of each joint of the robot based on the target moment, the current rotational movement information of the robot, and the target rotational movement reference information of the robot.
Based on the above, in the application, on the basis of obtaining the current rotational movement information of the robot and the target rotational movement reference information of the robot, the target moment is obtained from the current rotational movement information by applying the optimized linear function, and the target torque of each joint of the robot is determined according to the obtained target moment, so that the robot can be accurately controlled to execute the target movement process while good real-time control is realized, and the control accuracy and robustness are significantly improved while real-time performance is maintained. Specifically, the current rotational movement information includes at least one of the current rotation angle and the current rotational angular velocity of the robot, so that when the target moment is calculated by applying the linear function, the target moment can be well characterized based on current multi-dimensional, multi-level motion information, improving the accuracy of the generated target moment. In addition, the target rotational movement reference information includes at least one of the robot target rotation reference angle, the robot target rotation reference angular velocity, and the robot target rotation reference angular acceleration, so that the target rotational movement process can be described more comprehensively, which facilitates generating the target torque of each joint of the robot based on the target rotational movement reference information and improves the accuracy of the generated target torque.
In some embodiments, the linear function includes at least two of a robot target rotational angular acceleration parameter, a robot angular velocity gain parameter, a robot angular gain parameter, a current rotational angular velocity parameter of the robot.
The angular gain parameter is a difference between a target rotation angle of the robot and a current rotation angle of the robot, and is described in detail below with reference to specific embodiments.
By including at least two of the robot target rotational angular acceleration parameter, the robot angular velocity gain parameter, the robot angular gain parameter, and the current rotational angular velocity parameter of the robot in the linear function, compared with current methods that use only the robot target rotational angular acceleration parameter when calculating the target moment, more linear components capable of reflecting the target moment are retained by further expanding the linear expression of the target moment. Linearized calculation thus improves the calculation speed and reduces the calculation amount, while the accuracy of the calculated target moment is significantly improved, which is beneficial to realizing subsequent good motion control of the robot based on the target moment.
In some embodiments, the linear function includes a robot target rotational angular acceleration parameter, a robot angular velocity gain parameter, a robot angular gain parameter, a robot current rotational angular velocity parameter.
The robot angular velocity gain parameter is a difference value between a robot target rotation angular velocity and a current rotation angular velocity of the robot, and the angle gain parameter is a difference value between a robot target rotation angle and the current rotation angle of the robot.
Based on the above, in the application, the linear function is set to include the robot target rotational angular acceleration parameter, the robot angular velocity gain parameter, the robot angular gain parameter, and the current rotational angular velocity parameter of the robot. Compared with the prior art, which includes only the robot target rotational angular acceleration parameter and directly omits the nonlinear term (as described above), the linear function is optimized (via an exponential coordinate approximation) so that more linear components in the nonlinear term are further extracted and retained. This is beneficial to more accurately generating the target moment, can more comprehensively and accurately reflect the target motion state of the robot, and improves the accuracy of motion control while taking into account good real-time performance of the motion control.
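A numerical sketch of such a linear law follows; the single-axis simplification, the gain values, and the final term's coefficient are assumptions for illustration only:

```python
def target_moment(ddtheta_ref, omega_ref, omega_0, theta_ref, theta_0,
                  inertia, kd=4.0, kp=16.0, c=0.5):
    """Linear expression for the target moment combining the four
    parameters named in the text: the target rotational angular
    acceleration, the angular velocity gain (omega_ref - omega_0),
    the angular gain (theta_ref - theta_0), and the current
    rotational angular velocity.  Gains kd, kp, c and the scalar
    inertia are placeholder assumptions."""
    return inertia * (ddtheta_ref
                      + kd * (omega_ref - omega_0)
                      + kp * (theta_ref - theta_0)) + c * omega_0

tau = target_moment(ddtheta_ref=0.0, omega_ref=1.0, omega_0=0.8,
                    theta_ref=0.2, theta_0=0.1, inertia=2.0)
```

Note that every term is linear in the reference and current motion quantities, so the expression can be evaluated (and embedded in a quadratic optimization) very cheaply at control rates.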
In some embodiments, the robotic control system is capable of performing the methods as described above, with the functions as described above.
According to another aspect of the present disclosure, a robot is presented, wherein the robot has the robot control system as described above, is capable of executing the robot control method as described above, and realizes the robot control functions as described above.
In addition, the robot may further include a bus, a memory, a sensor assembly, a controller, a communication module, an input-output device, and the like.
A bus may be a circuit that interconnects the components of the robot and communicates communication information (e.g., control messages or data) among the components.
The sensor assembly may be used to sense the physical world, including for example cameras, infrared sensors, ultrasonic sensors, and the like. The sensor assembly may further comprise means for measuring the current operation and movement state of the robot, such as hall sensors, laser position sensors, or strain sensors.
The controller is used to control the operation of the robot, for example in an artificial intelligence control manner.
The controller comprises, for example, a processing means. The processing means may include a microprocessor, digital signal processor ("DSP"), application specific integrated circuit ("ASIC"), field programmable gate array, state machine, or other processing device for processing electrical signals received from the sensor lines. Such processing devices may include programmable electronics, such as PLCs, programmable interrupt controllers ("PICs"), programmable logic devices ("PLDs"), programmable read-only memories ("PROMs"), electronically programmable read-only memories, and the like.
The communication module may be connected to a network, for example, by wire or wirelessly, to facilitate communication with the physical world (e.g., a server). The communication module may be wireless and may include a wireless interface, such as an IEEE 802.11, bluetooth, wireless local area network ("WLAN") transceiver, or a radio interface for accessing a cellular telephone network (e.g., a transceiver/antenna for accessing CDMA, GSM, UMTS or other mobile communication networks). In another example, the communication module may be wired and may include an interface such as ethernet, USB, or IEEE 1394.
The input-output means may transfer, for example, commands or data input from a user or any other external device to one or more other components of the robot, or may output commands or data received from one or more other components of the robot to the user or other external device.
Multiple robots may be grouped into a robotic system to cooperatively accomplish a task, wherein the multiple robots are communicatively connected to a server and receive cooperative robot instructions from the server.
According to another aspect of the present invention there is also provided a non-volatile computer readable storage medium having stored thereon computer readable instructions which when executed by a computer can perform a method as described above.
Program portions of the technology may be considered to be "products" or "articles of manufacture" in the form of executable code and/or associated data, embodied or carried out by a computer readable medium. A tangible, persistent storage medium may include any memory or storage used by a computer, processor, or similar device or related module. Such as various semiconductor memories, tape drives, disk drives, or the like, capable of providing storage functionality for software.
All or a portion of the software may sometimes communicate over a network, such as the internet or other communication network. Such communication may load software from one computer device or processor to another. Thus, another medium capable of carrying software elements may also be used as a physical connection between local devices, such as optical, electrical, electromagnetic, etc., propagating through cable, optical cable, air, etc. Physical media used for carrier waves, such as electrical, wireless, or optical, may also be considered to be software-bearing media. Unless limited to a tangible "storage" medium, other terms used herein to refer to a computer or machine "readable medium" mean any medium that participates in the execution of any instructions by a processor.
The application uses specific words to describe embodiments of the application. Reference to "a first/second embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is associated with at least one embodiment of the application. Thus, it should be emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various positions in this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the application may be combined as suitable.
Furthermore, those skilled in the art will appreciate that the various aspects of the application are illustrated and described in the context of a number of patentable categories or circumstances, including any novel and useful procedures, machines, products, or materials, or any novel and useful modifications thereof. Accordingly, aspects of the application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.) or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," module, "" engine, "" unit, "" component, "or" system. Furthermore, aspects of the application may take the form of a computer product, comprising computer-readable program code, embodied in one or more computer-readable media.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing is illustrative of the present invention and is not to be construed as limiting thereof. Although a few exemplary embodiments of this invention have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this invention. Accordingly, all such modifications are intended to be included within the scope of this invention as defined in the following claims. It is to be understood that the foregoing is illustrative of the present invention and is not to be construed as limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The invention is defined by the claims and their equivalents.