CN117901090B - Visual servo method and system for lens holding robot - Google Patents
Visual servo method and system for lens holding robot Download PDFInfo
- Publication number
- CN117901090B CN117901090B CN202311805261.6A CN202311805261A CN117901090B CN 117901090 B CN117901090 B CN 117901090B CN 202311805261 A CN202311805261 A CN 202311805261A CN 117901090 B CN117901090 B CN 117901090B
- Authority
- CN
- China
- Prior art keywords
- instrument
- coordinates
- tip
- center
- representing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1628—Programme controls characterised by the control loop
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
Abstract
The application discloses a visual servo method and a visual servo system of a lens holding robot, wherein the method comprises the steps of obtaining an image obtained by shooting an arthroscope of the lens holding robot; the method comprises the steps of determining the coordinates of the tip center of an instrument in an image, solving each joint angular velocity corresponding to a mechanical arm of a mirror-holding robot according to the coordinates of the tip center and the coordinates of a target position, and controlling the mechanical arm to drive the arthroscope to move according to each joint angular velocity. The application can determine the coordinate of the center of the tip of the instrument based on the image shot by the arthroscope, and further calculate the joint angular velocity corresponding to the coordinate, so as to control the mechanical arm to drive the arthroscope to track the coordinate of the target position in real time, realize fine control, further meet the control requirement of a complex working environment, and be widely applied to the technical field of robot control.
Description
Technical Field
The application relates to the technical field of robot control, in particular to a vision servo method and a vision servo system for a lens holding robot.
Background
The mirror holding robot is widely applied in the medical field, but the existing mirror holding robot still cannot meet the control requirement under the complex working environment.
Disclosure of Invention
The embodiment of the application mainly aims to provide a visual servo method and a visual servo system for a lens holding robot, so as to improve the visual servo performance of the lens holding robot and further realize fine control under a complex working environment.
To achieve the above object, an aspect of an embodiment of the present application provides a vision servo method for a lens holding robot, including:
acquiring an image shot by an arthroscope of a lens holding robot;
determining coordinates of a tip center of an instrument in the image;
Solving the angular velocities of all joints corresponding to the mechanical arm of the mirror holding robot according to the coordinates of the center of the tip and the coordinates of the target position;
and controlling the mechanical arm to drive the arthroscope to move according to the angular velocity of each joint.
In some embodiments, the determining coordinates of a tip center of an instrument in the image includes:
detecting one or more instrument prediction frames in the image by using a pre-trained target detection model, wherein the geometric center of each instrument prediction frame corresponds to the tip center of one instrument;
determining the coordinates of the geometric center of each instrument prediction frame as coordinates corresponding to the tip center of the instrument;
the coordinates of the tip center of the instrument are:
wherein c u,cv is the horizontal and vertical coordinates of the instrument after the tip center is adjusted, The horizontal and vertical coordinates before the tip center of the instrument is adjusted,The method comprises the following steps:
Wherein, Representing the abscissa of the ith instrument prediction box,Representing the ordinate of the ith instrument prediction box,Representing the width of the ith instrument prediction box,Representing the height of the i-th instrument prediction box,And (3) representing the type prediction probability of the instrument in the ith instrument prediction frame, wherein K is the total number of the instrument prediction frames, and lambda is a constant of a positive value.
In some embodiments, the calculating the angular velocity of each joint corresponding to the mechanical arm of the lens-holding robot according to the coordinates of the tip center and the coordinates of the target position includes:
determining a desired movement distance according to the coordinates of the tip center of the instrument and the coordinates of the target position, determining a desired speed of the end of the arthroscope according to the desired movement distance, and determining the angular velocities of the joints corresponding to the mechanical arm according to the desired speed;
Or solving the angular velocities of the joints corresponding to the mechanical arm according to the coordinates of the tip center of the instrument by using a first cyclic neural network.
In some embodiments, the determining the angular velocity of each joint corresponding to the mechanical arm according to the desired velocity includes:
determining the tail end speed of the mechanical arm according to the expected speed under the constraint of a remote movement center;
or solving the joint angular velocity corresponding to the expected velocity based on a second cyclic neural network.
In some embodiments, the solving the joint angular velocity corresponding to the tip velocity based on the jacobian matrix includes:
solving the joint angular velocity corresponding to the tail end velocity according to a first calculation formula;
the first calculation formula is:
Wherein, Representing the angular velocity of the joint, J b representing the jacobian matrix,Representing a rotation matrix of the tip of the robotic arm, S con representing the tip speed.
In some embodiments, the solving the joint angular velocity corresponding to the desired velocity based on a second recurrent neural network includes:
Inputting the expected speed into the second cyclic neural network to obtain the joint angular speed output by the second cyclic neural network;
The second recurrent neural network is:
Wherein, mu is Lagrange's operator, For the derivative of the Lagrangian operator, γ is a preset constant scalar and γ >0, t is a solution time, Γ is an activation function of the second recurrent neural network, ψ is a projection function, S is the desired speed ,U=W-1JTA-1,V=W-1(I-JTA-1JW-1),A=JW-1JT,W and J matrix are both full rank and positive definite matrix, I is a unity matrix, x=US-Vμ,X is the output of the second recurrent neural network,Representing the angular velocity of the joint.
In some embodiments, the calculating, by using a first recurrent neural network, each angular velocity of the joint corresponding to the mechanical arm according to the coordinates of the tip center of the instrument includes:
Iteratively solving a second calculation formula by using the first cyclic neural network, and solving to obtain the angular velocity of each joint;
The second calculation formula is:
Wherein J b is the jacobian matrix of the lens holding robot, R is the rotation matrix, In the form of an image jacobian matrix,For the desired speed of the arthroscope,The expected speed is determined according to the coordinates of the center of the tip.
To achieve the above object, another aspect of the embodiments of the present application provides a vision servo system of a lens holding robot, the system including:
the image acquisition module is used for acquiring an image shot by an arthroscope of the lens holding robot;
A coordinate determination module for determining coordinates of a tip center of an instrument in the image;
the speed solving module is used for solving the angular speeds of all joints corresponding to the mechanical arm of the mirror holding robot according to the coordinates of the center of the tip and the coordinates of the target position;
and the motion control module is used for controlling the mechanical arm to drive the arthroscope to move according to the angular velocity of each joint.
To achieve the above object, another aspect of the embodiments of the present application provides an electronic device, which includes a memory storing a computer program and a processor implementing the above method when executing the computer program.
To achieve the above object, another aspect of the embodiments of the present application proposes a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-mentioned method.
The embodiment of the application at least comprises the following beneficial effects:
The method comprises the steps of obtaining an image obtained by shooting an arthroscope of the endoscope-holding robot, determining the coordinate of the tip center of an instrument in the image, solving each joint angular velocity corresponding to a mechanical arm of the endoscope-holding robot according to the coordinate of the tip center and the coordinate of a target position, and controlling the mechanical arm to drive the arthroscope to move according to each joint angular velocity. The application can determine the coordinate of the center of the tip of the instrument based on the image shot by the arthroscope, and further calculate the joint angular velocity corresponding to the coordinate, so as to control the mechanical arm to drive the arthroscope to track the coordinate of the target position in real time, realize fine control, and further meet the control requirement of a complex working environment.
Drawings
Fig. 1 is a schematic flow chart of a visual servo method of a lens holding robot according to an embodiment of the present application;
FIG. 2 is a diagram illustrating a scene of detecting a prediction frame in an image according to an embodiment of the present application;
FIG. 3 is an exemplary diagram of a system and RCM constraints for a lens-holding robot according to an embodiment of the present application;
Fig. 4 is a schematic flow chart of solving a desired joint angle based on a speed-stage inverse kinematics solving module of a lens-holding robot according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a recurrent neural network according to an embodiment of the present application;
fig. 6 is a schematic flow chart of a speed-stage inverse kinematics solution module of a lens-holding robot for solving a desired joint angle based on a cyclic neural network according to an embodiment of the present application;
FIG. 7 is a schematic flow chart of a solution module of a multi-constraint mirror-holding robot based on a cyclic neural network for solving a desired joint angle according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a vision servo system of a lens-holding robot according to an embodiment of the present application;
FIG. 9 is a block diagram of a vision servo system of a lens holding robot according to an embodiment of the present application;
Fig. 10 is an application scenario diagram of a vision servo system of a lens-holding robot according to an embodiment of the present application;
fig. 11 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with embodiments of the application, but are merely examples of apparatuses and methods consistent with aspects of embodiments of the application as detailed in the accompanying claims.
It is to be understood that the terms "first," "second," and the like, as used herein, may be used to describe various concepts, but are not limited by these terms unless otherwise specified. These terms are only used to distinguish one concept from another. For example, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information, without departing from the scope of embodiments of the present application. The words "if", as used herein, may be interpreted as "when" or "in response to a determination", depending on the context.
The terms "at least one", "a plurality", "each", "any" and the like as used herein, at least one includes one, two or more, a plurality includes two or more, each means each of the corresponding plurality, and any one means any of the plurality.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
The embodiment of the application provides a visual servo method of a lens holding robot, and relates to the technical field of robot control. The visual servo method provided by the embodiment of the application can be applied to the terminal, the server and software running in the terminal or the server. In some embodiments, the terminal may be, but not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, a vehicle terminal, etc., the server may be configured as an independent physical server, may be configured as a server cluster or a distributed system formed by a plurality of physical servers, may be configured as a cloud server for providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and artificial intelligence platforms, and the server may be a node server in a blockchain network, and the software may be an application for implementing a visual servo method, etc., but is not limited to the above form.
The application is operational with numerous general purpose or special purpose computer system environments or configurations. Such as a personal computer, a server computer, a hand-held or portable device, a tablet device, a multiprocessor system, a microprocessor-based system, a set top box, a programmable consumer electronics, a network PC, a minicomputer, a mainframe computer, a distributed computing environment that includes any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Referring to fig. 1, an embodiment of the present application provides a lens holding robot vision servo method, which may include, but is not limited to, steps S100 to S130, specifically as follows:
S100, acquiring an image shot by an arthroscope of the lens holding robot.
Specifically, the present embodiment can acquire an image captured in real time by an arthroscope, or an image captured newly.
And S110, determining coordinates of the tip center of the instrument in the image.
Specifically, the instrument in the present embodiment may be a surgical instrument or a tool such as a small screwdriver.
Further, S110 may include:
detecting one or more instrument prediction frames in the image by using a pre-trained target detection model, wherein the geometric center of each instrument prediction frame corresponds to the tip center of one instrument;
determining the coordinates of the geometric center of each instrument prediction frame as coordinates corresponding to the tip center of the instrument;
the coordinates of the tip center of the instrument are:
wherein c u,cv is the horizontal and vertical coordinates of the instrument after the tip center is adjusted, The horizontal and vertical coordinates before the tip center of the instrument is adjusted,The method comprises the following steps:
Wherein, Representing the abscissa of the ith instrument prediction box,Representing the ordinate of the ith instrument prediction box,Representing the width of the ith instrument prediction box,Representing the height of the i-th instrument prediction box,And (3) representing the type prediction probability of the instrument in the ith instrument prediction frame, wherein K is the total number of the instrument prediction frames, and lambda is a constant of a positive value.
Specifically, the above-mentioned instruments are taken as surgical instruments for illustration, and the implementation steps can be applied to an autonomous multi-joint medical instrument recognition and positioning module in a lens holding robot, and the module can realize real-time acquisition of the positions of a plurality of surgical instruments based on an One-stage target detection model and obtain the tip centers of the surgical instruments. Firstly, acquiring training images for a target detection model through an arthroscope, and performing data enhancement operations such as labeling, image cutting, splicing, overturning and the like to obtain an enhanced data set. And then, based on the data set, dividing the training set and the verification set, inputting the training set and the verification set into a target prediction network for training, and obtaining a corresponding pre-training model, namely the optimal network weight. Further, the optimal network weight is used for detecting and classifying the instrument tip. And acquiring the central pixel coordinates of the tips of a plurality of surgical instruments through the basic information of the prediction frame.
Alternatively, assuming that the image of each frame of the arthroscope is I r, D represents the One-stage object detection model. The center (x p,yp) of the prediction frame, the width and height (w p,hp) and the prediction probability p= (p 1,p2,...,pclass) of each category can be obtained through the target detection model, so that the following steps are obtained:
(xp,yp,wp,hp,p)=D(Ir)
the type of prediction and the probability thereof are as follows:
bc=argmax(p)
The approximate position of the target is predicted by the candidate frame generated a priori, and the candidate frame can be better predicted by predicting the offset of the candidate frame. The initialized offset is random, so it is converted into a predictive weight here, which varies over [0,1], making it easier to train steadily. Accordingly, assuming the upper left corner coordinates of the cell (c x,cy), the width and height of the preset box are (p w,ph), respectively.
The offset is processed with the Sigmoid function at the same time, namely:
Accordingly, the specific updated formulas of the upper left corner coordinates (b x,by) and the width and height (b w,bh) of the actual prediction frame are as follows:
the specific prediction block updating method and parameters are shown in fig. 2, wherein the dashed boxes in fig. 2 represent prior boxes, and the solid boxes represent prediction blocks.
For a joint operation scene, K prediction frames can be generated, and the output of the K prediction frames is the center of a plurality of prediction frames for single operation instrument and multi operation instrument scenes, so that the whole instrument is ensured to be positioned at the center of the arthroscope visual field, and the specific formula is as follows:
Thereby obtaining the center coordinate of the instrument prediction frame under the arthroscope of the current frame in the image coordinate system
Because the clear visual field range of the arthroscope is smaller, a part of fuzzy visual field exists, the expected visual field center is more deviated to the prediction frame center with larger prediction probability, the influence of false recognition on a subsequent module under the fuzzy visual field can be avoided to a certain extent, the safety of the operation is improved, and the operation can be correspondingly adjusted according to the specific type of the operation and the operation manual habit of a doctor. The above formula can thus be improved as:
Wherein lambda is a positive constant of 0, 1.
And S120, solving the angular velocities of all joints corresponding to the mechanical arm of the mirror holding robot according to the coordinates of the center of the tip and the coordinates of the target position.
Specifically, the target location is the desired location of the instrument tip center.
Further, in this embodiment, S120 may include two embodiments, where a first embodiment may include S121 to S123, and a second embodiment may include S124, which is specifically as follows:
and S121, determining a desired moving distance according to the coordinates of the tip center of the instrument and the coordinates of the target position.
S122, determining the expected speed of the tail end of the arthroscope according to the expected moving distance.
Specifically, S121 and S122 in the present embodiment may be applied to a surgical instrument tip intelligent tracking module in a lens holding robot, by which S121 and S122 are performed, coordinate system conversion of an image coordinate system to an arthroscopic tip may be achieved. It is first known that the desired prediction frame center coordinates s des=[wu/2 hv/2]T, i.e., the arthroscopic field center, w u、hv are the width and height, respectively, of the arthroscopic image. Thus, the expected moving distance d exp=||sdes-stip |in the image coordinate system is obtained for the frame.
The desired speed corresponding to the arthroscopic camera coordinate system can be obtained by using proportional-differential control:
image jacobian:
Wherein f u、fv、u0、v0 is an arthroscope internal reference obtained through calibration, Is constant.
The desired velocity corresponding to the arthroscopic { ka } coordinate system (with its axis as the z-axis) is thus obtained, namely:
Due to the high precision of the joint surgical environment, the desired speed of the arthroscope is constrained, namely:
Considering that the center point of the tip of the instrument is already at the center or near the center of the arthroscopic visual field, the mechanical arm in the embodiment can be fixed, and the refining operation is convenient. According to which:
where d res is the minimum desired distance for the robotic arm to begin moving.
S123, determining the angular velocities of the joints corresponding to the mechanical arm according to the expected velocity.
Optionally, in this embodiment, S123 may include two embodiments, where the first embodiment may include S1231 to S1232, and the second embodiment may include S1233, specifically as follows:
and S1231, determining the tail end speed of the mechanical arm according to the expected speed under the constraint of the remote movement center.
Specifically, S1231 in the present embodiment may be applied to an arthroscope constraint module in a lens holding robot and a lens holding robot position-level inverse kinematics solving module.
The arthroscope motion constraint module can be used for constraining the arthroscope, so that the arthroscope only has a speed in a certain direction at the implantation point, and therefore, the speed level RCM (Remote Center of Motion, remote motion center) constraint is introduced. Wherein the linear velocity and the angular velocity of the RCM point are
The linear and angular velocities of the arthroscopic tip are The linear and angular velocities of the arthroscopic attachment were
The embodiment can use the { con } coordinate system corresponding to the connecting piece for modeling, and the specific formula is as follows:
The method can obtain:
Where μ in denotes an arthroscope implantation depth ratio, and d ka denotes an arthroscope length.
Since the x-axis and y-axis of the RCM position are constrained here, i.e. the velocity perpendicular to the axis is 0, it is again possible to obtain:
And because the arthroscope and the connecting piece can be seen as the same rigid body, the angular speeds of the end of the arthroscope and the connecting piece are equal, namely:
Accordingly, it is possible to And (3) withAndAnd (3) withThe linear velocity and angular velocity corresponding to the connector can be expressed as the linear velocity and angular velocity corresponding to the arthroscopic tip:
in this way, a velocity step relationship between the arthroscopic tip and the connector is established, wherein
Because the implantation depth mu in can generate certain change in the arthroscope planning process, certain errors can occur in long-time planning of the mechanical arm, and the reason is that RCM restraint points can also generate displacement while the implantation depth is changed. Accordingly, it is necessary to put certain constraints on it. For arthroscopic surgery, a certain threshold value exists for the variation of the axial planning of the mechanical arm in the arthroscope, so that the implantation depth of the arthroscope works in a safe area. Accordingly, after adding the constraint, the desired implantation depth may be expressed as:
Wherein K p,μ、Kd,μ is a normal number.
And then, performing joint kinematics solving of a mechanical arm of the mirror holding robot through a mirror holding robot position-level inverse kinematics solving module. Specifically, the corresponding D-H (Denavit-Hartenberg) parameters of the mechanical arm are obtained according to the initial coordinate system of the mechanical armThe D-H parameter is a method for describing the geometrical relationship between the robot joint and the link, and represents the length, torsion angle, link offset distance and joint angle of the link between the joint i-1 and the joint i, respectively, where the method can be equivalent to performing translation and rotation along the x-axis and the z-axis respectively, so as to obtain translation and rotation along the x-axis:
Translation and rotation along the z-axis:
Wherein, To correspond to the rotation angle of the joint at a certain moment.
According to the method, the homogeneous transformation matrix corresponding to each connecting rod can be obtained as follows:
Modeling is carried out on the whole based on the tail end coordinate system of the mechanical arm to obtain the corresponding relation of the coordinate systems based on the tail end coordinate system, namely:
Wherein the method comprises the steps of Respectively representing the directional vectors of the i coordinate system along the x, y, z axes based on the end coordinate system.
Since the mechanical arm is 6-DOF and is a rotary joint, the jacobian matrix J e=[J1J2...J6 column vector is obtained as follows:
accordingly, it is possible to obtain:
and further can realize the rotation S e=[ve ωe]T of the tail end of the mechanical arm to the angular velocity of the joint according to the above method Is a transition of (2).
An exemplary diagram of the system of the lens holding robot and the RCM constraint can be referred to in FIG. 3.
S1232, solving the joint angular velocity corresponding to the tail end velocity based on the jacobian matrix.
Further, S1232 may include:
solving the joint angular velocity corresponding to the tail end velocity according to a first calculation formula;
the first calculation formula is:
Wherein, Representing the angular velocity of the joint, J b representing the jacobian matrix,Representing a rotation matrix of the tip of the robotic arm, S con representing the tip speed.
Specifically, in this embodiment, the inverse kinematics solution module of the speed stage of the lens-holding robot may implement the transformation from the end of the connecting piece to the joint angle of the mechanical arm, and obtain the angle corresponding to the joint angle of the mechanical arm according to the rotation of the connecting piece obtained by the arthroscope constraint module, so as to control the movement of the lens-holding robot, where the calculated formula of the solution is as follows:
If the multi-joint medical instrument autonomous identification and positioning module does not detect a corresponding prediction frame, namely the instrument is out of view, the mechanical arm is immediately stationary. A schematic flow chart of solving the desired joint angle based on the inverse kinematics solution module of the speed stage of the lens holding robot can be referred to fig. 4.
S1233, solving the joint angular velocity corresponding to the expected velocity based on a second cyclic neural network.
Further, S1233 may include:
Inputting the expected speed into the second cyclic neural network to obtain the joint angular speed output by the second cyclic neural network;
The second recurrent neural network is:
Wherein, mu is Lagrange's operator, For the derivative of the Lagrangian operator, γ is a preset constant scalar and γ >0, t is a solution time, Γ is an activation function of the second recurrent neural network, ψ is a projection function, S is the desired speed ,U=W-1JTA-1,V=W-1(I-JTA-1JW-1),A=JW-1JT,W and J matrix are both full rank and positive definite matrix, I is a unity matrix, x=US-Vμ,X is the output of the second recurrent neural network,Representing the angular velocity of the joint.
Specifically, the embodiment can complete the solution from the end speed of the arthroscope to the angular speed of the joint of the lens-holding robot based on the inverse kinematics solution module of the speed stage of the lens-holding robot of the cyclic neural network. The module inputs a desired speed as planned by a vision-based arthroscopeThe output is the angular velocity of the joint of the lens holding robotAnd the rotation matrix is sent to the lens holding robot by the upper computer, wherein the rotation matrix is as follows:
Wherein d con is the axial length of the connecting piece at the tail end of the mechanical arm.
It is clear that the speed-stage arthroscope planning is realized according to the intelligent tracking module of the surgical instrument tip to obtain the expected speedFinally, the information is converted into information such as RCM constraint pointsThereby directly solving the tail end speed of the mechanical arm
The module describes the inverse kinematics of the end speed of the mirror-holding robot to the joint angular speed as:
θ-≤θ≤θ+
The inverse kinematics based on quadratic programming above is built in a speed stage, where θ represents the angles of the joints of the mirror-holding robot, Representing the angular velocity of the joint of the lens holding robot. v end=[vx,vy,vz,wx,wy,wz]T represents the control speed of the end of the mirror-holding robot and J acob represents the jacobian of the joint angle space to the end of the mirror-holding robot.
The joint angle and angular velocity constraints are unified, and then the method can be obtained:
Wherein:
The final velocity calculation model of the velocity-stage-based quadratic programming can thus be described as:
min xTWx/2
s.t.Jx=S
x-≤x≤x+
Wherein the method comprises the steps of And the subscripts of each matrix are banned to facilitate further calculation. dt is the communication time interval between the upper computer and the mechanical arm. W is defined as each joint weight coefficient, and both W and J matrices are full rank and positive definite matrices.
According to the Lagrangian multiplier method, the Lagrangian equation can be expressed as:
min xTWx+λ(Jx-S)+μ(Ψ(x+μ)-x)
from the KKT condition, the QP problem is equivalent to the following equation:
where λ, μ are Lagrangian operators. ψ (·) is the projection function, expressed specifically as:
Definition a=jw -1JT, then we can:
λ=-A-1(S+JW-1μ)
Substituting λ into the original formula:
x=US-Vμ
where u=w -1JTA-1,V=W-1(I-JTA-1JW-1), I is the identity matrix. To guarantee ψ (x+μ) -x=0 and solve μ, a quadratic programming solution structure based on a recurrent neural network can be proposed:
Where γ is the designed constant scalar and γ >0, t is the solution time. When t is → infinity, the sum of (gamma+t γ) and → infinity is obtained to form the convergence coefficient of the neural network. Γ (·) is the activation function of the neural network. The selection of the activation function can effectively change the convergence rate of the cyclic neural network, so that the traditional typical activation function is synthesized, the power function is favorable for the cyclic neural network to converge in a limited time, and the linear function is favorable for accelerating the convergence rate, and the following activation functions are selected:
Γ(x)=x+|x|2tanh(x)+|x|0.5tanh(x)
tan h (x) is a hyperbolic tangent function, and the specific expression is:
in summary, the constructed recurrent neural network model is as follows:
Where u=w -1JTA-1,V=W-1(I-JTA-1JW-1). And the output vector can be obtained by the following equation:
x=US-Vμ
transforming to obtain:
can be finally obtained according to multiple iterations of the cyclic neural network I.e. the angular velocity of each joint of the mechanical arm. And the angular velocity of each joint is sent to the lens holding robot through the upper computer interface of the lens holding robot, so that the arthroscope lens tracks the medical instrument in real time. The schematic structural diagram of the second cyclic neural network may refer to fig. 5, and the schematic flow diagram of the mirror-holding robot speed-stage inverse kinematics solving module for solving the desired joint angle based on the cyclic neural network may refer to fig. 6.
S124, solving the angular velocities of the joints corresponding to the mechanical arm according to the coordinates of the tip center of the instrument by using a first cyclic neural network.
Further, S124 may include:
Iteratively solving a second calculation formula by using the first cyclic neural network, and solving to obtain the angular velocity of each joint;
The second calculation formula is:
Wherein J b is the jacobian matrix of the lens holding robot, R is the rotation matrix, In the form of an image jacobian matrix,For the desired speed of the arthroscope,The expected speed is determined according to the coordinates of the center of the tip.
Specifically, the embodiment can process the information obtained by the multi-constraint mirror-holding robot solving module based on the cyclic neural network according to the multi-joint medical instrument autonomous identification and positioning module, and meanwhile consider RCM constraint to realize the resolution of the joint angular velocity of the mirror-holding robot.
The module inputs information obtained by the multi-joint medical instrument autonomous identification and positioning module and outputs the information as joint angular velocity of the mirror-holding robot.
Specifically, information obtained by the autonomous recognition and positioning module for the multi-joint medical instrument, namely, a desired prediction frame center coordinate s des=[wu/2 hv/2]T, namely, an arthroscopic visual field center is input. Thus, the expected moving distance d exp=||sdes-stip |in the image coordinate system is obtained for the frame.
The proportional-differential control can be used to obtain the corresponding expected speed in the arthroscopic camera
According to the solving result of the mirror holding robot position level inverse kinematics solving module, the joint angular velocity is realizedConversion to the end-of-arm spin S e=[ve ωe]T, namely:
Wherein the image jacobian, let u=c u-u0,v=cv-v0, can be obtained:
it can be obtained from forward kinematics:
Wherein J b is a jacobian matrix of the lens holding robot, R is a corresponding rotation matrix, In the form of an image jacobian matrix,A corresponding desired speed within the arthroscopic camera,The joint angular velocity of the lens holding robot is obtained.
Secondly, according to RCM constraint in an arthroscope constraint module, the following can be obtained:
Two sides in the formula are derived for time:
wherein according to 3.4 section mirror robot kinematics The method can obtain:
wherein J ka、Jcon∈R3×6 is the velocity component of the jacobian matrix under the arthroscope and the connection. Belongs to the direction vector of an arthroscope, a connecting piece and an RCM point under a basic coordinate system.Representing the RCM point velocity, since the RCM point constraint exists,Is a zero vector. The equation above is combined:
Order the Then it is possible to obtain:
Jx=v
The quadratic programming problem constructed according to the above described embodiments:
min xTWx/2
s.t.jx=v
x-≤x≤x+
Finally, the method can finally obtain the neural network according to multiple iterations of the cyclic neural network I.e. the angular velocity of each joint of the mechanical arm.
A schematic flow chart of a multi-constraint mirror-holding robot solving module for solving a desired joint angle based on a cyclic neural network can be referred to fig. 7.
S130, controlling the mechanical arm to drive the arthroscope to move according to the angular velocity of each joint.
Specifically, the embodiment can be applied to a mirror holding robot movement module in a mirror holding robot, and the module can transmit the input expected angles of all joints of the mechanical arm to the real mechanical arm to control the mechanical arm to make corresponding movements.
Referring to fig. 8, the embodiment of the application further provides a vision servo system of a lens holding robot, which can implement the vision servo method, and the system comprises:
the image acquisition module is used for acquiring an image shot by an arthroscope of the lens holding robot;
A coordinate determination module for determining coordinates of a tip center of an instrument in the image;
the speed solving module is used for solving the angular speeds of all joints corresponding to the mechanical arm of the mirror holding robot according to the coordinates of the center of the tip and the coordinates of the target position;
and the motion control module is used for controlling the mechanical arm to drive the arthroscope to move according to the angular velocity of each joint.
Optionally, the coordinate determination module includes:
a prediction frame detection unit, configured to detect one or more instrument prediction frames in the image using a pre-trained target detection model, where a geometric center of each instrument prediction frame corresponds to a tip center of one of the instruments;
A coordinate determination unit configured to determine coordinates of a geometric center of each of the instrument prediction frames as coordinates corresponding to a tip center of the instrument;
the coordinates of the tip center of the instrument are:
wherein c u,cv is the horizontal and vertical coordinates of the instrument after the tip center is adjusted, The horizontal and vertical coordinates before the tip center of the instrument is adjusted,The method comprises the following steps:
Wherein, Representing the abscissa of the ith instrument prediction box,Representing the ordinate of the ith instrument prediction box,Representing the width of the ith instrument prediction box,Representing the height of the i-th instrument prediction box,And (3) representing the type prediction probability of the instrument in the ith instrument prediction frame, wherein K is the total number of the instrument prediction frames, and lambda is a constant of a positive value.
Optionally, the speed solving module includes:
the system comprises a first speed solving unit, a desired speed determining unit, a first speed calculating unit, a second speed calculating unit and a first speed calculating unit, wherein the first speed solving unit is used for determining a desired movement distance according to the coordinate of the tip center of the instrument and the coordinate of a target position;
And the second speed solving unit is used for solving the angular speeds of the joints corresponding to the mechanical arm according to the coordinates of the tip center of the instrument by using the first cyclic neural network.
Optionally, the first speed solving unit includes:
the system comprises a jacobian matrix solving unit, a joint angular velocity calculating unit and a remote motion center calculating unit, wherein the jacobian matrix solving unit is used for determining the tail end velocity of the mechanical arm according to the expected velocity under the constraint of the remote motion center;
And the cyclic neural network solving unit is used for solving the joint angular velocity corresponding to the expected velocity based on a second cyclic neural network.
Optionally, the jacobian matrix solving unit includes:
the jacobian matrix solving subunit is used for solving the joint angular velocity corresponding to the tail end velocity according to a first calculation;
the first calculation formula is:
Wherein, Representing the angular velocity of the joint, J b representing the jacobian matrix,Representing a rotation matrix of the tip of the robotic arm, S con representing the tip speed.
Optionally, the recurrent neural network solving unit includes:
the cyclic neural network solving subunit is used for inputting the expected speed into the second cyclic neural network to obtain the joint angular speed output by the second cyclic neural network;
The second recurrent neural network is:
Wherein, mu is Lagrange's operator, For the derivative of the Lagrangian operator, γ is a preset constant scalar and γ >0, t is a solution time, Γ is an activation function of the second recurrent neural network, ψ is a projection function, S is the desired speed ,U=W-1JTA-1,V=W-1(I-JTA-1JW-1),A=JW-1JT,W and J matrix are both full rank and positive definite matrix, I is a unity matrix, x=US-Vμ,X is the output of the second recurrent neural network,Representing the angular velocity of the joint.
Optionally, the motion control module includes:
The motion control unit is used for iteratively solving a second calculation formula by using the first cyclic neural network to obtain the angular velocity of each joint;
The second calculation formula is:
Wherein J b is the jacobian matrix of the lens holding robot, R is the rotation matrix, In the form of an image jacobian matrix,For the desired speed of the arthroscope,The expected speed is determined according to the coordinates of the center of the tip.
It can be understood that the content in the above method embodiment is applicable to the system embodiment, and the functions specifically implemented by the system embodiment are the same as those of the above method embodiment, and the achieved beneficial effects are the same as those of the above method embodiment.
Aiming at the problems that the existing lens holding robot cannot consider the constraint conditions generated by the working environment where the redundant robot is located and the structure of the redundant robot, and the intelligent performance of the complex working environment is not high, the scheme of the embodiment of the application is described and illustrated in detail by combining specific application examples:
referring to fig. 9, the present embodiment provides a structural block diagram of a vision servo system of a lens holding robot. Referring to fig. 10, the present embodiment provides an application scenario diagram of a vision servo system of a lens holding robot.
The endoscope-holding robot of the embodiment can comprise an endoscope-holding robot body (namely a mechanical arm), an arthroscope connecting piece on the tail end of the robot body, an arthroscope and a visual servo system, wherein the visual servo system can comprise an independent multi-joint medical instrument identification and positioning module used for achieving real-time acquisition of the positions of a plurality of surgical instruments and obtaining the tip centers of the surgical instruments, an endoscope-holding robot kinematics solving module used for conducting inverse joint kinematics solving of the endoscope-holding mechanical arm body, an intelligent surgical instrument tip tracking module used for achieving expected speed conversion from an image coordinate system to a coordinate system of the tail end of the arthroscope, an arthroscope constraint module used for achieving speed conversion from a coordinate system of the connecting piece after constraint to a coordinate system of the tail end of the arthroscope, and an inverse kinematics solving module based on a jacobian matrix and an inverse kinematics solving module based on a cyclic neural network, wherein the two modules can be respectively used for achieving speed conversion between the tail end of the arthroscope and the connecting piece under the constraint of a remote movement center RCM (Remote Center of Motion).
And the multi-constraint mirror holding robot solving module is used for realizing the information processing obtained by the multi-joint medical instrument autonomous identification and positioning module, and simultaneously considering RCM constraint to realize the resolution of the joint angular velocity of the mirror holding robot. The module can replace an intelligent tracking module of the surgical instrument tip, an arthroscope constraint module and a mirror-holding robot inverse kinematics module in the embodiment, and the mirror-holding robot movement module is used for controlling the movement of the mechanical arm body.
The vision servo system of the lens holding robot can provide two structures and three different methods for planning the robot, and can realize accurate control of the lens holding robot.
The embodiment of the application also provides electronic equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the visual servo method when executing the computer program. The electronic equipment can be any intelligent terminal including a tablet personal computer, a vehicle-mounted computer and the like.
It can be understood that the content in the above method embodiment is applicable to the embodiment of the present apparatus, and the specific functions implemented by the embodiment of the present apparatus are the same as those of the embodiment of the above method, and the achieved beneficial effects are the same as those of the embodiment of the above method.
Referring to fig. 11, fig. 11 illustrates a hardware structure of an electronic device according to another embodiment, the electronic device includes:
the processor 1101 may be implemented by a general purpose CPU (central processing unit), a microprocessor, an application specific integrated circuit (ApplicationSpecificIntegratedCircuit, ASIC), or one or more integrated circuits, etc. for executing related programs to implement the technical solution provided by the embodiments of the present application;
Memory 1102 may be implemented in the form of read-only memory (ReadOnlyMemory, ROM), static storage, dynamic storage, or random access memory (RandomAccessMemory, RAM). The memory 1102 may store an operating system and other application programs, and when the technical solutions provided in the embodiments of the present disclosure are implemented by software or firmware, relevant program codes are stored in the memory 1102, and the processor 1101 invokes a visual servoing method for executing the embodiments of the present disclosure;
An input/output interface 1103 for implementing information input and output;
The communication interface 1104 is configured to implement communication interaction between the device and other devices, and may implement communication in a wired manner (e.g. USB, network cable, etc.), or may implement communication in a wireless manner (e.g. mobile network, WIFI, bluetooth, etc.);
Bus 1105 transmits information between the various components of the device (e.g., processor 1101, memory 1102, input/output interface 1103, and communication interface 1104);
wherein the processor 1101, memory 1102, input/output interface 1103 and communication interface 1104 enable communication connection therebetween within the device via bus 1105.
The embodiment of the application also provides a computer readable storage medium, which stores a computer program, and the computer program realizes the visual servo method when being executed by a processor.
It can be understood that the content of the above method embodiment is applicable to the present storage medium embodiment, and the functions of the present storage medium embodiment are the same as those of the above method embodiment, and the achieved beneficial effects are the same as those of the above method embodiment.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application, and those skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
It will be appreciated by persons skilled in the art that the embodiments of the application are not limited by the illustrations, and that more or fewer steps than those shown may be included, or certain steps may be combined, or different steps may be included.
The above described apparatus embodiments are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "and/or" is used to describe an association relationship of an associated object, and indicates that three relationships may exist, for example, "a and/or B" may indicate that only a exists, only B exists, and three cases of a and B exist simultaneously, where a and B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one of a, b or c may represent a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is merely a logical function division, and there may be another division manner in actual implementation, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including multiple instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method of the various embodiments of the present application. The storage medium includes various media capable of storing programs, such as a U disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory RAM), a magnetic disk, or an optical disk.
The preferred embodiments of the present application have been described above with reference to the accompanying drawings, and are not thereby limiting the scope of the claims of the embodiments of the present application. Any modifications, equivalent substitutions and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present application shall fall within the scope of the claims of the embodiments of the present application.
Claims (9)
1. A method of vision servoing for a lens-holding robot, the method comprising:
acquiring an image shot by an arthroscope of a lens holding robot;
determining coordinates of a tip center of an instrument in the image;
Solving the angular velocities of all joints corresponding to the mechanical arm of the mirror holding robot according to the coordinates of the center of the tip and the coordinates of the target position;
controlling the mechanical arm to drive the arthroscope to move according to the angular velocity of each joint;
The determining coordinates of the tip center of the instrument in the image includes:
detecting one or more instrument prediction frames in the image by using a pre-trained target detection model, wherein the geometric center of each instrument prediction frame corresponds to the tip center of one instrument;
determining the coordinates of the geometric center of each instrument prediction frame as coordinates corresponding to the tip center of the instrument;
the coordinates of the tip center of the instrument are:
wherein c u,cv is the horizontal and vertical coordinates of the instrument after the tip center is adjusted, The horizontal and vertical coordinates before the tip center of the instrument is adjusted,The method comprises the following steps:
Wherein, Representing the abscissa of the ith instrument prediction box,Representing the ordinate of the ith instrument prediction box,Representing the width of the ith instrument prediction box,Representing the height of the i-th instrument prediction box,And (3) representing the type prediction probability of the instrument in the ith instrument prediction frame, wherein K is the total number of the instrument prediction frames, and lambda is a constant of a positive value.
2. The method according to claim 1, wherein the step of solving each joint angular velocity corresponding to the mechanical arm of the lens holding robot according to the coordinates of the tip center and the coordinates of the target position comprises:
determining a desired movement distance according to the coordinates of the tip center of the instrument and the coordinates of the target position, determining a desired speed of the end of the arthroscope according to the desired movement distance, and determining the angular velocities of the joints corresponding to the mechanical arm according to the desired speed;
Or solving the angular velocities of the joints corresponding to the mechanical arm according to the coordinates of the tip center of the instrument by using a first cyclic neural network.
3. A method of visual servoing of a mirror-holding robot according to claim 2, wherein said determining each of said joint angular velocities corresponding to said robotic arm in accordance with said desired velocity comprises:
determining the tail end speed of the mechanical arm according to the expected speed under the constraint of a remote movement center;
or solving the joint angular velocity corresponding to the expected velocity based on a second cyclic neural network.
4. A lens holding robot vision servo method as claimed in claim 3, wherein said solving said joint angular velocity corresponding to said tip velocity based on jacobian matrix comprises:
solving the joint angular velocity corresponding to the tail end velocity according to a first calculation formula;
the first calculation formula is:
Wherein, Representing the angular velocity of the joint, J b representing the jacobian matrix,Representing a rotation matrix of the tip of the robotic arm, S con representing the tip speed.
5. A lens holding robot vision servo method as claimed in claim 3, wherein said solving said joint angular velocity corresponding to said desired velocity based on a second cyclic neural network comprises:
Inputting the expected speed into the second cyclic neural network to obtain the joint angular speed output by the second cyclic neural network;
The second recurrent neural network is:
Wherein, mu is Lagrange's operator, For the derivative of the Lagrangian operator, γ is a preset constant scalar and γ >0, t is a solution time, Γ is an activation function of the second recurrent neural network, ψ is a projection function, S is the desired speed ,U=W-1JTA-1,V=W-1(I-JTA-1JW-1),A=JW-1JT,W and J matrix are both full rank and positive definite matrix, I is a unity matrix, x=US-Vμ,X is the output of the second recurrent neural network,Representing the angular velocity of the joint.
6. The method according to claim 2, wherein the calculating the angular velocities of the joints corresponding to the mechanical arm according to the coordinates of the tip center of the instrument using the first cyclic neural network comprises:
Iteratively solving a second calculation formula by using the first cyclic neural network, and solving to obtain the angular velocity of each joint;
The second calculation formula is:
Wherein J b is the jacobian matrix of the lens holding robot, R is the rotation matrix, In the form of an image jacobian matrix,For the desired speed of the arthroscope,The expected speed is determined according to the coordinates of the center of the tip.
7. A lens holding robotic vision servo system, the system comprising:
the image acquisition module is used for acquiring an image shot by an arthroscope of the lens holding robot;
A coordinate determination module for determining coordinates of a tip center of an instrument in the image;
the speed solving module is used for solving the angular speeds of all joints corresponding to the mechanical arm of the mirror holding robot according to the coordinates of the center of the tip and the coordinates of the target position;
The motion control module is used for controlling the mechanical arm to drive the arthroscope to move according to the angular velocity of each joint;
The determining coordinates of the tip center of the instrument in the image includes:
detecting one or more instrument prediction frames in the image by using a pre-trained target detection model, wherein the geometric center of each instrument prediction frame corresponds to the tip center of one instrument;
determining the coordinates of the geometric center of each instrument prediction frame as coordinates corresponding to the tip center of the instrument;
the coordinates of the tip center of the instrument are:
wherein c u,cv is the horizontal and vertical coordinates of the instrument after the tip center is adjusted, The horizontal and vertical coordinates before the tip center of the instrument is adjusted,The method comprises the following steps:
Wherein, Representing the abscissa of the ith instrument prediction box,Representing the ordinate of the ith instrument prediction box,Representing the width of the ith instrument prediction box,Representing the height of the i-th instrument prediction box,And (3) representing the type prediction probability of the instrument in the ith instrument prediction frame, wherein K is the total number of the instrument prediction frames, and lambda is a constant of a positive value.
8. An electronic device comprising a memory storing a computer program and a processor implementing the method of any of claims 1 to 6 when the computer program is executed by the processor.
9. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 6.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311805261.6A CN117901090B (en) | 2023-12-25 | 2023-12-25 | Visual servo method and system for lens holding robot |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311805261.6A CN117901090B (en) | 2023-12-25 | 2023-12-25 | Visual servo method and system for lens holding robot |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN117901090A CN117901090A (en) | 2024-04-19 |
| CN117901090B true CN117901090B (en) | 2025-05-06 |
Family
ID=90695944
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311805261.6A Active CN117901090B (en) | 2023-12-25 | 2023-12-25 | Visual servo method and system for lens holding robot |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN117901090B (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118593132B (en) * | 2024-06-24 | 2025-12-26 | 浙江大学 | A method for autonomous control of the laparoscopic surgical arm |
| CN118664603B (en) * | 2024-07-24 | 2025-05-06 | 中山大学 | Mirror-holding robot control method and system based on multi-mode question-answering large model |
| CN119700312B (en) * | 2024-12-09 | 2026-02-03 | 浙江大学 | Mirror holding arm control method based on interactive surgical instrument segmentation |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102764159A (en) * | 2005-05-19 | 2012-11-07 | 直观外科手术操作公司 | Software center and highly configurable robotic systems for surgery and other uses |
| CN113334390A (en) * | 2021-08-06 | 2021-09-03 | 成都博恩思医学机器人有限公司 | Control method and system of mechanical arm, robot and storage medium |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111538949B (en) * | 2020-07-10 | 2020-10-16 | 深圳市优必选科技股份有限公司 | Redundant robot inverse kinematics solving method and device and redundant robot |
| CN115847420B (en) * | 2022-12-28 | 2025-09-09 | 中山大学·深圳 | Mechanical arm vision tracking control method and system for moving target |
| CN115972208B (en) * | 2022-12-30 | 2025-05-20 | 杭州华匠医学机器人有限公司 | Target following control method, mirror-holding robot and computer-readable medium |
| CN117207197A (en) * | 2023-10-26 | 2023-12-12 | 苏州微创畅行机器人有限公司 | Robotic arm safety boundary control method and system |
-
2023
- 2023-12-25 CN CN202311805261.6A patent/CN117901090B/en active Active
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102764159A (en) * | 2005-05-19 | 2012-11-07 | 直观外科手术操作公司 | Software center and highly configurable robotic systems for surgery and other uses |
| CN113334390A (en) * | 2021-08-06 | 2021-09-03 | 成都博恩思医学机器人有限公司 | Control method and system of mechanical arm, robot and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| CN117901090A (en) | 2024-04-19 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN117901090B (en) | Visual servo method and system for lens holding robot | |
| US12131529B2 (en) | Virtual teach and repeat mobile manipulation system | |
| Li | Human–robot interaction based on gesture and movement recognition | |
| Sayour et al. | Autonomous robotic manipulation: real‐time, deep‐learning approach for grasping of unknown objects | |
| Taryudi et al. | Eye to hand calibration using ANFIS for stereo vision-based object manipulation system | |
| CN119319568B (en) | Robotic arm control method, device, equipment and storage medium | |
| Ribeiro et al. | Second-order position-based visual servoing of a robot manipulator | |
| CN114217303B (en) | Target positioning and tracking method and device, underwater robot and storage medium | |
| CN118700156B (en) | A robotic arm control method for automatic docking of dynamic targets | |
| CN111152227A (en) | Mechanical arm control method based on guided DQN control | |
| CN120287303A (en) | Vision-based collaborative robot terminal control method and related equipment | |
| US20240377843A1 (en) | Location based change detection within image data by a mobile robot | |
| CN113858217A (en) | Multi-robot interaction three-dimensional visual pose perception method and system | |
| Tong et al. | Cascade-LSTM-based visual-inertial navigation for magnetic levitation haptic interaction | |
| CN109934155B (en) | Depth vision-based collaborative robot gesture recognition method and device | |
| Junare et al. | Deep learning based end-to-end grasping pipeline on a lowcost 5-dof robotic arm | |
| Walęcki et al. | Control system of a service robot's active head exemplified on visual servoing | |
| Svishchev | Advanced embedded systems for autonomous robots control | |
| US20240262635A1 (en) | Conveyance system for moving object based on image obtained by image capturing device | |
| Fang et al. | Learning from wearable-based teleoperation demonstration | |
| CN119916837B (en) | Automatic navigation control method and device and teaching experiment mobile platform system | |
| Chen et al. | Research on Door‐Opening Strategy Design of Mobile Manipulators Based on Visual Information and Azimuth | |
| Liu et al. | Bolster Spring Visual Servo Positioning Method Based on Depth Online Detection | |
| CN115922731B (en) | Control method of robot and robot | |
| CN121468516A (en) | Teleoperation method and robot |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |