
CN113742992A - Master-slave control method based on deep learning and application - Google Patents

Master-slave control method based on deep learning and application

Info

Publication number
CN113742992A
CN113742992A (application CN202110433533.9A)
Authority
CN
China
Prior art keywords
master
slave
deep learning
execution unit
hand
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110433533.9A
Other languages
Chinese (zh)
Other versions
CN113742992B (en)
Inventor
周扬
何燕
蔡述庭
郭靖
熊晓明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Dinghang Information Technology Service Co ltd
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN202110433533.9A
Publication of CN113742992A
Application granted
Publication of CN113742992B
Legal status: Active (Current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30Surgical robots
    • A61B34/37Leader-follower robots
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Optimization (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Surgery (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Robotics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Computer Hardware Design (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Manipulator (AREA)

Abstract

In view of the problems with existing solutions, the present invention proposes a master-slave control method based on deep learning, and an application thereof. After the motion coordinate system of the execution unit is established, a master-slave mapping relationship is established and used to describe the forward and inverse kinematics of the master and slave hands corresponding to the execution unit, and a master-slave heterogeneous model is constructed. The target is then tracked in real time under the master-slave relationship to obtain tracking-result data, and finally deep learning is performed with a convolutional neural network to complete the control process of the execution unit. With the assistance of the target-tracking algorithm in the master-slave control mode, the invention improves the manipulation accuracy of the flexible robotic hand and facilitates operating in complex, narrow surgical spaces, thereby reducing surgical risk and relieving the surgeon's workload.

Description

Master-slave control method and application based on deep learning

Technical Field

The present invention relates to the technical field of deep learning, and in particular to a target-tracking algorithm (GOTURN) and its application.

Background Art

At present, surgical robots are used in every stage of surgical planning, minimally invasive positioning, and non-invasive treatment. Surgery performed with surgical robots can greatly reduce postoperative pain caused by excessive trauma, promote wound recovery, and reduce postoperative complications. Most developed countries already have relatively mature medical teleoperation systems; these advanced surgical robots provide intuitive visual feedback and convenient operation, which has brought medical surgical robots into an epoch-making period of rapid development. During an actual operation, however, certain important organs, tissues, or blood vessels must be avoided, which places certain demands on the flexibility of the manipulator. The Da Vinci surgical robot is the current existing solution: its main principle is that the surgeon manipulates an operating console (the master hand) to control a robotic arm (the slave hand) that enters the human body and performs complex surgical operations. The advantages of this existing solution are as follows:

It combines the advantages of open surgery with those of minimally invasive surgery, allowing surgeons to complete complex, large-scale operations.

The flexible, fast surgical robotic arm responds quickly to the surgeon's input, helping the surgeon easily complete a variety of complex surgical actions.

The high-precision, stable surgical robotic arm guarantees the accuracy and safety of the operation; appropriate algorithms can also filter out tremor of the surgeon's hand and prevent erroneous motion. A high-performance surgical robotic arm makes surgery more convenient and easier.

However, the Da Vinci surgical robot still has shortcomings that cannot be ignored: when organs must be avoided or the operation takes place in a confined space (such as the oral cavity), the manipulator still suffers from insufficient accuracy at its distal end.

Summary of the Invention

In view of the problems with existing solutions, the present invention proposes a master-slave control method based on deep learning, and an application thereof. The GOTURN algorithm, trained offline, is used to assist the surgical robot, so that with the aid of the target-tracking algorithm in the master-slave control mode the operator can accurately manipulate the flexible robotic hand to operate in complex, narrow surgical spaces, thereby reducing surgical risk and relieving the surgeon's workload.

The technical solution of the present invention is:

A master-slave control method based on deep learning, comprising:

establishing a motion coordinate system of an execution unit;

establishing a master-slave mapping relationship, describing the forward and inverse kinematics of the master and slave hands corresponding to the execution unit through the master-slave mapping relationship, and constructing a master-slave heterogeneous model;

tracking the target in real time under the master-slave relationship to obtain tracking-result data;

performing deep learning through a convolutional neural network to complete the control process of the execution unit.

The step of "establishing a master-slave mapping relationship, describing the forward and inverse kinematics of the master and slave hands corresponding to the execution unit through the master-slave mapping relationship, and constructing a master-slave heterogeneous model" comprises:

establishing a mapping relationship in the Cartesian space coordinate system to describe the forward and inverse kinematics of the master and slave hands, and solving with the inverse Jacobian matrix;

velocity of the master-hand end: ΔX = J(θ)·Δθ;

velocity of the slave-hand end: Δθ = J(θ)⁻¹·ΔX;

where θ̇ is the joint angular-velocity vector, J(θ) ∈ R^(6×6) is the Jacobian matrix, and Ẋ is the end-effector velocity vector.

The step of "tracking the target in real time under the master-slave relationship to obtain tracking-result data" comprises:

a step of real-time tracking of the target with the GOTURN algorithm.

The step of "performing deep learning through a convolutional neural network" comprises:

pre-training the convolutional layers on ImageNet and training the network with a learning rate of 1e-5, the remaining hyperparameters taking the CaffeNet defaults;

drawing each training example alternately from the training set and generating video crops with the GOTURN algorithm.

Between "establishing a motion coordinate system of the execution unit" and "establishing a master-slave mapping relationship, describing the forward and inverse kinematics of the master and slave hands corresponding to the execution unit through the master-slave mapping relationship, and constructing a master-slave heterogeneous model", the method comprises a flexibility test step:

driving one end of the execution unit to perform a bending motion for the flexibility test, this connecting end being defined as the distal end;

determining r, L, θ, δ and d as, respectively, the distance between the wire and the central axis, the length of the flexible joint, the rotation angle in the y-direction, the rotation angle in the z-direction, and the projection of the distal-end position onto the plane xbOyb;

the position projection can be calculated as:

[position-projection formula, shown only as an image in the source]

where dx and dy denote the position projections on the x- and y-axes respectively; dx and dy can be measured during the operation, and the posture of the distal end of the flexible joint is given by:

[distal-end posture formula, shown only as an image in the source]

attaching the gripper to the distal end, the coordinates of the gripper tip being:

[gripper-tip coordinate formula, shown only as an image in the source]

where s denotes the connection length between the rigid rod and the machined spring, and Lg denotes the length of the gripper; when s = 0, the execution unit is regarded as a bendable joint.

After the "flexibility test step", the method comprises a performance test step:

configuring a drive device that provides rotational power and connecting it to the execution unit;

keeping the execution unit in a bent state while loading a tensile force on the distal end of the gripper, and performing a performance test on the execution unit.

The step of "tracking the target in real time under the master-slave relationship to obtain tracking-result data" comprises a smoothing step for the tracking result:

first modeling the center of the bounding box in the current frame, (c'x, c'y), relative to the center of the bounding box in the previous frame, (cx, cy):

c'x = cx + w·Δx

c'y = cy + h·Δy

where w and h are the width and height of the bounding box in the previous frame, and Δx and Δy are random variables;

similarly, the size change is modeled by:

w' = w·γw

h' = h·γh

where w' and h' are the current width and height of the bounding box, w and h are its previous width and height, and γw and γh are random variables;

in the training set, γw and γh are modeled with a Laplace distribution with mean 1, and random targets drawn from this Laplace distribution are used to augment the training set;

the scale parameters of the Laplace distribution are:

for the motion of the bounding-box center, and for the change of the bounding-box size, respectively [the numeric values are given only as images in the source].

After "performing deep learning through a convolutional neural network", the method comprises a step of eliminating the continuously accumulating master-slave following error with a PD element, and a step of eliminating tremor of the surgeon's hand;

In "eliminating the continuously accumulating master-slave following error with the PD element":

the control law is:

[PD control law, shown only as an image in the source]

where Ẋm and Ẋs denote the pose velocities of the master- and slave-hand end-effectors, Xm and Xs denote the poses of the master- and slave-hand end-effectors, and kp and kd denote the proportional and derivative parameters, respectively;

In "eliminating tremor of the surgeon's hand", a sliding-mean filter is applied twice, to the master hand and to the slave hand respectively, specifically:

[sliding-mean filter formula, shown only as an image in the source]

where Ni is the filter output, n is the order of the mean digital filter (i ≥ n), and i denotes the i-th sampling period.

An application of the above master-slave control method based on deep learning in the field of medical flexible robots.

The beneficial effects of the present invention are:

After establishing the motion coordinate system of the execution unit, the present invention establishes a master-slave mapping relationship, uses it to describe the forward and inverse kinematics of the master and slave hands corresponding to the execution unit, and constructs a master-slave heterogeneous model; the target is tracked in real time under the master-slave relationship to obtain tracking-result data, and finally deep learning is performed through a convolutional neural network to complete the control process of the execution unit. With the assistance of the target-tracking algorithm in the master-slave control mode, the invention improves the manipulation accuracy of the flexible robotic hand and facilitates operating in complex, narrow surgical spaces, thereby reducing surgical risk and relieving the surgeon's workload.

Moreover, the tracking algorithm used in the present invention can be trained offline, which allows the user to run the network in a purely feed-forward manner without online fine-tuning; the tracker can run at 100 fps, enabling the network to track the target object in real time.

Brief Description of the Drawings

FIG. 1 is a schematic diagram of the coordinate system of the execution unit.

FIG. 2 is a flowchart of the method of the present invention.

Detailed Description of the Embodiments

The present application is further described below with reference to the accompanying drawings.

The core idea of the present invention is as follows: a flexible surgical manipulator can exploit its flexibility to perform surgical operations in narrow and complicated surgical environments; therefore, the master-slave control system uses a target-tracking algorithm to track the surgical area in real time and feed back depth information, allowing the surgeon to make adjustments in real time. In actual operation, the user can steer the flexible surgical manipulator from the AR image obtained by target tracking, operate in the exact surgical area (the target-tracking region), and complete the operation smoothly.

Embodiment I: In this embodiment, the execution unit may be one of several flexible mechanisms, such as a flexible robot, a flexible manipulator, a surgical robot driven by memory alloy, or a pneumatic flexible surgical robot based on a force-feedback device; preferably, the flexible manipulator may be the minimally invasive manipulator structure disclosed in Chinese patent CN207055546, "A minimally invasive manipulator structure".

As shown in FIGS. 1-2, the specific steps of the deep-learning-based master-slave control method of the present invention are as follows:

Step 1: Establish the motion coordinate system of the flexible manipulator. The coordinate system of the flexible joint is shown in FIG. 1, where xb, yb, zb denote the base coordinate system of the flexible joint, which coincides with the world coordinate system, and xe, ye, ze and xg, yg, zg denote the coordinate systems of the end-effector and the gripper, respectively.

Step 2: Based on the coordinate system of Step 1, four steel wires are distributed around the outside of the machined spring so as to drive the execution unit jointly; pulling the four wires together drives the other end of the flexible joint (referred to as the distal end) to perform a bending motion. Determine r, L, θ, δ and d as the distance between the wire and the central axis, the length of the flexible joint, the rotation angle in the y-direction, the rotation angle in the z-direction, and the projection of the distal-end position onto the plane xbOyb. The position projection can therefore be calculated by formula (1-1):

[Formula (1-1): position-projection formula, shown only as an image in the source]

where dx and dy denote the position projections on the x- and y-axes respectively; dx and dy can be measured during the operation, and the posture of the distal end of the flexible joint is given by formula (1-2):

[Formula (1-2): distal-end posture formula, shown only as an image in the source]

The gripper is attached to the distal end of the flexible joint, and the coordinates of the gripper tip are given by formula (1-3):

[Formula (1-3): gripper-tip coordinate formula, shown only as an image in the source]

where s denotes the connection length between the rigid rod and the machined spring, and Lg denotes the length of the gripper. The flexible, bendable joint allows the gripper to bend in all directions; the length of the bendable segment depends on the connection distance between the machined spring and the rigid rod, and the bending curvature of the flexible joint increases gradually as this connection distance increases. When s = 0, the entire flexible joint is regarded as a bendable joint; conversely, when s = 16 mm, the entire flexible joint is regarded as a rigid body.
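
Since formulas (1-1) to (1-3) survive only as images in this copy of the text, the short Python sketch below illustrates the same geometry with the standard constant-curvature model for a single bending segment. The closed-form expressions, the example parameter values, and the helper name flexible_tip_position are illustrative assumptions, not the patent's own formulas.

```python
import numpy as np

def flexible_tip_position(L, theta, delta, s, Lg):
    """Tip position of a single constant-curvature bending segment.

    L     : length of the flexible (bendable) segment
    theta : bending angle (rotation in the y-direction)
    delta : rotation of the bending plane (rotation in the z-direction)
    s     : connection length between the rigid rod and the machined spring
    Lg    : length of the gripper attached at the distal end

    The expressions below are the standard constant-curvature model, not the
    patent's formulas (1-1)-(1-3), which appear only as images in the source.
    """
    if abs(theta) < 1e-9:                      # straight configuration
        p_joint = np.array([0.0, 0.0, L])
        tangent = np.array([0.0, 0.0, 1.0])    # distal tangent direction
    else:
        rho = L / theta                        # bending radius of the arc
        radial = rho * (1.0 - np.cos(theta))   # in-plane radial offset
        p_joint = np.array([radial * np.cos(delta),
                            radial * np.sin(delta),
                            rho * np.sin(theta)])
        tangent = np.array([np.sin(theta) * np.cos(delta),
                            np.sin(theta) * np.sin(delta),
                            np.cos(theta)])
    # rigid extension (connection length s plus gripper length Lg) along the tangent
    return p_joint + (s + Lg) * tangent

# example: projections dx, dy of the bare joint's distal end onto the xbOyb plane
dx, dy, _ = flexible_tip_position(L=40.0, theta=np.deg2rad(60), delta=0.0, s=0.0, Lg=0.0)
```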

Step 3: Verify the performance of the manipulator experimentally. Four harmonic servo motors (RSF-100-E050-C) are configured to provide rotational power. In the tensile experiments, the flexible joint is held in a bent state while a tensile force is loaded on the distal end of the gripper; the flexible joint consists of a machined spring, an elastic backbone, and a rigid base rod. Consequently, when the coupling distance between the machined spring and the rigid rod is zero, the flexible joint is in its weakest rigid state. In this experiment, the flexible joint was bent to 60° along the x-axis and the y-axis respectively, tensile forces of 3.5 N and 5 N were applied at the distal end of the surgical manipulator, and five trajectories were recorded for each tensile force. The test data obtained are given in Table (1-1):

[Table (1-1): tensile-test data, shown only as an image in the source]

Table (1-1): Tensile experiment of the surgical manipulator

According to the data in Table (1-1), when the tensile force is 3.5 N the surgical manipulator maintains its rigidity well; when the flexible joint is bent 60° along the y-axis under a 5 N load, one trajectory fails; and when the flexible joint is bent to a 60° coupled angle along both the x-axis and the y-axis under a 5 N load, both trials fail. Since the gripper geometry is flat on the upper/lower sides and curved on the left/right sides, the surgical manipulator maintains a stable load capacity when the load is attached on the upper/lower side (bending along the x-axis), whereas a load on the left/right side slides because of the shape of the contact surface. During the experiments, the flexible joint showed no obvious shape deformation on any of the paths.

Step 4: Establish the master-slave mapping relationship. The robot of the present invention is master-slave heterogeneous, so a mapping relationship must be established in the Cartesian space coordinate system to describe the forward and inverse kinematics of the master and slave hands. The key point of master-slave control in the Cartesian space coordinate system is solving the inverse kinematics; below, the inverse Jacobian matrix is used for the solution.

The mapping from joint-space velocities to Cartesian-space end-effector velocities is given by formula (1-4):

Ẋ = J(θ)·θ̇    Formula (1-4)

where θ̇ is the joint angular-velocity vector, J(θ) ∈ R^(6×6) is the Jacobian matrix, and Ẋ is the end-effector velocity vector. Replacing the instantaneous end-effector velocity and joint velocity with the displacements of the robot end-effector and joint angles over a short time interval transforms formula (1-4) into formula (1-5):

ΔX = J(θ)·Δθ    Formula (1-5)

Similarly, the inverse-kinematics solution, formula (1-6), follows from formula (1-5):

Δθ = J(θ)⁻¹·ΔX    Formula (1-6)

The velocity of the master-hand end can be obtained from formula (1-5); after passing through the master-slave mapping relationship, the velocity of the slave-hand end is obtained, and the joint velocities are then found from formula (1-6).
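
A minimal numerical sketch of the mapping in formulas (1-5) and (1-6) is given below, assuming the Jacobian J is available as a 6×6 NumPy array. The motion-scaling factor and the damping term are added only for numerical robustness near singular configurations; the patent itself states the plain inverse-Jacobian relation.

```python
import numpy as np

def master_to_slave_joint_increment(J, delta_X_master, scale=1.0, damping=1e-3):
    """Map a short-time master-hand Cartesian displacement to slave joint increments.

    Implements the idea of formulas (1-5)/(1-6): delta_theta = J(theta)^-1 * delta_X.
    `scale` (master-to-slave motion scaling) and `damping` (damped least-squares
    term to stay well-behaved near singularities) are assumed extras, not part of
    the patent's formulation.
    """
    delta_X = scale * np.asarray(delta_X_master, dtype=float)   # 6-vector
    JT = J.T
    # damped least-squares solution of J * delta_theta = delta_X
    return JT @ np.linalg.solve(J @ JT + damping * np.eye(J.shape[0]), delta_X)

# usage for one control cycle:
#   theta_next = theta_now + master_to_slave_joint_increment(J_of(theta_now), dX_master)
```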

Step 5: Track the target in real time with the GOTURN algorithm under the master-slave relationship. Based on the master-slave mapping obtained in Step 4, assume that in the previous frame the tracker predicted the target to lie in a box centered at c = (cx, cy) with width w and height h. At the current moment, a new region is cropped around the center c obtained from the previous frame, with width k1·w and height k1·h. This cropping lets the network identify which object in the image is being tracked, and comparing the width and height of consecutive frames shows how the motion state of the target object changes. Then, in the current frame, a region is cropped around c' = (c'x, c'y), where c' is the current location of the target object. The crop width and height for the current frame are k2·w and k2·h respectively, where w and h are the width and height of the target box in the previous frame and k2 defines the search range for the target object. For fast-moving objects the search region can be enlarged at the cost of higher network complexity, while occlusion can be handled through training.
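
The cropping scheme of this step can be summarized in a few lines. The sketch below assumes k1 = k2 = 2.0 (the patent names the factors but gives no values here) and represents boxes as (cx, cy, w, h) tuples.

```python
def goturn_crop_boxes(prev_box, k1=2.0, k2=2.0):
    """Return the two crop regions used in Step 5, as (cx, cy, w, h) tuples.

    prev_box : (cx, cy, w, h) predicted for the previous frame
    k1, k2   : padding factors for the template crop (previous frame) and the
               search region (current frame); 2.0 is an assumed value, the
               patent only names the factors k1 and k2.
    """
    cx, cy, w, h = prev_box
    template_box = (cx, cy, k1 * w, k1 * h)   # "what to track", cropped from frame t-1
    search_box = (cx, cy, k2 * w, k2 * h)     # "where to look", cropped from frame t,
                                              # centred on the previous centre
    return template_box, search_box
```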

Step 6: Smooth the tracking result. The center of the bounding box in the current frame, (c'x, c'y), is first modeled relative to the center of the bounding box in the previous frame, (cx, cy), giving formulas (1-7) and (1-8):

c'x = cx + w·Δx    Formula (1-7)

c'y = cy + h·Δy    Formula (1-8)

where w and h are the width and height of the bounding box in the previous frame, and Δx and Δy are random variables that capture the change of the bounding-box position relative to its size. In the training set, the change of the target's position can be modeled with a Laplace distribution whose mean is 0 for both Δx and Δy; this distribution assigns higher probability to small motions than to large ones.

Likewise, the size change is modeled by formulas (1-9) and (1-10):

w' = w·γw    Formula (1-9)

h' = h·γh    Formula (1-10)

where w' and h' are the current width and height of the bounding box, w and h are its previous width and height, and γw and γh are random variables that record the change in bounding-box size. In the training set, modeling γw and γh with a Laplace distribution of mean 1 assigns higher probability to cases in which the bounding boxes of consecutive frames are similar in size. To teach the network to prefer small motions over large ones, the training set is augmented with random targets drawn from the Laplace distributions above. Because these training samples are drawn from a Laplace distribution, small motions are sampled more often than large ones, so the network learns, other things being equal, to favor small motions over large ones. Experiments show that this Laplacian cropping procedure improves tracker performance compared with the standard uniform cropping procedure used in classification tasks. Note also that the scale parameters of the Laplace distributions, obtained by cross-validation, are given for the motion of the bounding-box center and for the change of the bounding-box size [the numeric values appear only as images in the source]. The random crop is constrained to contain at least half of the target object in each dimension, and the size change is limited, e.g. γw, γh ∈ (0.6, 1.4), to keep the network from stretching or shrinking the bounding box excessively.
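
A sketch of the Laplacian augmentation is given below. The scale parameters b_c and b_s are left as inputs because their numeric values appear only as images in the source; the clipping range (0.6, 1.4) is the one stated in the text, and the at-least-half-overlap constraint is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_box(prev_box, b_c, b_s, gamma_min=0.6, gamma_max=1.4):
    """Draw one Laplacian-perturbed box from a previous-frame box (cx, cy, w, h).

    b_c, b_s : Laplace scale parameters for centre motion and size change; their
               numeric values are only images in the source, so they are left
               as inputs here.
    """
    cx, cy, w, h = prev_box
    dx = rng.laplace(0.0, b_c)                 # formula (1-7): zero-mean centre shift
    dy = rng.laplace(0.0, b_c)                 # formula (1-8)
    gw = float(np.clip(rng.laplace(1.0, b_s), gamma_min, gamma_max))   # formula (1-9)
    gh = float(np.clip(rng.laplace(1.0, b_s), gamma_min, gamma_max))   # formula (1-10)
    return (cx + w * dx, cy + h * dy, w * gw, h * gh)
```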

Step 7: Perform deep learning through the convolutional neural network. The convolutional layers of the network are pre-trained on ImageNet; the network is trained with a learning rate of 1e-5, and the other hyperparameters take the CaffeNet defaults. To train the network, each training example is taken alternately from a training set composed of videos and of still images. When a video training example is used, a video is chosen at random and a pair of consecutive frames is chosen at random within it; the video is then cropped as described in Step 5. For better results, the current frame is also randomly cropped to augment the data set with additional examples. After training on the video example, an image is sampled at random and the process described above is repeated. Each time a video or image is sampled, new random crops are generated on the fly to create more diversity during training.
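
The alternation between video and image examples can be sketched as a Python generator. The data layout and the make_crops callable are assumptions standing in for the cropping and augmentation of Steps 5 and 6.

```python
import random

def training_examples(videos, images, make_crops, crops_per_example=10):
    """Yield training crops, alternating between video frame pairs and still images.

    videos     : list of frame sequences, each a list of (image, box) pairs
    images     : list of (image, box) pairs
    make_crops : callable standing in for the GOTURN cropping plus the
                 Laplacian augmentation of Steps 5-6 (not defined here)
    """
    use_video = True
    while True:
        if use_video:
            seq = random.choice(videos)
            t = random.randrange(len(seq) - 1)
            prev, curr = seq[t], seq[t + 1]        # random pair of consecutive frames
        else:
            prev = curr = random.choice(images)    # a still image plays both roles
        for _ in range(crops_per_example):
            yield make_crops(prev, curr)           # fresh random crops on every draw
        use_video = not use_video                  # alternate video / image examples
```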

Step 8: Use a PD element to eliminate the continuously accumulating master-slave following error. In this element, adjusting the proportional coefficient kp and the derivative coefficient kd drives the system quickly to steady state, so that the flexible manipulator at the slave-hand end can follow changes in the pose coordinates of the master-hand end rapidly and accurately. The control law is given by formula (1-11):

[Formula (1-11): PD control law, shown only as an image in the source]

where Ẋm and Ẋs denote the pose velocities of the master- and slave-hand end-effectors, and Xm and Xs denote the poses of the master- and slave-hand end-effectors. Neither kp nor kd should be too large: an excessive kp degrades the dynamic performance of the system, while an excessive kd reduces the system's ability to reject disturbances.
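
Formula (1-11) itself is only an image in this copy, so the sketch below uses a common PD following law that matches the surrounding description (proportional action on the pose error, derivative action on the pose-rate error); the gain values are arbitrary placeholders.

```python
import numpy as np

def pd_follow_rate(X_m, X_s, Xdot_m, Xdot_s, kp=2.0, kd=0.1):
    """Commanded slave end-effector pose rate from the master-slave pose error.

    A common PD following law consistent with the surrounding text; the gains
    kp, kd are arbitrary placeholders, and formula (1-11) itself appears only
    as an image in the source.
    """
    X_m, X_s = np.asarray(X_m, float), np.asarray(X_s, float)
    Xdot_m, Xdot_s = np.asarray(Xdot_m, float), np.asarray(Xdot_s, float)
    return kp * (X_m - X_s) + kd * (Xdot_m - Xdot_s)
```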

Step 9: Eliminate the influence of tremor of the surgeon's hand (the tremor is reflected to the slave hand through the master-slave mapping). The present invention applies a sliding-mean filter twice, to the master hand and the slave hand respectively (the sliding-mean algorithm suppresses periodic disturbances well), to filter out the tremor. Continuous sampling is performed over a fixed interval, after which each filter calculation requires only one new sample; the calculation is given by formula (1-12):

[Formula (1-12): sliding-mean filter, shown only as an image in the source]

In the i-th sampling period, each sampled discrete point is passed through the above formula to obtain the mean-filtered result before the next sample is taken, where Ni is the filter output and n is the order of the mean digital filter (i ≥ n).
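
Formula (1-12) is likewise only an image here; the sketch below assumes the usual form of an n-th order sliding (moving) mean filter over the most recent samples, applied once on the master side and once on the slave side.

```python
from collections import deque

class SlidingMeanFilter:
    """n-th order sliding (moving) mean filter, applied sample by sample.

    Formula (1-12) appears only as an image in the source; averaging the n most
    recent samples is the usual form of such a filter and is what is assumed here.
    """
    def __init__(self, n=5):
        self.window = deque(maxlen=n)    # keeps the n most recent samples

    def update(self, sample):
        self.window.append(float(sample))
        # N_i: mean of the samples currently in the window (valid once i >= n)
        return sum(self.window) / len(self.window)

# applied twice, as described: once to the master-hand command before the
# master-slave mapping, and once again on the slave side
master_filter, slave_filter = SlidingMeanFilter(5), SlidingMeanFilter(5)
```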

The above are only preferred embodiments of the invention and are not intended to limit the present invention; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. The master-slave control method based on deep learning is characterized by comprising the following steps:
establishing a motion coordinate system of an execution unit;
establishing a master-slave mapping relation, describing the kinematic forward and inverse operation of a master hand and a slave hand corresponding to the execution unit through the master-slave mapping relation, and constructing a master-slave heterogeneous model;
tracking the target in real time under the master-slave relation to obtain tracking result data;
and carrying out deep learning through a convolutional neural network to complete the control process of the execution unit.
2. The deep learning-based master-slave control method according to claim 1, wherein the establishing of the master-slave mapping relationship, the describing of the kinematic forward-inverse operation of the master-slave hand corresponding to the execution unit through the master-slave mapping relationship, and the establishing of the master-slave heterogeneous model comprises:
establishing a mapping relation from a Cartesian space coordinate system to describe the kinematics positive and inverse operation of the master hand and the slave hand, and solving by using an inverse Jacobian matrix;
velocity of the end of the master hand: ΔX = J(θ)·Δθ;
velocity of the end of the slave hand: Δθ = J(θ)⁻¹·ΔX;
wherein θ̇ is the joint angular-velocity vector, J(θ) ∈ R^(6×6) is the Jacobian matrix, and Ẋ is the end-effector velocity vector.
3. The master-slave control method based on deep learning of claim 1, wherein the "real-time tracking of the target under master-slave relationship to obtain tracking result data" comprises:
utilizing a GOTURN algorithm to perform real-time tracking of the target.
4. The master-slave control method based on deep learning of claim 1, wherein the deep learning through convolutional neural network comprises:
pre-training the convolutional layers on ImageNet, training the network with a learning rate of 1e-5, and taking the other hyperparameters from the CaffeNet defaults;
each training example is alternately taken from the training set and video cropping is performed using the GOTURN algorithm.
5. The deep learning based master-slave control method according to claim 4, comprising:
the current frame is also randomly cropped to augment the data set with additional examples.
6. The deep-learning-based master-slave control method according to claim 1, wherein, between the "establishing a motion coordinate system of the execution unit" and the "establishing a master-slave mapping relationship, describing the forward and inverse kinematics of the master and slave hands corresponding to the execution unit through the master-slave mapping relationship, and constructing a master-slave heterogeneous model", the method comprises a flexibility testing step:
simultaneously driving one end of the execution unit to perform a bending motion for the flexibility test, wherein the connecting end is defined as the distal end;
determining r, L, θ, δ and d as, respectively, the distance of the wire from the central axis, the length of the flexible joint, the rotation angle in the y-direction, the rotation angle in the z-direction, and the projection of the distal-end position in the plane xbOyb;
the position projection can be calculated by:
[position-projection formula, shown only as an image in the source]
wherein dx and dy represent the position projections on the x- and y-axes respectively; dx and dy can be measured during the operation, and the posture formula of the distal end of the flexible joint is as follows:
[distal-end posture formula, shown only as an image in the source]
attaching a jaw to the distal end, the jaw tip having coordinates of:
[gripper-tip coordinate formula, shown only as an image in the source]
wherein s represents the connection length between the rigid rod and the machined spring, and Lg represents the length of the clamping jaw; when s = 0, the execution unit is considered a bendable joint.
7. The master-slave control method based on deep learning according to claim 6, wherein, after the "flexibility testing step", the method comprises a performance testing step:
the driving device is configured to provide rotary power and is connected with the execution unit;
and ensuring that the execution unit is in a bent state while loading a tensile force at the distal end of the gripper, so as to perform a performance test on the execution unit.
8. The deep-learning-based master-slave control method according to claim 1, wherein the step of performing real-time tracking of the target in a master-slave relationship to obtain tracking-result data comprises a step of smoothing the tracking result:
first, the center of the bounding box in the current frame, (c'x, c'y), is modeled relative to the center of the bounding box in the previous frame, (cx, cy):
c'x=cx+w·Δx
c'y=cy+h·Δy
where w and h are the width and height, respectively, of the bounding box of the previous frame; Δx and Δy are random variables;
likewise, the size change is modeled by:
w′=w·γw
h'=h·γh
where w' and h' are the current width and height of the bounding box, w and h are the previous width and height of the bounding box, and γw and γh are random variables;
in the training set, γw and γh with a mean value of 1 are used for Laplace-distribution modeling, and the training set is augmented with random targets drawn from the Laplace distribution;
the scale parameters of the Laplace distribution are given for the motion of the bounding-box center and for the change of the bounding-box size, respectively [values shown only as images in the source].
9. The master-slave control method based on deep learning according to claim 1, wherein, after the deep learning through the convolutional neural network, the method comprises: eliminating the continuously accumulating master-slave following error by means of a PD element, and eliminating tremor of the doctor's hand;
in the process of eliminating the continuously accumulating master-slave following error by means of the PD element:
the control law is as follows:
[PD control law, shown only as an image in the source]
wherein Ẋm and Ẋs respectively represent the pose velocities of the master- and slave-hand end-effectors, Xm and Xs respectively represent the poses of the master- and slave-hand end-effectors, and kp and kd respectively represent a proportional parameter and a derivative parameter;
in the process of eliminating tremor of the doctor's hand, sliding-mean filtering is applied twice, to the master hand and the slave hand respectively, specifically:
[sliding-mean filter formula, shown only as an image in the source]
where Ni is the filter output, n is the order of the mean digital filter (i ≥ n), and i is the i-th sampling period.
10. Use of the master-slave control method based on deep learning according to claim 1 in the field of medical flexible robots.
CN202110433533.9A 2021-04-19 2021-04-19 Master-slave control method based on deep learning and application Active CN113742992B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110433533.9A CN113742992B (en) 2021-04-19 2021-04-19 Master-slave control method based on deep learning and application

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110433533.9A CN113742992B (en) 2021-04-19 2021-04-19 Master-slave control method based on deep learning and application

Publications (2)

Publication Number Publication Date
CN113742992A true CN113742992A (en) 2021-12-03
CN113742992B CN113742992B (en) 2024-03-01

Family

ID=78728284

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110433533.9A Active CN113742992B (en) 2021-04-19 2021-04-19 Master-slave control method based on deep learning and application

Country Status (1)

Country Link
CN (1) CN113742992B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113907693A (en) * 2021-12-10 2022-01-11 极限人工智能有限公司 Operation mapping ratio adjusting method and device, electronic equipment and storage medium
CN114569252A (en) * 2022-03-02 2022-06-03 中南大学 Master-slave mapping proportion control system and method for surgical robot

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104440864A (en) * 2014-12-04 2015-03-25 深圳先进技术研究院 Master-slaver teleoperation industrial robot system and control method thereof
CN109968310A (en) * 2019-04-12 2019-07-05 重庆渝博创智能装备研究院有限公司 A kind of mechanical arm interaction control method and system
US20210034353A1 (en) * 2019-07-29 2021-02-04 Seiko Epson Corporation Program transfer system and robot system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104440864A (en) * 2014-12-04 2015-03-25 深圳先进技术研究院 Master-slaver teleoperation industrial robot system and control method thereof
CN109968310A (en) * 2019-04-12 2019-07-05 重庆渝博创智能装备研究院有限公司 A kind of mechanical arm interaction control method and system
US20210034353A1 (en) * 2019-07-29 2021-02-04 Seiko Epson Corporation Program transfer system and robot system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIA Bailong et al., "Research on adaptive hybrid control of a flexible master-slave arm system", Journal of Vibration and Shock, vol. 30, no. 12, pages 51-55 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113907693A (en) * 2021-12-10 2022-01-11 极限人工智能有限公司 Operation mapping ratio adjusting method and device, electronic equipment and storage medium
CN114569252A (en) * 2022-03-02 2022-06-03 中南大学 Master-slave mapping proportion control system and method for surgical robot
CN114569252B (en) * 2022-03-02 2024-01-30 中南大学 Master-slave mapping proportion control system and method for surgical robot

Also Published As

Publication number Publication date
CN113742992B (en) 2024-03-01

Similar Documents

Publication Publication Date Title
US11801100B2 (en) Estimation of a position and orientation of a frame used in controlling movement of a tool
JP6087368B2 (en) Application of force feedback at the input device that prompts the operator of the input device to command the joint device to take a suitable posture
TWI695765B (en) Robotic arm
CN113977602B (en) A method for controlling the admittance of force feedback end gripper
CN111315309A (en) System and method for controlling a robotic manipulator or related tool
CN113876434A (en) Master-slave motion control method, robot system, equipment and storage medium
Lee et al. Modeling and control of robotic surgical platform for single-port access surgery
JP2013049102A (en) Robot control device and method of determining robot attitude
CN114343847A (en) Hand-eye calibration method of surgical robot based on optical positioning system
CN113742992B (en) Master-slave control method based on deep learning and application
CN115500957A (en) Method for adjusting telecentric fixed point of surgical robot
CN114391793A (en) A method, system and medium for autonomous control of endoscope visual field
Ren et al. A master-slave control system with workspaces isomerism for teleoperation of a snake robot
CN115781690A (en) Control method and device for multi-joint mechanical arm, electronic equipment and storage medium
EP4289379A1 (en) Method and device for planning initial poses of surgical arms of laparoscopic surgical robot
CN113876433A (en) Robot system and control method
CN114376726A (en) Path planning method and related device for transcranial magnetic stimulation navigation process
Bihlmaier et al. Endoscope robots and automated camera guidance
Capolei et al. Positioning the laparoscopic camera with industrial robot arm
Alassi et al. Development and kinematic analysis of a redundant, modular and backdrivable laparoscopic surgery robot
CN114366313B (en) A mirror-holding robot control method based on the pose of laparoscopic surgical instruments
Karadimos et al. Virtual reality simulation of a robotic laparoscopic surgical system
Wang et al. Design, control and analysis of a dual-arm continuum flexible robot system
CN115429432A (en) Readable storage medium, surgical robot system and adjustment system
Kuo et al. A Design for Remote Center of Motion Control Using a Software Computing Approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240626

Address after: 510000 No. 106 Fengze East Road, Nansha District, Guangzhou City, Guangdong Province

Patentee after: Guangzhou dinghang Information Technology Service Co.,Ltd.

Country or region after: China

Address before: 510062 Dongfeng East Road, Yuexiu District, Guangzhou, Guangdong 729

Patentee before: GUANGDONG University OF TECHNOLOGY

Country or region before: China

TR01 Transfer of patent right