
CN107097231A - A precise motion control method for a concentric tube robot based on visual servoing - Google Patents

A precise motion control method for a concentric tube robot based on visual servoing

Info

Publication number
CN107097231A
Authority
CN
China
Prior art keywords
image
matrix
jacobian matrix
motion control
concentric tube
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710546729.2A
Other languages
Chinese (zh)
Inventor
宋霜
卢意
张长春
孟庆虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology Shenzhen
Original Assignee
Harbin Institute of Technology Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Shenzhen filed Critical Harbin Institute of Technology Shenzhen
Priority to CN201710546729.2A
Publication of CN107097231A
Legal status: Pending


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40005Vision, analyse image at one station during manipulation at next station

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a precise motion control method for a concentric tube robot based on visual servoing. The method builds an image-based, eye-in-hand visual servoing system that requires no camera calibration: image features are extracted and the control quantities are obtained through the mapping of the image Jacobian matrix. A Kalman filter estimates the image Jacobian matrix online, taking the change in image features as the observation vector and the image Jacobian matrix as the state vector. The estimated image Jacobian matrix is then used as the observation matrix of a Kalman filter in which the joint angle error is the state vector and the image error is the observation vector, so that the filter estimate yields the joint angle control quantity; when the image error approaches zero, the target point has been located. Because the method relies on comparatively reliable image information as motion control feedback, it can precisely position a slightly trembling target point during surgery, so that the surgical operation can be completed.

Description

A precise motion control method for a concentric tube robot based on visual servoing

Technical Field

The invention relates to a precise motion control method for a concentric tube medical robot, and in particular to a precise motion control method for a concentric tube robot based on visual servoing.

Background Art

At present, the mainstream approach to motion control of concentric tube robots is teleoperation, which relies on the establishment of a kinematic model and therefore places high demands on the model's accuracy and on accounting for friction and torsion. On that basis it is difficult for the operator to position the surgical target point precisely; it can only be reached approximately. In recent years, X-ray, ultrasound, MRI, stereoscopic imaging, and related technologies have been applied to feed back the shape of the moving concentric tubes. An image-based visual servoing method, however, requires little hardware modification and provides accurate image information as feedback for the motion loop. Moreover, an image-based method depends only weakly on the kinematic model, requires no calibration between the robot body and the camera, and can locate the target point accurately so that it always remains at the center of the image.

Summary of the Invention

The object of the present invention is to overcome the shortcomings of the prior art by proposing a precise motion control method for a concentric tube robot based on visual servoing.

To achieve the above object, the present invention adopts the following technical solution:

A precise motion control method for a concentric tube robot based on visual servoing: the method builds an image-based, eye-in-hand visual servoing system that requires no camera calibration; image features are extracted, and the control quantities are obtained through the mapping of the image Jacobian matrix. A Kalman filter estimates the image Jacobian matrix online, with the change in image features as the observation vector and the image Jacobian matrix as the state vector. The estimated image Jacobian matrix is then used as the observation matrix of a Kalman filter in which the joint angle error is the state vector and the image error is the observation vector, so that the filter estimate yields the joint angle control quantity; when the image error approaches zero, the target point has been located.

The purpose of the present invention is to overcome the defect that teleoperation algorithms cannot position the robot precisely at the target point and, by relying on comparatively reliable image information as feedback for motion control, to ultimately position a slightly trembling target point precisely during surgery so that the surgical operation can be completed. Addressing the inability of the prior art to guarantee precise positioning of the surgical site, the invention introduces visual feedback based on the image information collected by the camera at the distal end of the robot; although the lesion point moves slightly with the patient's breathing during surgery, the method can still track this moving point and thereby position it accurately.

Brief Description of the Drawings

Fig. 1 is a schematic structural view of a concentric tube medical robot for minimally invasive surgery proposed by the present invention.

Fig. 2 is a flowchart of the precise motion control method for a concentric tube robot based on visual servoing proposed by the present invention.

Fig. 3 shows the experimental platform of the visual-servoing-based concentric tube robot proposed by the present invention.

Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention.

Referring to Fig. 1, a concentric tube medical robot for minimally invasive surgery comprises four servo motors 1, a motor end cover 2, couplings 3, a rear end cover 4, a moving carrier 6, miniature bearings 7, a concentric tube mechanism, square columns 11, ball screws 16 and a transmission gear set 17. The four servo motors 1 are mounted on the motor end cover 2. The output shafts of two oppositely arranged servo motors 1 are connected to the rear end cover 4 through couplings 3; two ball screws 16 are mounted on the rear end cover 4 and connected through the couplings 3 to the output shafts of these two servo motors 1. The moving carrier 6 is mounted in the middle of the two ball screws 16, which are fixed to the moving carrier 6 through flanges 15, and a limit switch 5 is arranged on the moving carrier 6. Two square columns 11 are arranged outside the two ball screws 16, and the remaining two opposed servo motors 1 drive the square columns 11 through the transmission gear set 17 on the motor end cover 2. The concentric tube mechanism comprises an outer tube 8 and an inner tube 9 inserted into the outer tube 8, and is fixed to the base through a fixing tube; the inner tube 9 is fixed to the moving carrier 6 through a gear 14, and each square column 11 carries a gear transmission module 12 that meshes with the gear 14. Two of the servo motors drive the ball screws 16 to rotate through the couplings 3, so that the moving carrier 6 on the ball screws 16 moves axially and the concentric tube mechanism fixed on the moving carrier 6 extends and retracts axially with it. The remaining two servo motors drive the square columns 11 to rotate through the gears, and the gear transmission modules 12 and 14 on the moving carrier 6 in turn drive the concentric tube mechanism fixed on the moving carrier 6 to rotate.

In the present invention, a micro endoscopic camera 10 is mounted at the distal end of the inner tube 9, and its identification and positioning of the target point realizes the whole method. The experimental platform is shown in Fig. 3.

Referring to Fig. 2, the present invention proposes a precise motion control method for a concentric tube robot based on visual servoing. The method builds an image-based, eye-in-hand visual servoing system that requires no camera calibration; image features are extracted, and the control quantities are obtained through the mapping of the image Jacobian matrix. A Kalman filter estimates the image Jacobian matrix online, with the change in image features as the observation vector and the image Jacobian matrix as the state vector. The estimated image Jacobian matrix is then used as the observation matrix of a Kalman filter in which the joint angle error is the state vector and the image error is the observation vector, so that the filter estimate yields the joint angle control quantity; when the image error approaches zero, the target point has been located.

In this method, the camera mounted at the distal end of the inner tube of the robot collects images in real time and image features are extracted; the target center coordinates are obtained with formula (1), written here as the centroid of the extracted feature points.

$s = (u_c, v_c)^{T}, \qquad u_c = \frac{1}{N}\sum_{i=1}^{N} u_i, \qquad v_c = \frac{1}{N}\sum_{i=1}^{N} v_i \qquad (1)$

where $(u_i, v_i)$, $i = 1, \ldots, N$, are the pixel coordinates of the extracted feature points of the target.
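A minimal sketch of this feature-extraction step is given below. Segmenting the target by HSV thresholding and computing the centroid from image moments are illustrative assumptions: the patent only states that image features are extracted and that the target center is computed as in formula (1).

```python
import cv2
import numpy as np

def target_center(frame_bgr, lower_hsv, upper_hsv):
    """Return the target centroid (u_c, v_c) in pixel coordinates, or None.

    HSV thresholding is a stand-in for whatever segmentation the camera
    pipeline actually uses; the thresholds are user-chosen placeholders.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:                   # target not visible in this frame
        return None
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])
```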

By commanding each servo motor individually with a small, known control increment, the position of the target point in the image plane changes correspondingly, and from these changes the initial value $\hat{J}_0$ of the image Jacobian matrix is obtained. Taking the change of the image features $\Delta s$ as the observation vector and the vectorized image Jacobian matrix $x_k = \mathrm{vec}(J_k)$ as the state vector to be estimated, the state equation is

$x_{k+1} = x_k + w_k, \qquad \Delta s_k = H_k x_k + v_k \qquad (2)$

where $H_k$ is constructed from the joint increment $\Delta q_k$ so that $H_k x_k = J_k \Delta q_k$, and $w_k$ and $v_k$ denote the process and observation noise.
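The initialization step described above can be sketched as follows. Here move_joint and read_center are hypothetical helpers standing in for the servo motor interface and the camera pipeline, and the step size delta is an arbitrary choice not taken from the patent.

```python
import numpy as np

def initial_jacobian(move_joint, read_center, n_joints, delta=0.02):
    """Finite-difference estimate J0 of the 2 x n image Jacobian.

    move_joint(i, dq) applies a small increment dq to joint i, and
    read_center() returns the current target center (u_c, v_c); both
    are hypothetical interfaces for the robot and the camera pipeline.
    """
    J0 = np.zeros((2, n_joints))
    for i in range(n_joints):
        s_before = np.asarray(read_center(), dtype=float)
        move_joint(i, delta)            # perturb one servo motor at a time
        s_after = np.asarray(read_center(), dtype=float)
        move_joint(i, -delta)           # return to the starting pose
        J0[:, i] = (s_after - s_before) / delta
    return J0
```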

The iterative process consists of a state prediction step and a measurement update step (the standard Kalman recursion):

$\hat{x}_{k|k-1} = \hat{x}_{k-1}, \qquad P_{k|k-1} = P_{k-1} + Q_k$

$K_k = P_{k|k-1} H_k^{T} \left( H_k P_{k|k-1} H_k^{T} + R_k \right)^{-1}$

$\hat{x}_k = \hat{x}_{k|k-1} + K_k \left( \Delta s_k - H_k \hat{x}_{k|k-1} \right), \qquad P_k = \left( I - K_k H_k \right) P_{k|k-1}$

Finally, the estimated image Jacobian matrix is used as the observation matrix of the Kalman filter, with the joint angle error as the state vector and the image error as the observation vector. Running the same filter recursion again then yields the estimate of the joint angle control quantity.
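The two coupled filters can be summarized in the sketch below, assuming a row-major vectorization of the 2 x n image Jacobian and constant, user-chosen noise covariances q and r; none of these numerical choices come from the patent.

```python
import numpy as np

class JacobianKF:
    """Kalman filter that estimates x = vec(J) for a 2 x n image Jacobian."""

    def __init__(self, J0, q=1e-4, r=1e-2):
        self.n = J0.shape[1]
        self.x = J0.flatten()                      # state: vectorized Jacobian (row-major)
        self.P = np.eye(2 * self.n)
        self.Q = q * np.eye(2 * self.n)
        self.R = r * np.eye(2)

    def update(self, dq, ds):
        """One prediction/update step from joint increment dq and feature change ds."""
        H = np.kron(np.eye(2), dq.reshape(1, -1))  # H @ x equals J @ dq
        P = self.P + self.Q                        # prediction (random-walk state model)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + self.R)
        self.x = self.x + K @ (ds - H @ self.x)    # measurement update
        self.P = (np.eye(2 * self.n) - K @ H) @ P
        return self.x.reshape(2, self.n)           # current Jacobian estimate


class JointErrorKF:
    """Second filter: state = joint-angle error, observation = image error,
    observation matrix = the estimated image Jacobian."""

    def __init__(self, n_joints, q=1e-4, r=1e-2):
        self.x = np.zeros(n_joints)
        self.P = np.eye(n_joints)
        self.Q = q * np.eye(n_joints)
        self.R = r * np.eye(2)

    def update(self, J, image_error):
        """Estimate the joint-angle correction that drives the image error to zero."""
        P = self.P + self.Q
        K = P @ J.T @ np.linalg.inv(J @ P @ J.T + self.R)
        self.x = self.x + K @ (image_error - J @ self.x)
        self.P = (np.eye(len(self.x)) - K @ J) @ P
        return self.x                              # joint increment to command
```

In one servo cycle, the last joint increment and the observed feature change update the Jacobian estimate, the difference between the desired image position (typically the image center) and the current target center is fed to the second filter to obtain the next joint increment, and the loop repeats until the image error approaches zero, as described above.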

The above are only preferred specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any equivalent replacement or modification made by a person skilled in the art within the technical scope disclosed by the present invention, according to the technical solution and the inventive concept of the present invention, shall fall within the protection scope of the present invention.

Claims (1)

1. A precise motion control method for a concentric tube robot based on visual servoing, characterized in that the method builds an image-based, eye-in-hand visual servoing system that requires no camera calibration; image features are extracted, and the control quantities are obtained through the mapping of the image Jacobian matrix; a Kalman filter estimates the image Jacobian matrix online, with the change in image features as the observation vector and the image Jacobian matrix as the state vector; the estimated image Jacobian matrix is then used as the observation matrix of a Kalman filter in which the joint angle error is the state vector and the image error is the observation vector, so that the filter estimate yields the joint angle control quantity; when the image error approaches zero, the target point has been located.
CN201710546729.2A 2017-07-06 2017-07-06 A precise motion control method for a concentric tube robot based on visual servoing Pending CN107097231A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710546729.2A CN107097231A (en) 2017-07-06 2017-07-06 A precise motion control method for a concentric tube robot based on visual servoing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710546729.2A CN107097231A (en) 2017-07-06 2017-07-06 A precise motion control method for a concentric tube robot based on visual servoing

Publications (1)

Publication Number Publication Date
CN107097231A true CN107097231A (en) 2017-08-29

Family

ID=59663767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710546729.2A Pending CN107097231A (en) 2017-07-06 2017-07-06 A precise motion control method for a concentric tube robot based on visual servoing

Country Status (1)

Country Link
CN (1) CN107097231A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100097881A (en) * 2009-02-27 2010-09-06 숭실대학교산학협력단 The scheme of balance control using the kalman filter prediction of the zmp state for the stable walking of a biped robot
CN106041926A (en) * 2016-06-12 2016-10-26 哈尔滨工程大学 Industrial mechanical arm force/position mixed control method based on Kalman filter
CN106155113A (en) * 2016-08-29 2016-11-23 中国科学院自动化研究所 Bionic eye moves eye control system and bionic eye moves eye control method
CN106651949A (en) * 2016-10-17 2017-05-10 中国人民解放军63920部队 Teleoperation method and system for grabbing objects using space mechanical arm based on simulation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王洪斌: "在线估计雅可比矩阵的视觉伺服控制算法", 《系统仿真学报》 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107703756A (en) * 2017-11-03 2018-02-16 广州视源电子科技股份有限公司 Kinetic model parameter identification method and device, computer equipment and storage medium
CN107703756B (en) * 2017-11-03 2021-03-02 广州视源电子科技股份有限公司 Kinetic model parameter identification method and device, computer equipment and storage medium
CN108972550A (en) * 2018-07-10 2018-12-11 哈尔滨工业大学(深圳) A kind of concentric tube robot inverse kinematics method
CN109015633A (en) * 2018-07-12 2018-12-18 深圳市寒武纪智能科技有限公司 A kind of visual servo tracking, apparatus and system
CN111319037A (en) * 2018-12-17 2020-06-23 中国科学院长春光学精密机械与物理研究所 Redundant robot uncalibrated visual servo control method and system
CN111546344A (en) * 2020-05-18 2020-08-18 北京邮电大学 Mechanical arm control method for alignment
CN115246125A (en) * 2022-01-13 2022-10-28 聊城大学 Robot Vision Servo Control Method and System Based on Hybrid Feedback
CN114711968A (en) * 2022-03-31 2022-07-08 广东工业大学 Non-calibration target area positioning and tracking method based on surgical robot system
CN114711968B (en) * 2022-03-31 2024-07-16 广东工业大学 Calibration-free target area positioning and tracking method based on surgical robot system

Similar Documents

Publication Publication Date Title
CN107097231A (en) A precise motion control method for a concentric tube robot based on visual servoing
CN111801198B (en) Hand-eye calibration method, system and computer storage medium
US8971597B2 (en) Efficient vision and kinematic data fusion for robotic surgical instruments and other applications
CN108972559B (en) Hand-eye calibration method based on infrared stereoscopic vision positioning system and mechanical arm
Qian et al. ARssist: augmented reality on a head‐mounted display for the first assistant in robotic surgery
US8064669B2 (en) Fast 3D-2D image registration system with application to continuously guided endoscopy
CN110919658B (en) Robot calibration method based on vision and multi-coordinate system closed-loop conversion
US20150320514A1 (en) Surgical robots and control methods thereof
Richter et al. Augmented reality predictive displays to help mitigate the effects of delayed telesurgery
CN104354167A (en) Robot hand-eye calibration method and device
CN100579479C (en) Puncture hole positioning device for endoscope-operated surgical robot
CN106308946A (en) Augmented reality device applied to stereotactic surgical robot and method of augmented reality device
CN105196292B (en) Visual servo control method based on iterative duration variation
Zhang et al. An optical tracker based robot registration and servoing method for ultrasound guided percutaneous renal access
CN106335061A (en) Hand-eye relation calibration method based on four-freedom-degree robot
US20210398724A1 (en) Parallel mobile coil mechanism for magnetic manipulation in large workspace
Peng et al. Endoscope FOV autonomous tracking method for robot-assisted surgery considering pose control, hand–eye coordination, and image definition
CN103226693B (en) The identification of fishing for object based on full-view stereo vision and space positioning apparatus and method
CN102768541B (en) The control method of operating robot and system
CN114711968A (en) Non-calibration target area positioning and tracking method based on surgical robot system
Sauvée et al. Three-dimensional heart motion estimation using endoscopic monocular vision system: From artificial landmarks to texture analysis
Dumpert et al. Semi-autonomous surgical tasks using a miniature in vivo surgical robot
Vitrani et al. Robust ultrasound-based visual servoing for beating heart intracardiac surgery
CN113742992A (en) Master-slave control method based on deep learning and application
CN108616744B (en) Bionic binocular vision calibration system and calibration method

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20170829)