
CN115105202B - Focus confirming method and system for endoscope operation - Google Patents

Focus confirming method and system for endoscope operation

Info

Publication number
CN115105202B
CN115105202B (Application CN202210534840.0A)
Authority
CN
China
Prior art keywords
organ
focus
information
focal
bed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210534840.0A
Other languages
Chinese (zh)
Other versions
CN115105202A (en)
Inventor
魏云海
李草禹
邵婕
魏忠民
梅敬泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huzhou Central Hospital
Original Assignee
Huzhou Central Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huzhou Central Hospital
Priority to CN202210534840.0A
Publication of CN115105202A
Application granted
Publication of CN115105202B
Legal status: Active
Anticipated expiration

Links

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B90/361 Image-producing devices, e.g. surgical cameras
    • A61B90/37 Surgical systems with images on a monitor during operation
    • A61B90/39 Markers, e.g. radio-opaque or breast lesions markers
    • A61B2034/101 Computer-aided simulation of surgical operations
    • A61B2034/105 Modelling of the patient, e.g. for ligaments or bones
    • A61B2034/107 Visualisation of planned trajectories or target regions
    • A61B2034/108 Computer aided selection or customisation of medical implants or cutting guides
    • A61B2034/2046 Tracking techniques
    • A61B2034/2065 Tracking using image or pattern recognition
    • A61B2090/3937 Visible markers
    • A61B2090/3954 Markers magnetic, e.g. NMR or MRI
    • A61B2090/3983 Reference marker arrangements for use with image guided surgery
    • A61B2090/3991 Markers having specific anchoring means to fixate the marker to the tissue, e.g. hooks

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Robotics (AREA)
  • Gynecology & Obstetrics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract


The present invention provides a lesion confirmation method and system for endoscopic surgery, in the technical field of lesion positioning. A scanned image of the lesion organ is acquired; a three-dimensional model of the lesion organ is built from the scanned image; the scanned image is input into a preset bed adjustment model, which generates bed angle information used to adjust the operating bed; the three-dimensional model is then adjusted from the real-time picture information of the visual endoscope rod, so that the model's display view angle stays consistent with the view angle of the live picture; finally, the relative position of the positioning endoscope rod and the preliminarily located lesion is calculated from the current rod position, and the positioning rod and visual rod are controlled accordingly to determine the lesion position. The doctor can thus be assisted in locating the lesion without spending much time adjusting the operating bed, shortening the operation time.

Description

Focus confirming method and system for endoscope operation
Technical Field
The invention relates to the technical field of focus positioning, in particular to a focus confirmation method and a focus confirmation system for endoscopic surgery.
Background
Laparoscopic surgery is a relatively new minimally invasive technique, widely regarded as the direction in which surgical methods will develop, and it has been rapidly popularized clinically in recent years. In some situations it can replace traditional open surgery, relying on special surgical instruments to treat focal tissue inside the body through small incisions. The surgical trauma is small and postoperative recovery is fast.
While searching for the focus, the doctor often has to adjust the position of the operating table repeatedly in order to observe it, and some foci lie in positions that are hard to perceive, so considerable time is spent adjusting the operating table before the focus position can be determined.
Disclosure of Invention
The invention aims to provide a focus confirmation method and a focus confirmation system for endoscopic surgery, intended to solve the prior-art problem that a focus located in a hard-to-perceive position requires considerable time spent adjusting the operating table.
In a first aspect, an embodiment of the present application provides a method for confirming a lesion in an endoscopic surgery, including the steps of:
acquiring a scanning image of a focus organ, wherein the scanning image comprises a preliminary positioning focus position;
three-dimensional modeling is carried out on the focal organ according to the scanned image of the focal organ, so as to obtain a focal organ three-dimensional model, wherein the focal organ three-dimensional model comprises preliminary positioning focal positions;
Inputting a scanning image of a focus organ into a preset bed adjusting model, generating and adjusting an operating bed according to bed angle information;
the method comprises the steps of obtaining and adjusting a focus organ three-dimensional model according to real-time picture information of a visual cavity mirror rod, so that a display view angle of the focus organ three-dimensional model is consistent with a view angle of a real-time picture;
and obtaining and calculating the relative positions of the positioning endoscope rod and the preliminary positioning focus position according to the current positioning endoscope rod position and the preliminary positioning focus position so as to control the positioning endoscope rod and the vision endoscope rod to determine the focus position.
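The final step above reduces to simple vector arithmetic. The sketch below is an illustrative assumption, not the patent's implementation: the function name, the shared patient coordinate frame, and the sample coordinates are all invented for the example.

```python
import numpy as np

def relative_position(rod_position, lesion_position):
    """Vector from the positioning endoscope rod tip to the preliminarily
    located focus, plus the straight-line distance between them.
    Both inputs are 3-D coordinates in one shared frame (an assumption;
    the patent does not fix a coordinate system)."""
    rod = np.asarray(rod_position, dtype=float)
    lesion = np.asarray(lesion_position, dtype=float)
    offset = lesion - rod                  # direction to steer the rod
    distance = float(np.linalg.norm(offset))
    return offset, distance

# Example: rod at (10, 5, 0), preliminary focus at (13, 9, 0)
offset, dist = relative_position((10.0, 5.0, 0.0), (13.0, 9.0, 0.0))
```

Such an offset and distance are what a controller would use to steer the positioning rod toward the focus while the visual rod confirms it on screen.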
In the implementation process, a scanned image of the focus organ is acquired, and the scanned image contains a preliminary positioning focus position, so the preliminary focus position can be displayed. The focus organ is then modeled in three dimensions from the scanned image to obtain a focus organ three-dimensional model that also contains the preliminary positioning focus position; the three-dimensional model helps the doctor observe the focus organ. The scanned image is next input into the preset bed adjustment model, which generates bed angle information used to adjust the operating bed; because the model outputs the most suitable bed angle directly, the time normally spent adjusting the operating bed is saved. The focus organ three-dimensional model is then adjusted according to the real-time picture information of the vision endoscope rod, so that the display view angle of the model stays consistent with the view angle of the real-time picture, which assists the doctor in observing the focus. Finally, the relative position of the positioning endoscope rod and the preliminary positioning focus position is calculated from the current rod position, and the positioning endoscope rod and vision endoscope rod are controlled accordingly to determine the focus position. The doctor can therefore confirm the focus position quickly without spending much time adjusting the operating bed, shortening the operation time.
Based on the first aspect, in some embodiments of the invention, the method further comprises the steps of:
And inputting the scanning image of the focus organ into a preset operation parameter preconditioning model to generate optimal bed angle information, optimal positioning endoscopic rod parameter information and optimal visual endoscopic rod parameter information.
Based on the first aspect, in some embodiments of the invention, the method further comprises the steps of:
acquiring patient information corresponding to a scanned image of a focus organ;
And matching in a preset bed adjustment model library according to the sex information in the patient information to obtain a corresponding bed adjustment model.
Based on the first aspect, in some embodiments of the invention, the method further comprises the steps of:
acquiring and taking final operating table angle information of multiple endoscopic surgeries and scanning image information of corresponding focal organs as sample information;
training the sample information by adopting a neural network algorithm to obtain a bed adjusting model.
Based on the first aspect, in some embodiments of the invention, the method further comprises the steps of:
Acquiring and taking final operating table angle information of multiple endoscopic surgeries and scanning image information of corresponding focal organs as initial samples;
acquiring medical record information of each endoscopic surgery;
classifying the initial sample according to medical record information and preset classification rules to obtain sub-sample information of a plurality of categories;
Training the subsampled information of each category by adopting a neural network algorithm to obtain a plurality of category bed adjustment models so as to form a bed adjustment model library.
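The embodiments above describe selecting a bed adjustment model from a library keyed by patient category (for example, sex). A minimal sketch follows; the category names, the linear stand-in for a trained model, and all coefficients are invented for illustration.

```python
from typing import Callable, Dict

def make_linear_model(slope: float, intercept: float) -> Callable[[float], float]:
    """Stand-in for a trained category model: maps a lesion-depth
    feature (cm, assumed) to a bed tilt angle (degrees, assumed)."""
    return lambda depth: slope * depth + intercept

# Hypothetical model library, one trained model per category.
bed_model_library: Dict[str, Callable[[float], float]] = {
    "male": make_linear_model(1.2, 5.0),
    "female": make_linear_model(1.1, 4.0),
}

def match_bed_model(sex: str) -> Callable[[float], float]:
    # Fall back to a neutral model when the category is unseen.
    return bed_model_library.get(sex, make_linear_model(1.15, 4.5))

angle = match_bed_model("female")(10.0)
```

In practice each library entry would be a model trained on that category's sub-sample, as the steps above describe; the dictionary lookup is only the matching mechanism.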
Based on the first aspect, in some embodiments of the invention, the step of obtaining a scanned image of a focal organ, wherein the scanned image includes a preliminary positioning focal position, includes the steps of:
Obtaining focus organ information;
And scanning the focal organ according to the focal organ information to obtain a scanning image of the focal organ, wherein the scanning image comprises preliminary positioning focal positions.
Based on the first aspect, in some embodiments of the present invention, the step of three-dimensionally modeling the focal organ according to the scanned image of the focal organ to obtain a three-dimensional model of the focal organ, wherein the three-dimensional model includes the preliminary positioning focal position, includes the steps of:
Obtaining an image sequence of a plurality of angles according to the scanned image of the focus organ;
And carrying out three-dimensional modeling according to the image sequences of the multiple angles to obtain a focus organ three-dimensional model, wherein the focus organ three-dimensional model comprises preliminary positioning focus positions.
In a second aspect, embodiments of the present application provide a lesion confirmation system for use in endoscopic surgery, comprising:
the scanning image acquisition module is used for acquiring a scanning image of a focus organ, wherein the scanning image comprises a preliminary positioning focus position;
The three-dimensional modeling module is used for carrying out three-dimensional modeling on the focal organ according to the scanned image of the focal organ to obtain a focal organ three-dimensional model, wherein the focal organ three-dimensional model comprises a preliminary positioning focal position;
The bed angle adjusting module is used for inputting the scanning image of the focus organ into a preset bed adjusting model, generating and adjusting the operating bed according to the bed angle information;
The visual adjusting module is used for acquiring and adjusting the focus organ three-dimensional model according to the real-time picture information of the visual cavity mirror rod so that the display view angle of the focus organ three-dimensional model is consistent with the view angle of the real-time picture;
The focus determining module is used for obtaining and calculating the relative positions of the positioning endoscope rod and the preliminary positioning focus position according to the current positioning endoscope rod position and the preliminary positioning focus position so as to control the positioning endoscope rod and the vision endoscope rod to determine the focus position.
In the implementation process, the scanned image acquisition module acquires a scanned image of the focus organ containing a preliminary positioning focus position, so the preliminary focus position can be displayed. The three-dimensional modeling module builds a focus organ three-dimensional model from the scanned image, the model also containing the preliminary positioning focus position, which helps the doctor observe the focus organ. The bed angle adjustment module inputs the scanned image into the preset bed adjustment model to generate bed angle information and adjust the operating bed accordingly; because the model outputs the most suitable bed angle, the adjustment time of the operating bed is saved. The visual adjustment module adjusts the focus organ three-dimensional model according to the real-time picture information of the vision endoscope rod, keeping the model's display view angle consistent with the view angle of the real-time picture and assisting the doctor's observation. The focus determination module calculates the relative position of the positioning endoscope rod and the preliminary positioning focus position from the current rod position, and controls the positioning endoscope rod and vision endoscope rod to determine the focus position, so the focus can be confirmed quickly and the operation time is shortened.
In a third aspect, an embodiment of the present application provides an electronic device comprising a memory for storing one or more programs and a processor; when the one or more programs are executed by the processor, the method of any of the first aspects described above is implemented.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method as in any of the first aspects described above.
The embodiment of the invention has at least the following advantages or beneficial effects:
The embodiment of the invention provides a focus confirmation method and a focus confirmation system for endoscopic surgery. A scanned image of the focus organ containing a preliminary positioning focus position is acquired, so the preliminary focus position can be displayed. The focus organ is modeled in three dimensions from the scanned image to obtain a focus organ three-dimensional model that also contains the preliminary positioning focus position and helps the doctor observe the organ. The scanned image is input into a preset bed adjustment model, which outputs the most suitable bed angle information; the operating bed is adjusted accordingly, saving adjustment time. The focus organ three-dimensional model is adjusted according to the real-time picture information of the vision endoscope rod so that its display view angle stays consistent with the view angle of the real-time picture, assisting the doctor's observation. Finally, the relative position of the positioning endoscope rod and the preliminary positioning focus position is calculated from the current rod position, and the two rods are controlled to determine the focus position, so the focus can be confirmed quickly without spending much time adjusting the operating bed, shortening the operation time. In addition, by matching the corresponding bed adjustment model, the obtained bed angle information better matches the requirements of the operation, further shortening the adjustment time of the operating bed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for confirming a focus in an endoscopic surgery according to an embodiment of the present invention;
FIG. 2 is a flowchart of the steps for matching a bed adjustment model by organ, provided in an embodiment of the present invention;
FIG. 3 is a flowchart of the steps for matching a bed adjustment model by gender, provided in an embodiment of the present invention;
Fig. 4 is a block diagram of a focus confirming system for use in endoscopic surgery according to an embodiment of the present invention;
fig. 5 is a block diagram of an electronic device according to an embodiment of the present invention.
The reference numerals comprise: 110 - scanning image acquisition module; 120 - three-dimensional modeling module; 130 - bed angle adjustment module; 140 - vision adjustment module; 150 - focus determination module; 101 - memory; 102 - processor; 103 - communication interface.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Examples
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The various embodiments and features of the embodiments described below may be combined with one another without conflict.
Referring to fig. 1, fig. 1 is a flowchart of a method for confirming a focus in an endoscopic surgery according to an embodiment of the present invention. The focus confirmation method for the endoscopic surgery comprises the following steps:
Step S110: obtain a scanned image of a focus organ, the scanned image containing a preliminary positioning focus position. The scanned image may be image data output by medical imaging equipment such as CT or MRI. In this embodiment, the organ may be examined before scanning and a focus positioning marker may be placed, specifically through the following steps:
First, when a focus is detected by the endoscope, a positioning hook with a marking function is clamped at the focus, thereby providing the focus organ information. For example, in a gastroscopic operation the positioning hook is clamped at the focus on the inside of the stomach wall; the hook can be delivered through the endoscope and clamped into the stomach wall by endoscopic operation, and it is small, light, safe, and non-toxic. A positioning tag with a positioning recognition function is mounted at the grip of the hook, and can be implemented in two ways: the first is a miniature RFID tag, whose recognition requires a positioning antenna mounted on the positioning endoscope rod; the second is a small permanent magnet, which can be recognized by a magnetic sensor on the endoscope rod. The two technologies can also be used simultaneously as a double guarantee.
Then the focus organ is scanned according to the focus organ information to obtain a scanned image of the focus organ containing a preliminary positioning focus position. Because the metal component of the hook images with high brightness in a scanned image such as a CT image, the high-brightness position in the image is the preliminary focus position. By acquiring a scanned image of the target organ with the preliminarily located focus, a preliminary localization of the focus position is achieved.
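The high-brightness search above can be sketched as a simple threshold-and-centroid pass over a CT slice. The threshold value and the function name are illustrative assumptions; metal markers typically image far above soft-tissue intensities, but the exact cutoff would depend on the scanner.

```python
import numpy as np

def preliminary_lesion_position(ct_slice: np.ndarray, threshold: float = 3000.0):
    """Locate the metal positioning hook in one CT slice by its
    high-brightness imaging. The threshold is an assumed value."""
    bright = np.argwhere(ct_slice >= threshold)
    if bright.size == 0:
        return None                        # no marker in this slice
    row, col = bright.mean(axis=0)         # centroid of the bright blob
    return float(row), float(col)

slice_ = np.zeros((64, 64))
slice_[20:22, 30:32] = 4000.0              # simulated 2x2 metal hook
pos = preliminary_lesion_position(slice_)
```

Running the same pass per slice and stacking the slice indices would give the 3-D preliminary focus coordinates used later for bed adjustment and rod guidance.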
Step S120: perform three-dimensional modeling of the focus organ from its scanned image to obtain a focus organ three-dimensional model containing the preliminary positioning focus position. Because the metal component of the hook images with high brightness in the CT image, the resulting stomach three-dimensional model also carries an obvious highlight mark at the focus position. Three-dimensional modeling may be performed by volume rendering or surface rendering, for example with the Visualization Toolkit (VTK). VTK-based three-dimensional reconstruction mainly comprises surface rendering and volume rendering: the ray casting algorithm belongs to volume rendering, which operates on the entire volume data and processes every voxel in the volume data field, giving a more accurate reconstruction; the marching cubes algorithm belongs to surface rendering, which extracts the surface contour information of the part to be reconstructed and renders only the object's surface. Surface rendering may be performed as follows:
First, the pixel values of all converted pixel points are calculated from the acquired scanned image and the corresponding image parameters. A new image is then constructed from these pixel values and subjected to binarization and median filtering to obtain a basic CT image. Finally, the similarity between adjacent basic CT images is calculated to obtain CT image sequences of the part to be processed at multiple angles.
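The binarization and median-filtering steps above can be sketched with plain NumPy. This is a minimal stand-in, not the patent's implementation: the threshold, the 3x3 window, and the edge handling are all assumptions.

```python
import numpy as np

def binarize(img: np.ndarray, threshold: float) -> np.ndarray:
    """Binarization step: 1 where the pixel value reaches the
    (assumed) threshold, else 0."""
    return (img >= threshold).astype(np.uint8)

def median3x3(img: np.ndarray) -> np.ndarray:
    """3x3 median filter with edge pixels kept as-is -- a minimal
    stand-in for the median filtering described above. It removes
    isolated speckle noise while preserving larger regions."""
    out = img.copy()
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            out[r, c] = np.median(img[r - 1:r + 2, c - 1:c + 2])
    return out

raw = np.array([[0, 0, 0, 0],
                [0, 9, 0, 0],              # lone bright speckle
                [0, 0, 0, 0],
                [0, 0, 0, 0]], dtype=float)
binary = binarize(raw, 5.0)
clean = median3x3(binary.astype(float))    # speckle suppressed
```

A production pipeline would use a vectorized filter, but the effect on the basic CT image is the same: single-pixel noise is removed before the slices are compared for similarity.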
Then, three-dimensional modeling is performed on the image sequences of the multiple angles to obtain the focus organ three-dimensional model containing the preliminary positioning focus position. The modeling first calculates the pixel spacing of each CT image in each angle's image sequence, and then builds a three-dimensional model of the part to be processed with the marching cubes algorithm. The marching cubes algorithm belongs to the prior art and is not described in detail here.
Step S130: input the scanned image of the focus organ into a preset bed adjustment model, generate bed angle information, and adjust the operating bed accordingly. The operating bed may be one whose tilt angle and height can be controlled automatically. The preset bed adjustment model may be a model trained with a neural network on historical bed data; given an input scanned image, it computes the corresponding bed angle information. The model first extracts the focus position information from the scanned image of the focus organ and then derives the bed angle information from it. The bed angle information includes the inclination angle of the operating bed. The focus position information may be coordinate information, obtained by establishing a coordinate system on the focus organ. To adjust the operating bed, the current angle is read from an angle sensor on the bed, the adjustment angle is calculated from the target bed angle information and the current angle, and an angle adjustment command is generated to drive the bed. The bed adjustment model can be obtained through the following steps:
Firstly, the final operating-bed angle information of multiple endoscopic operations and the scanned image information of the corresponding focal organs are acquired as sample information. The sample information covers multiple completed operations and includes the final operating-bed angle information, the scanned image information of the corresponding focal organs, and so on. The scanned image information of a focal organ includes the image information, the lesion position information, etc.; the lesion position information may be coordinate information.
Then, the sample information is trained with a neural network algorithm to obtain the bed adjustment model. During training, the lesion position information in the scanned image information of the focal organs in the sample information is first extracted, and then the lesion position information and the corresponding final operating-bed angle information are trained with a neural network algorithm to obtain the bed adjustment model. The neural network algorithm belongs to the prior art and is not described in detail here.
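A toy version of this training step, with ordinary least squares standing in for the neural network (same contract: lesion position in, bed angle out; the sample coordinates and angles are fabricated for illustration):

```python
import numpy as np

# Fabricated sample information: lesion coordinates (x, y, z) extracted
# from the scanned images, paired with the final operating-bed tilt angle.
lesion_coords = np.array([[10., 4., 2.],
                          [ 6., 8., 1.],
                          [12., 2., 3.],
                          [ 8., 5., 2.]])
bed_angles = np.array([14.0, 9.0, 16.0, 11.0])

# Fit a linear model as a lightweight stand-in for neural-network training.
X = np.hstack([lesion_coords, np.ones((len(lesion_coords), 1))])  # bias column
weights, *_ = np.linalg.lstsq(X, bed_angles, rcond=None)

def predict_bed_angle(coord):
    """Predict an operating-bed tilt angle from a lesion coordinate."""
    return float(np.append(coord, 1.0) @ weights)
```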
Step 140: acquire and adjust the focal organ three-dimensional model according to the real-time picture information of the visual endoscope rod, so that the display view angle of the focal organ three-dimensional model is consistent with the view angle of the real-time picture. The real-time picture information can be obtained by wired transmission and comprises the angle information of the visual endoscope rod, monitored in real time by a displacement sensor at the tail of the rod, and the real-time visual information collected by an oblique camera at the head of the rod. In this step, the oblique camera is arranged at the head of the visual endoscope rod, and its imaging angle forms a 30-degree angle with the rod. While the visual endoscope rod moves and rotates, the displacement sensor at the tail monitors in real time the distance of the rod from the focus, the detected rotation angle, and the zoom multiple of the camera, and sends these to the main control unit for processing, so that the relative distance between the current rod and the focus position and the current view angle direction are known. This provides the basis for subsequently adjusting the three-dimensional model so that its display view angle stays synchronized with the current display view angle of the rod. The three-dimensional model is then adjusted and transformed according to the angle information and the visual information of the visual endoscope rod, so that its display view angle is consistent with the view angle of the picture returned by the rod. The adjustment transformation includes rotation, scaling, and the like.
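The rotation/scaling adjustment can be illustrated with a rotation about the rod axis plus the camera zoom. This is a NumPy sketch under simplifying assumptions; in practice the fixed 30-degree oblique offset of the camera would contribute an additional constant rotation:

```python
import numpy as np

def rotation_about_z(angle_deg):
    """Rotation matrix about the rod's longitudinal (z) axis."""
    a = np.deg2rad(angle_deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def align_model_view(vertices, rod_rotation_deg, zoom):
    """Rotate and scale model vertices so the displayed view tracks the
    rod's detected rotation angle and the camera's zoom multiple."""
    return (vertices @ rotation_about_z(rod_rotation_deg).T) * zoom

verts = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
aligned = align_model_view(verts, rod_rotation_deg=90.0, zoom=2.0)
```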
Step 150: obtain and calculate the relative position of the positioning endoscope rod and the preliminary positioning focus position according to the current positioning endoscope rod position and the preliminary positioning focus position, so as to control the positioning endoscope rod and the visual endoscope rod to determine the focus position. A three-dimensional coordinate system is established with the focus as the centre. The real-time position of the positioning endoscope rod is monitored by a displacement sensor on the rod; its coordinates are updated in real time and sent by wired transmission to the main control unit for processing, giving the current position of the rod. This position can be projected onto the three-dimensional image, and the relative position of the current rod and the preliminary positioning focus position is then calculated. A corresponding control command can be generated from this relative position and sent to the positioning endoscope rod to control the movement of the positioning and visual endoscope rods, assisting the doctor in rapidly determining the focus position.
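In the lesion-centred coordinate system, the relative-position calculation reduces to a vector difference and a norm. A minimal sketch with hypothetical coordinates:

```python
import numpy as np

def relative_position(rod_pos, lesion_pos=(0.0, 0.0, 0.0)):
    """Offset vector from the positioning endoscope rod tip to the
    preliminarily located lesion (the origin of the lesion-centred
    coordinate system), plus the straight-line distance."""
    offset = np.asarray(lesion_pos, dtype=float) - np.asarray(rod_pos, dtype=float)
    return offset, float(np.linalg.norm(offset))

# Hypothetical rod position in the lesion-centred coordinate system (mm).
offset, dist = relative_position(rod_pos=(3.0, 0.0, 4.0))
# A control command would then move the rod along `offset` until `dist`
# falls below a working threshold.
```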
In the above implementation, a scanned image of the focal organ, containing the preliminary positioning focus position, is acquired so that this position can be displayed. The focal organ is then three-dimensionally modelled from the scanned image to obtain a focal organ three-dimensional model that includes the preliminary positioning focus position; the three-dimensional model helps the doctor observe the focal organ. The scanned image is next input into the preset bed adjustment model, bed angle information is generated, and the operating bed is adjusted accordingly; the most suitable bed angle information is obtained through the preset model, saving operating-bed adjustment time. The focal organ three-dimensional model is then acquired and adjusted according to the real-time picture information of the visual endoscope rod so that its display view angle is consistent with the view angle of the real-time picture, assisting the doctor in observing the focus. Finally, the relative position of the positioning endoscope rod and the preliminary positioning focus position is obtained and calculated from the current rod position and the preliminary positioning focus position, so as to control the positioning and visual endoscope rods to determine the focus position quickly and accurately.
The scanned image of the focal organ can also be input into a preset surgical-parameter preconditioning model to generate optimal bed angle information, optimal positioning endoscope rod parameter information, and optimal visual endoscope rod parameter information. The preset preconditioning model may be a model obtained by training a neural network on historical operation data; it computes the three groups of optimal parameters from the input scanned image. The historical operation data comprises scanned images taken during surgery, together with the bed parameter information, positioning endoscope rod parameter information, and visual endoscope rod parameter information at the moment the focus position was finally determined; training this data with a neural network through machine learning yields the surgical-parameter preconditioning model. The optimal bed angle information includes the optimal tilt angle of the operating bed, the optimal height of the operating bed, and so on; the optimal positioning endoscope rod parameter information includes the optimal stretching length and optimal stretching angle of the positioning endoscope rod; and the optimal visual endoscope rod parameter information includes the optimal stretching length and optimal stretching angle of the visual endoscope rod.
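The output contract of such a preconditioning model might look like the following stub; every name and coefficient here is a hypothetical placeholder standing in for the trained model, not the model itself:

```python
def precondition_parameters(lesion_coord):
    """Hypothetical stand-in for the surgical-parameter preconditioning
    model: maps a lesion coordinate from the scanned image to the three
    optimal parameter groups named in the text. Every coefficient is an
    illustrative placeholder, not a trained value."""
    x, y, z = lesion_coord
    return {
        "bed": {"tilt_deg": 0.8 * x, "height_mm": 600.0 + 5.0 * z},
        "positioning_rod": {"stretch_mm": 40.0 + 2.0 * y,
                            "stretch_angle_deg": 15.0},
        "visual_rod": {"stretch_mm": 35.0 + 2.0 * y,
                       "stretch_angle_deg": 30.0},
    }

params = precondition_parameters((10.0, 4.0, 2.0))
```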
Before an operation is performed, the operation table, the positioning endoscope rod and the vision endoscope rod can be respectively adjusted through the obtained optimal bed angle information, the optimal positioning endoscope rod parameter information and the optimal vision endoscope rod parameter information, so that a doctor can quickly find a focus during the operation, and the operation time is further saved.
To further make the bed angle information meet the operation requirement, the preset bed adjustment model may also be matched according to the focal organ; please refer to fig. 2, which is a flowchart of the steps of matching the bed adjustment model according to the organ, in accordance with an embodiment of the present invention. The method specifically comprises the following steps:
first, organ information corresponding to a scanned image of a focal organ, which is an organ requiring surgery, such as a stomach, an intestine, etc., is acquired.
Then, matching is carried out in a preset bed adjustment model library according to the organ information to obtain the corresponding bed adjustment model. The preset library contains multiple types of bed adjustment models, which may be different models for different organs; matching means finding the model for the corresponding organ. By matching the model of the corresponding organ, the obtained bed angle information better fits the operation requirement, further shortening operating-bed adjustment time.
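Matching by organ amounts to a keyed lookup in the model library. A sketch with hypothetical library entries (the dict values stand in for models trained per organ):

```python
# Hypothetical bed-adjustment model library keyed by organ.
BED_MODEL_LIBRARY = {
    "stomach":   {"base_tilt_deg": 10.0},
    "intestine": {"base_tilt_deg": 15.0},
}

def match_bed_model(organ, library=BED_MODEL_LIBRARY, default="stomach"):
    """Find the bed adjustment model for the organ in the scanned image,
    falling back to a default model when no organ-specific one exists."""
    return library.get(organ, library[default])

model = match_bed_model("intestine")
```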
In order to further make the bed angle information meet the operation requirement, the preset bed adjustment model may be matched through patient information, please refer to fig. 3, fig. 3 is a flowchart illustrating steps of matching the bed adjustment model according to gender according to an embodiment of the present invention. The method specifically comprises the following steps:
firstly, patient information corresponding to a scanned image of a focus organ is acquired, wherein the patient information comprises information such as name, age, sex and the like.
Then, matching is carried out in a preset bed adjustment model library according to the sex information in the patient information to obtain the corresponding bed adjustment model. The preset library contains multiple types of bed adjustment models, which can be divided by sex into models suitable for men and models suitable for women; matching means finding the model suitable for the corresponding sex, so that the obtained bed angle information better fits the operation requirement, further shortening operating-bed adjustment time. Corresponding bed adjustment models can likewise be set for different age groups; this is similar to the above and is not repeated here.
The bed adjustment model library can be obtained through the following steps:
firstly, acquiring the final operating-bed angle information of multiple endoscopic operations and the scanned image information of the corresponding focal organs as initial samples;
and then obtaining medical record information of each endoscopic surgery, wherein the medical record information comprises patient information, focus organ information and the like.
Next, the initial samples are classified according to the medical record information and preset classification rules to obtain sub-sample information of multiple categories. The classification rules may classify by the sex information in the patient information, by the focal organ information, and so on, and can be set according to the bed adjustment model library actually needed. Classifying the initial samples yields sub-sample information of multiple categories, which facilitates subsequently training each category to obtain the corresponding bed adjustment model library.
Finally, the sub-sample information of each category is trained with a neural network algorithm to obtain bed adjustment models of multiple categories, forming the bed adjustment model library. During training, the lesion position information in the scanned image information of the focal organs in the sub-sample information is first extracted, and then the lesion position information and the corresponding final operating-bed angle information are trained with a neural network algorithm to obtain a bed adjustment model. The neural network algorithm belongs to the prior art and is not described in detail here. By classifying the initial samples and training each category of sub-sample information separately, different bed adjustment models are obtained to form the bed adjustment model library.
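The classify-then-train pipeline for building the model library can be sketched as follows; least squares again stands in for the neural-network training, and the samples and the sex-based rule are fabricated for illustration:

```python
import numpy as np

def build_model_library(samples, classify):
    """Classify the initial samples with the preset rule, then fit one
    model per category (least squares stands in for the neural-network
    training step)."""
    buckets = {}
    for lesion_coord, bed_angle, record in samples:
        buckets.setdefault(classify(record), []).append((lesion_coord, bed_angle))
    library = {}
    for category, sub in buckets.items():
        X = np.array([list(c) + [1.0] for c, _ in sub])   # bias column
        y = np.array([a for _, a in sub])
        library[category], *_ = np.linalg.lstsq(X, y, rcond=None)
    return library

# Fabricated completed-surgery samples with medical-record information.
samples = [
    ((10.0, 4.0, 2.0), 14.0, {"sex": "male"}),
    (( 6.0, 8.0, 1.0),  9.0, {"sex": "male"}),
    (( 9.0, 3.0, 2.0), 12.0, {"sex": "female"}),
    (( 7.0, 6.0, 1.0), 10.0, {"sex": "female"}),
]
library = build_model_library(samples, classify=lambda record: record["sex"])
```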
Based on the same inventive concept, the invention also provides a focus confirming system for use in endoscopic surgery; please refer to fig. 4, which is a block diagram of a focus confirming system for use in endoscopic surgery according to an embodiment of the invention. The focus confirming system for use in endoscopic surgery includes:
a scan image acquisition module 110, configured to acquire a scan image of a focal organ, where the scan image includes a preliminary localization focal position;
The three-dimensional modeling module 120 is configured to perform three-dimensional modeling on a focal organ according to the scanned image of the focal organ, so as to obtain a focal organ three-dimensional model, where the focal organ three-dimensional model includes a preliminary positioning focal position;
the bed angle adjusting module 130 is used for inputting the scanning image of the focus organ into a preset bed adjusting model, generating and adjusting the operating bed according to the bed angle information;
The vision adjusting module 140 is configured to acquire and adjust the focal organ three-dimensional model according to the real-time picture information of the visual endoscope rod, so that the display view angle of the focal organ three-dimensional model is consistent with the view angle of the real-time picture;
The focus determining module 150 is configured to obtain and calculate a relative position of the positioning endoscope rod and the preliminary positioning focus position according to the current positioning endoscope rod position and the preliminary positioning focus position, so as to control the positioning endoscope rod and the vision endoscope rod to determine the focus position.
In the above implementation, the scanned image acquisition module 110 acquires a scanned image of the focal organ containing the preliminary positioning focus position, so that this position can be displayed. The three-dimensional modeling module 120 three-dimensionally models the focal organ from the scanned image to obtain a focal organ three-dimensional model that includes the preliminary positioning focus position; the model helps the doctor observe the focal organ. The bed angle adjusting module 130 inputs the scanned image into the preset bed adjustment model, generates bed angle information, and adjusts the operating bed accordingly; the most suitable bed angle information is obtained through the preset model, saving operating-bed adjustment time. The vision adjusting module 140 acquires and adjusts the focal organ three-dimensional model according to the real-time picture information of the visual endoscope rod so that its display view angle is consistent with the view angle of the real-time picture, assisting the doctor in observing the focus. The focus determining module 150 obtains and calculates the relative position of the positioning endoscope rod and the preliminary positioning focus position from the current rod position and the preliminary positioning focus position, so as to control the positioning and visual endoscope rods to determine the focus position quickly and accurately.
Referring to fig. 5, fig. 5 is a schematic block diagram of an electronic device according to an embodiment of the present application. The electronic device comprises a memory 101, a processor 102 and a communication interface 103, which are electrically connected to each other, directly or indirectly, to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The memory 101 may be used to store software programs and modules, such as the program instructions/modules corresponding to the focus confirmation system for use in endoscopic surgery according to embodiments of the present application, and the processor 102 executes the software programs and modules stored in the memory 101, thereby performing various functional applications and data processing. The communication interface 103 may be used for communication of signaling or data with other node devices.
The memory 101 may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), etc.
The processor 102 may be an integrated circuit chip with signal processing capabilities. The processor 102 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc., or may be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
It will be appreciated that the configuration shown in fig. 5 is merely illustrative, and that the electronic device may also include more or fewer components than shown in fig. 5, or have a different configuration than shown in fig. 5. The components shown in fig. 5 may be implemented in hardware, software, or a combination thereof.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative, for example, of the flowcharts and block diagrams in the figures that illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The storage medium includes a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.
It will be evident to those skilled in the art that the application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (8)

1. A lesion confirmation system for use in endoscopic surgery, comprising:
the scanning image acquisition module is used for acquiring a scanning image of a focus organ, wherein the scanning image comprises a preliminary positioning focus position;
the three-dimensional modeling module is used for carrying out three-dimensional modeling on the focal organ according to the scanned image of the focal organ to obtain a focal organ three-dimensional model, wherein the focal organ three-dimensional model comprises a preliminary positioning focal position;
The bed angle adjusting module is used for inputting the scanning image of the focus organ into a preset bed adjusting model, and when the bed adjusting model is used for calculating, focus position information in the scanning image of the focus organ is calculated firstly, then bed angle information is obtained according to the focus position information, and finally the operating bed is adjusted according to the bed angle information;
The visual adjusting module is used for: during the movement and rotation of the visual endoscope rod, monitoring in real time, through the displacement sensor at the tail of the rod, the distance of the rod from the focus, the detected rotation angle, and the zoom multiple of the camera, and sending these to the main control unit for processing, so as to obtain the relative distance between the current visual endoscope rod and the focus position and the current view angle direction; and acquiring and adjusting the focal organ three-dimensional model according to the real-time picture information of the visual endoscope rod, so that the display view angle of the focal organ three-dimensional model is consistent with the view angle of the real-time picture;
The focus determining module is used for obtaining and calculating the relative positions of the positioning endoscope rod and the preliminary positioning focus position according to the current positioning endoscope rod position and the preliminary positioning focus position so as to control the positioning endoscope rod and the vision endoscope rod to determine the focus position.
2. An electronic device, comprising:
a memory for storing one or more programs;
A processor;
when the one or more programs are executed by the processor, a focus confirmation method for use in endoscopic surgery is implemented, the method comprising the following steps:
acquiring a scanning image of a focus organ, wherein the scanning image comprises a preliminary positioning focus position;
Three-dimensional modeling is carried out on the focal organ according to the scanned image of the focal organ, so as to obtain a focal organ three-dimensional model, wherein the focal organ three-dimensional model comprises preliminary positioning focal positions;
Inputting a scanning image of a focus organ into a preset bed adjusting model, firstly calculating focus position information in the scanning image of the focus organ when the bed adjusting model calculates, then obtaining bed angle information according to the focus position information, and finally adjusting the operating bed according to the bed angle information;
during the movement and rotation of the visual endoscope rod, monitoring in real time, through a displacement sensor at the tail of the rod, the distance of the rod from the focus, the detected rotation angle, and the zoom multiple of the camera, and sending these to a main control unit for processing, so that the relative distance between the current visual endoscope rod and the focus position and the current view angle direction are known; and acquiring and adjusting the focal organ three-dimensional model according to the real-time picture information of the visual endoscope rod, so that the display view angle of the focal organ three-dimensional model is consistent with the view angle of the real-time picture;
and obtaining and calculating the relative positions of the positioning endoscope rod and the preliminary positioning focus position according to the current positioning endoscope rod position and the preliminary positioning focus position so as to control the positioning endoscope rod and the vision endoscope rod to determine the focus position.
3. The electronic device according to claim 2, wherein the focus confirmation method for use in endoscopic surgery further comprises the following steps:
And inputting the scanning image of the focus organ into a preset operation parameter preconditioning model to generate optimal bed angle information, optimal positioning endoscopic rod parameter information and optimal visual endoscopic rod parameter information.
4. The electronic device according to claim 2, wherein the focus confirmation method for use in endoscopic surgery further comprises the following steps:
acquiring patient information corresponding to a scanned image of a focus organ;
And matching in a preset bed adjustment model library according to the sex information in the patient information to obtain a corresponding bed adjustment model.
5. The electronic device according to claim 2, wherein the focus confirmation method for use in endoscopic surgery further comprises the following steps:
acquiring and taking final operating table angle information of multiple endoscopic surgeries and scanning image information of corresponding focal organs as sample information;
training the sample information by adopting a neural network algorithm to obtain a bed adjusting model.
6. The electronic device according to claim 2, wherein the focus confirmation method for use in endoscopic surgery further comprises the following steps:
Acquiring and taking final operating table angle information of multiple endoscopic surgeries and scanning image information of corresponding focal organs as initial samples;
acquiring medical record information of each endoscopic surgery;
classifying the initial sample according to medical record information and preset classification rules to obtain sub-sample information of a plurality of categories;
Training the subsampled information of each category by adopting a neural network algorithm to obtain a plurality of category bed adjustment models so as to form a bed adjustment model library.
7. The electronic device according to claim 2, wherein the step of acquiring a scanned image of the focal organ, the scanned image including the preliminary positioning focal position, comprises the following steps:
Obtaining focus organ information;
And scanning the focal organ according to the focal organ information to obtain a scanning image of the focal organ, wherein the scanning image comprises the preliminary positioning focal position.
8. The electronic device according to claim 2, wherein the step of three-dimensionally modeling the focal organ according to the scanned image of the focal organ to obtain a focal organ three-dimensional model, the focal organ three-dimensional model including the preliminary positioning focal position, comprises the following steps:
Obtaining an image sequence of a plurality of angles according to the scanned image of the focus organ;
and carrying out three-dimensional modeling according to the image sequences of the plurality of angles to obtain a focus organ three-dimensional model, wherein the focus organ three-dimensional model comprises preliminary positioning focus positions.
Publications (2)

Publication Number | Publication Date
CN115105202A (en) | 2022-09-27
CN115105202B (en) | 2025-07-18

Family

ID=83325560

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210534840.0A Active CN115105202B (en) 2022-05-17 2022-05-17 Focus confirming method and system for endoscope operation

Country Status (1)

Country Link
CN (1) CN115105202B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861298B (en) * 2023-02-15 2023-05-23 浙江华诺康科技有限公司 Image processing method and device based on endoscopic visualization

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103356155A (en) * 2013-06-24 2013-10-23 清华大学深圳研究生院 Virtual endoscope assisted cavity lesion examination system
CN106255465A (en) * 2014-01-27 2016-12-21 美国医软科技公司 Guidance and tracking method and system with position and orientation calibration for surgical instruments

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9572519B2 (en) * 1999-05-18 2017-02-21 Mediguide Ltd. Method and apparatus for invasive device tracking using organ timing signal generated from MPS sensors
WO2019163890A1 (en) * 2018-02-21 2019-08-29 オリンパス株式会社 Medical system and medical system activation method
CN113143459B (en) * 2020-01-23 2023-07-25 Hisense Visual Technology Co., Ltd. Navigation method and device for laparoscopic augmented reality operation and electronic equipment

Also Published As

Publication number Publication date
CN115105202A (en) 2022-09-27

Similar Documents

Publication Publication Date Title
US20240205377A1 (en) Augmented reality display for fluorescence guided surgery
US10022199B2 (en) Registration correction based on shift detection in image data
WO2019181632A1 (en) Surgical assistance apparatus, surgical method, non-transitory computer readable medium and surgical assistance system
CN108701170B (en) Image processing system and method for generating a three-dimensional (3D) view of an anatomical portion
US10055848B2 (en) Three-dimensional image segmentation based on a two-dimensional image information
JP6083103B2 (en) Image complementation system for image occlusion area, image processing apparatus and program thereof
US10078906B2 (en) Device and method for image registration, and non-transitory recording medium
KR20210051141A (en) Method, apparatus and computer program for providing augmented reality based medical information of patient
JP2003265408A (en) Endoscope guiding apparatus and method
US20230196595A1 (en) Methods and systems for registering preoperative image data to intraoperative image data of a scene, such as a surgical scene
US10102638B2 (en) Device and method for image registration, and a nontransitory recording medium
KR102433473B1 (en) Method, apparatus and computer program for providing augmented reality based medical information of patient
KR101767005B1 (en) Method and apparatus for matching images using contour-based registration
KR100346363B1 (en) Method and apparatus for 3d image data reconstruction by automatic medical image segmentation and image guided surgery system using the same
CN114126531B (en) Medical imaging system, medical imaging processing method and medical information processing equipment
CN115105202B (en) Focus confirming method and system for endoscope operation
JP6476125B2 (en) Image processing apparatus and surgical microscope system
WO2009027088A1 (en) Augmented visualization in two-dimensional images
US10631948B2 (en) Image alignment device, method, and program
US10049480B2 (en) Image alignment device, method, and program
US20210082164A1 (en) Endoscopic surgery support apparatus, endoscopic surgery support method, and endoscopic surgery support system
CN113614607B (en) Medical observation system, method and medical observation device
US20240016365A1 (en) Image processing device, method, and program
US20250268661A1 (en) Apparatus and methods for performing a medical procedure
US20230210627A1 (en) Three-dimensional instrument pose estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant