CN111658142A - MR-based focus holographic navigation method and system
- Publication number: CN111658142A
- Application number: CN201910171406.9A
- Authority: CN (China)
- Prior art keywords: coordinate system, image, camera, holographic, converting
- Prior art date: 2019-03-07
- Legal status: Pending (an assumption based on the published record, not a legal conclusion)
Classifications
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
- A61B2034/107—Visualisation of planned trajectories or target regions
- A61B2034/108—Computer aided selection or customisation of medical implants or cutting guides
- A61B2034/2046—Tracking techniques
- A61B2034/2065—Tracking using image or pattern recognition
Abstract
The invention discloses an MR-based focus holographic navigation method and system, wherein the method comprises the following steps: step S100, collecting DICOM original data; step S200, importing CT and MRI data; step S300, processing the data; step S400, modeling a three-dimensional grid based on the data processed in step S300; step S500, establishing a world coordinate system unifying the virtual and actual operation scene; step S600, uniformly fusing virtual and real characteristic parts in the three-dimensional grid; step S700, operating the surgical instruments and dynamically matching their tracked trajectories; and step S800, performing holographic motion visual navigation. The invention makes preoperative planning more complete and reliable, improves the safety of the operation, and promotes the surgical skill of doctors and the development of the medical industry.
Description
Technical Field
The invention relates to the technical field of medical instruments, and in particular to an MR-based focus holographic navigation method and system.
Background
With the continuous development of computer vision and related technologies, virtual reality and augmented reality are gradually entering everyday life, and mixed reality has advanced rapidly as a more complete solution with a wider field of application than either. Mixed reality offers sensory effects that virtual reality and augmented reality cannot achieve, giving users a strong sense of realism, for example allowing multiple people to share and manipulate real-time three-dimensional content. At the same time, mixed reality technology can be applied in a broader range of scenarios than virtual reality or augmented reality.
Mixed Reality (MR) is a further development of virtual reality technology. It builds an interactive feedback loop between the virtual world, the real world and the user by presenting virtual scene information within the real scene, enhancing the realism of the user's experience. Mixed reality encompasses both augmented reality and augmented virtuality, and refers to a new visual environment created by merging the real and virtual worlds, in which physical and digital objects coexist and interact in real time.
The convenience that mixed reality technology brings to the medical field can be seen, for example, in the invention patent published as CN106109015A, which discloses a head-mounted medical system and its operating method. The system comprises an AR/MR head-mounted display device worn on the doctor's head and a laser generator arranged above the operating table. An artificial bone marker is made on the patient's body surface, and the laser generator is aimed at the body surface of the diseased part. A 3D model of the diseased part obtained by CT scanning is loaded into the AR/MR head-mounted display device; the doctor wears and starts the device and aligns the line of sight with the artificial bone marker, the camera in the device recognizes the marker, and the 3D model of the diseased part is displayed at the corresponding position on the patient's body. The doctor wearing the device can then walk freely around the operating table, observe the 3D model of the affected part, and guide the operation. Compared with the prior art, that system is more efficient, more intuitive, safer and more functional.
However, the above technique only presents the lesion body; the medical practitioner cannot perform interactive or simulated operations on it, so it yields no substantial medical benefit. A method and system capable of simulating operations is therefore urgently needed in the art.
Disclosure of Invention
The invention aims to provide an MR-based focus holographic navigation method and system, so that preoperative planning is more complete and reliable, the safety of the operation is improved, and the surgical skill of doctors and the development of the medical industry are promoted.
The technical scheme adopted by the invention to solve this technical problem is as follows. An MR-based focus holographic navigation method comprises the following steps:
step S100, collecting DICOM original data;
step S200, importing CT and MRI data;
step S300, processing the data;
step S400, modeling a three-dimensional grid based on the data processed in step S300;
step S500, establishing a world coordinate system unifying the virtual and actual operation scene;
step S600, uniformly fusing virtual and real characteristic parts in the three-dimensional grid;
step S700, operating the surgical instruments and dynamically matching their tracked trajectories;
and step S800, performing holographic motion visual navigation.
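By way of illustration only, the eight steps above can be read as a processing pipeline. The following Python sketch outlines them as stubs; every function name, signature and data shape is a hypothetical placeholder, not part of the claimed method.

```python
# Hypothetical sketch of the S100-S800 pipeline; all names, signatures
# and data shapes are illustrative placeholders, not the claimed method.
import numpy as np

def collect_dicom(paths):
    """S100: collect raw DICOM data (stub returns a dummy volume)."""
    return np.zeros((64, 64, 64), dtype=np.int16)

def import_ct_mri(volume):
    """S200: import CT and MRI data into one container."""
    return {"ct": volume, "mri": volume}

def preprocess(data):
    """S300: process the data (identity placeholder)."""
    return data

def build_mesh(data):
    """S400: model a three-dimensional grid from the processed data."""
    return {"vertices": np.zeros((0, 3)), "faces": np.zeros((0, 3), int)}

def build_world_frame():
    """S500: world coordinate system unifying virtual and real scenes."""
    return np.eye(4)  # 4x4 rigid transform; identity as placeholder

def fuse_features(mesh, world):
    """S600: fuse virtual and real characteristic parts in the grid."""
    return mesh

def track_instrument(world):
    """S700: dynamically match the tracked instrument trajectory."""
    return np.zeros((1, 3))

def navigate(mesh, trajectory):
    """S800: holographic motion visual navigation (no-op placeholder)."""

if __name__ == "__main__":
    mesh = build_mesh(preprocess(import_ct_mri(collect_dicom(["series/"]))))
    world = build_world_frame()
    navigate(fuse_features(mesh, world), track_instrument(world))
```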
Further, step S500 includes a coordinate conversion procedure:
step S501, taking the position of a point in the world coordinate system;
step S502, converting it to the position of that point in the camera coordinate system;
step S503, converting that to the position of the point in the imaging plane coordinate system;
step S504, converting that to the position of the point in the image coordinate system.
Further, the formula for converting the world coordinate system to the camera coordinate system is:

$$P_C = R \cdot P_W + T$$

where $P_W$ is a point in the world coordinate system, $P_C$ is the corresponding point in the camera coordinate system, $T = (T_x, T_y, T_z)$ is the translation vector, and $R(\alpha, \beta, \gamma)$ is the rotation matrix.
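A minimal numeric sketch of this transform is given below, assuming the rotation is specified by Euler angles; the axis order and angle convention are assumptions, since the text does not fix them.

```python
import numpy as np

def euler_to_rotation(alpha, beta, gamma):
    """Build R(alpha, beta, gamma) as Rz(gamma) @ Ry(beta) @ Rx(alpha);
    the axis order is an assumed convention, not fixed by the patent."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def world_to_camera(p_w, R, T):
    """P_C = R * P_W + T."""
    return R @ p_w + T

# Example: rotate 90 degrees about z and translate by T.
R = euler_to_rotation(0.0, 0.0, np.pi / 2)
T = np.array([10.0, -5.0, 100.0])
p_c = world_to_camera(np.array([1.0, 2.0, 3.0]), R, T)
```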
Further, the formula for converting the camera coordinate system to the imaging plane coordinate system is the perspective projection:

$$u = f \, \frac{x}{z}, \qquad v = f \, \frac{y}{z}$$

where $f$ is the focal length and $(x, y, z)$ are the coordinates of the point in the camera coordinate system.
The formula for converting the imaging plane coordinate system to the image coordinate system is:

$$c = \frac{u}{s_x} + c_x, \qquad r = \frac{v}{s_y} + c_y$$

where $s_x$ and $s_y$ are the distances between adjacent pixels in the horizontal and vertical directions of the image sensor, $(c_x, c_y)$ is the principal point, and the corrected $(u, v)$ is thereby converted from the imaging plane into the image coordinate system with coordinates $(r, c)$.
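The last two conversions can be combined in one short sketch under the standard pinhole model; the numeric values below (focal length, pixel pitch, principal point) are illustrative assumptions.

```python
import numpy as np

def camera_to_pixel(p_c, f, sx, sy, cx, cy):
    """Project a camera-frame point to image coordinates (r, c):
    perspective projection u = f*x/z, v = f*y/z, then scaling by the
    pixel pitches sx, sy and shifting by the principal point (cx, cy)."""
    x, y, z = p_c
    u, v = f * x / z, f * y / z   # imaging plane coordinates
    c = u / sx + cx               # column (horizontal pixel index)
    r = v / sy + cy               # row (vertical pixel index)
    return r, c

# Example: 8 mm lens, 5 micrometre pixels, assumed principal point.
r, c = camera_to_pixel(np.array([0.02, -0.01, 0.5]),
                       f=0.008, sx=5e-6, sy=5e-6, cx=640.0, cy=480.0)
```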
Further, an MR-based focus holographic navigation system comprises a cloud server, an image workstation, a camera, a projector and a head-mounted device. The cloud server is connected to the image workstation by wire, the camera is connected to the image workstation by cable, and the projector and the head-mounted device are both connected to the image workstation wirelessly.
The beneficial effects of the invention are as follows. Virtual reconstruction of human tissue and of surgical instrument images is realized; the virtual images are automatically identified and fused with the real human body and real surgical instruments, and the virtual instrument image is automatically identified and tracked while the real instrument moves, realizing intraoperative navigation, reducing operative risk and shortening operation time. The surgeon can grasp the patient's internal information without extensive surgical opening, with that information overlaid on the augmented reality device, enabling a precise surgical treatment plan. On this basis, medical staff can continually improve their methods, raising the overall level of care and the operation success rate.
Drawings
In order to illustrate the embodiments of the present invention and the technical solutions in the prior art more clearly, the invention is further described below with reference to the accompanying drawings and embodiments. The drawings described below show only some embodiments of the invention; those skilled in the art can obtain other drawings from them without inventive effort:
fig. 1 is a flowchart of the steps of the MR-based focus holographic navigation method according to embodiment 1 of the present invention;
fig. 2 is a block diagram of the MR-based focus holographic navigation system according to embodiment 1 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the following will clearly and completely describe the technical solutions in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without inventive step, are within the scope of the present invention.
In embodiment 1, as shown in figs. 1 and 2, a camera acquires images in real time, image regions with strong environmental spatial features are selected as anchor points, and image recognition and tracking are used to track and locate the anchor points, solving the PnP (perspective-n-point) pose problem and realizing world coordinate mapping of the spatial feature points. Object feature points are matched to 3D model surface feature points by image recognition, and registration of the 3D model with the real entity is realized by applying a topological transformation to the 3D model based on the mapping relation between the matched points. For a 3D model with only internal structure, nearby surface points are selected for approximation, or real surface points are calibrated manually, and an indirect mapping transformation is realized. Finally, dynamic registration of the 3D model with the real entity is achieved. For data fusion between CT and MR, similarly, one image's feature identification pair is selected as the reference and the other data set is transformed according to the mapping relation between the matched points, so that the CT and MR data are fused automatically.
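The registration from matched feature-point pairs can be illustrated with a rigid least-squares fit (the Kabsch method); the patent speaks more generally of a topological transformation, so the rigid fit below is a simplifying assumption.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch): find R, t minimizing
    ||R @ src_i + t - dst_i|| over matched (N, 3) point sets."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, sign]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Matched 3D-model feature points vs. detected real feature points.
model_pts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
true_R = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 1]])  # 90 deg about z
real_pts = model_pts @ true_R.T + np.array([5.0, 2.0, 0.0])
R, t = rigid_fit(model_pts, real_pts)
```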
Manual contour extraction is performed on the loaded DICOM data to ensure the accuracy of the reconstructed model, and multiple layers of data can be cut simultaneously to reduce the workload. The working data of every cut is saved, which makes it convenient to perform contour extraction on the same data several times, avoids data loss from an unexpected interruption of work, and improves working efficiency and disaster recovery capability.
The world coordinate system, also called the real-world coordinate system, gives the absolute coordinates of the objective world; a typical 3D scene is expressed in this coordinate system. The camera coordinate system is centered on the camera, with the camera's optical axis as the z-axis. The imaging plane coordinate system is the image plane coordinate system formed inside the camera, i.e. the uv coordinate system of the image; the imaging plane is parallel to the xy-plane of the camera coordinate system, so that the origin of the imaging plane lies on the optical axis (z-axis) of the camera.
The image coordinate system is the coordinate system of the digital image inside the computer, i.e. the rc coordinate system. It lies in the same plane as the imaging plane coordinate system and comprises the image physical coordinate system (in units such as millimeters) and the image pixel coordinate system (in pixels).
A point in the world coordinate system must first be converted into the camera coordinate system before it can be projected onto the imaging plane. The transformation from the world coordinate system to the camera coordinate system consists of a translation and a rotation.
Writing $P_W$ for the point in the world coordinate system and $P_C$ for the same point in the camera coordinate system, $P_C = R \cdot P_W + T$, where $T$ is the translation vector that carries the origin of the world coordinate system to the origin of the camera coordinate system, and $R$ is the rotation matrix. The six parameters of $R$ and $T$ are the extrinsic parameters of the camera.
Next, the three-dimensional point is projected from the camera coordinate system onto the imaging plane coordinate system. The projection is a perspective projection:

$$u = f \, \frac{x}{z}, \qquad v = f \, \frac{y}{z}$$

where $f$ is the focal length.
after the image is projected to an imaging plane, u and v are changed due to the distortion of a lens, correction is needed, three-dimensional information is not needed at the moment, and distortion parameter vectors are formed, wherein the first three are radial and the second two are tangential.
Finally, the corrected $(u, v)$ is transformed from the imaging plane into the image coordinate system, with coordinates $(r, c)$:

$$c = \frac{u}{s_x} + c_x, \qquad r = \frac{v}{s_y} + c_y$$

where $(c_x, c_y)$ is the image principal point and $s_x$ and $s_y$ are the distances between adjacent pixels in the horizontal and vertical directions of the image sensor.
To summarize: calibrating the six intrinsic parameters determines how the camera maps a three-dimensional spatial point to a two-dimensional image point, and calibrating the six extrinsic parameters determines the relative pose of the camera coordinate system with respect to the world coordinate system.
Projector calibration treats the projector as an inverse camera. The key step is obtaining the homography matrix H, but the projector itself cannot capture images, so it must be modeled with the help of a camera. A homography matrix is first obtained from points whose physical and pixel coordinates are known; the physical coordinates corresponding to the projected points in the image are then found through this homography; finally, calibration with the physical and pixel coordinates of a standard chessboard projection image yields the intrinsic and extrinsic parameters.
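Estimating H from known correspondences can be done with the direct linear transform (DLT); the sketch below is a generic DLT under the assumption of at least four non-degenerate point pairs, not a procedure taken from the patent.

```python
import numpy as np

def find_homography(src, dst):
    """DLT estimate of H with dst ~ H @ src in homogeneous coordinates,
    given N >= 4 correspondences as (N, 2) arrays."""
    rows = []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, x * xp, y * xp, xp])
        rows.append([0, 0, 0, -x, -y, -1, x * yp, y * yp, yp])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)   # null-space vector of the stacked system
    return H / H[2, 2]

# Four chessboard corners and their (scaled, shifted) projections.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dst = src * 2.0 + np.array([10.0, 20.0])
H = find_homography(src, dst)
```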
Mixed reality navigation technology, also known as frameless stereotactic technology, adopts an interactive image-guided approach that combines computer image processing and visualization with the clinical operation. The basic principle of the surgical navigation system is to process the imaging data obtained before and during the operation into a holographic three-dimensional visual image. A characteristic part is selected on the image whose counterpart on the real patient is identical in size, 3D structure and other details; when the computer scans the operation scene and establishes a world coordinate system unifying the virtual and real scenes, the target characteristic part defined before the operation can be found in the scene.
The real characteristic part is determined and its parameters in the established world coordinate system are calculated. During the operation, under the guidance of the navigation system, the coordinate parameters of the virtual image's characteristic part are modified to agree with those of the real characteristic part, realizing virtual-real fusion: observing the perspective virtual image is then equivalent to observing the real situation that cannot be seen directly. The surgical navigation process is a matching of two coordinate spaces; the better the two coordinate spaces match, the higher the navigation accuracy.
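The quality of that match is commonly summarized as a root-mean-square residual over the registered feature points; the patent does not name a metric, so the following is an assumed illustration.

```python
import numpy as np

def registration_rmse(R, t, virtual_pts, real_pts):
    """Root-mean-square residual between transformed virtual feature
    points and their real counterparts; lower means a tighter match."""
    residuals = virtual_pts @ R.T + t - real_pts
    return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))
```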
As shown in fig. 2, the system comprises a cloud server 1, an image workstation 2, a camera 3, a projector 4 and a head-mounted device 5. The cloud server 1 is connected to the image workstation 2 by wire, the camera 3 is connected to the image workstation 2 by cable, and the projector 4 and the head-mounted device 5 are both connected to the image workstation 2 wirelessly.
A small server forms a wireless local area network linking the image workstation (i.e. the tablet computer and the mixed reality hardware). The tablet controls the pose of the model and the visible state (shown or hidden) of its components, while the program tracks the relative position of the user and the model in real time, so that the control is intuitive and matches the observer's expectations, i.e. the observer's coordinate is kept on the pitch rotation surface of the model. The model data file uses a high-compression algorithm to reduce its size, which facilitates wireless transmission and improves transmission efficiency. Model files and control commands are transmitted over HTTP to ensure the transfer is correct, reliable, safe and effective.
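A minimal sketch of the compress-then-transfer step described here, assuming gzip stands in for the unnamed high-compression algorithm and using Python's standard-library HTTP client; the endpoint URL is hypothetical.

```python
import gzip
import urllib.request

def send_model(path, url="http://192.168.0.10:8080/model"):  # hypothetical endpoint
    """Gzip a model data file and POST it to the image workstation."""
    with open(path, "rb") as f:
        payload = gzip.compress(f.read())   # shrink for wireless transfer
    req = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/octet-stream",
                 "Content-Encoding": "gzip"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status   # 200 on success
```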
Claims (6)
1. An MR-based focus holographic navigation method, characterized in that the method comprises the following steps:
step S100, collecting DICOM original data;
step S200, importing CT and MRI data;
step S300, processing the data;
step S400, modeling a three-dimensional grid based on the data processed in step S300;
step S500, establishing a world coordinate system unifying the virtual and actual operation scene;
step S600, uniformly fusing virtual and real characteristic parts in the three-dimensional grid;
step S700, operating the surgical instruments and dynamically matching their tracked trajectories;
and step S800, performing holographic motion visual navigation.
2. The MR-based focus holographic navigation method according to claim 1, wherein step S500 includes a coordinate conversion procedure:
step S501, taking the position of a point in the world coordinate system;
step S502, converting it to the position of that point in the camera coordinate system;
step S503, converting that to the position of the point in the imaging plane coordinate system;
step S504, converting that to the position of the point in the image coordinate system.
3. The MR-based focus holographic navigation method according to claim 2, wherein the formula for converting the world coordinate system into the camera coordinate system is:

$$P_C = R \cdot P_W + T$$

where $P_W$ is a point in the world coordinate system, $P_C$ is the corresponding point in the camera coordinate system, $T = (T_x, T_y, T_z)$ is the translation vector, and $R(\alpha, \beta, \gamma)$ is the rotation matrix.
5. The MR-based focus holographic navigation method according to claim 2, wherein the formula for converting the imaging plane coordinate system into the image coordinate system is:

$$c = \frac{u}{s_x} + c_x, \qquad r = \frac{v}{s_y} + c_y$$

where $s_x$ and $s_y$ are the distances between adjacent pixels in the horizontal and vertical directions of the image sensor, and the corrected $(u, v)$ is converted from the imaging plane into the image coordinate system with coordinates $(r, c)$.
6. An MR-based focus holographic navigation system, characterized in that the system comprises a cloud server (1), an image workstation (2), a camera (3), a projector (4) and a head-mounted device (5), wherein the cloud server (1) is connected to the image workstation (2) by wire, the camera (3) is connected to the image workstation (2) by cable, and the projector (4) and the head-mounted device (5) are both connected to the image workstation (2) wirelessly.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910171406.9A CN111658142A (en) | 2019-03-07 | 2019-03-07 | MR-based focus holographic navigation method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111658142A (en) | 2020-09-15
Family
ID=72381939
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910171406.9A (published as CN111658142A, Pending) | MR-based focus holographic navigation method and system | 2019-03-07 | 2019-03-07
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111658142A (en) |
Application Events
- 2019-03-07: application CN201910171406.9A filed in China; published as CN111658142A, status Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106846496A (en) * | 2017-01-19 | 2017-06-13 | 杭州古珀医疗科技有限公司 | DICOM images based on mixed reality technology check system and operating method |
CN108294814A (en) * | 2018-04-13 | 2018-07-20 | 首都医科大学宣武医院 | Intracranial puncture positioning method based on mixed reality |
CN109034748A (en) * | 2018-08-09 | 2018-12-18 | 哈尔滨工业大学 | The building method of mold attaching/detaching engineering training system based on AR technology |
CN109410680A (en) * | 2018-11-19 | 2019-03-01 | 叶哲伟 | A kind of virtual operation training method and system based on mixed reality |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112348851A (en) * | 2020-11-04 | 2021-02-09 | 无锡蓝软智能医疗科技有限公司 | Moving target tracking system and mixed reality operation auxiliary system |
CN112348851B (en) * | 2020-11-04 | 2021-11-12 | 无锡蓝软智能医疗科技有限公司 | Moving target tracking system and mixed reality operation auxiliary system |
WO2023141800A1 (en) * | 2022-01-26 | 2023-08-03 | Warsaw Orthopedic, Inc. | Mobile x-ray positioning system |
WO2023246521A1 (en) * | 2022-06-20 | 2023-12-28 | 上海市胸科医院 | Method, apparatus and electronic device for lesion localization based on mixed reality |
CN115105207A (en) * | 2022-06-28 | 2022-09-27 | 北京触幻科技有限公司 | Surgical holographic navigation method and system based on mixed reality |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |