
CN114681057A - Spatial registration method and device and neurosurgical navigation system - Google Patents


Info

Publication number
CN114681057A
CN114681057A (application CN202011642180.5A)
Authority
CN
China
Prior art keywords
spatial
point cloud
patient
target part
scanning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011642180.5A
Other languages
Chinese (zh)
Inventor
旷雅唯
刘文博
李赞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sinovation Beijing Medical Technology Co ltd
Original Assignee
Sinovation Beijing Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sinovation Beijing Medical Technology Co ltd filed Critical Sinovation Beijing Medical Technology Co ltd
Priority to CN202011642180.5A
Publication of CN114681057A
Legal status: Pending

Classifications

    • A61B 34/00: Computer-aided surgery; manipulators or robots specially adapted for use in surgery (A: Human Necessities / A61: Medical or Veterinary Science / A61B: Diagnosis; Surgery; Identification)
    • A61B 34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 2034/101: Computer-aided simulation of surgical operations
    • A61B 2034/105: Modelling of the patient, e.g. for ligaments or bones
    • A61B 2034/107: Visualisation of planned trajectories or target regions
    • A61B 2034/108: Computer-aided selection or customisation of medical implants or cutting guides
    • A61B 2034/2046: Tracking techniques
    • A61B 2034/2055: Optical tracking systems
    • A61B 2034/2065: Tracking using image or pattern recognition

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Robotics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Embodiments of the present invention disclose a spatial registration method, a spatial registration device, and a neurosurgical navigation system. In the method, medical image data of a target part of a patient are received, a three-dimensional model of the target part is established, and a medical image point cloud of the model is extracted; spatial information data containing at least the target part are obtained through a handheld scanning device, and a fused point cloud of the target part is generated from the spatial information data; the medical image point cloud and the fused point cloud are then registered to obtain a coordinate transformation relationship. Because the fused point cloud of the target part is generated from spatial information data obtained by a handheld scanning device, registration can be performed directly from the medical image point cloud and the fused point cloud, with no screws driven into the patient's skull, no markers attached to the face, and no marker-selection step, which greatly increases registration speed, saves operation time, and meets clinical needs.

Description

Spatial registration method and device and neurosurgical navigation system
Technical Field
The invention relates to the technical field of medical equipment, in particular to a spatial registration method and device and a neurosurgical navigation system.
Background
A neuronavigation system helps the surgeon determine the position and boundary of a lesion. To use it, a three-dimensional model must first be built from the patient's preoperative medical image data to obtain a virtual model, and the surgical site is then spatially registered to the virtual model so that the exact position of the lesion can be determined.
In the prior art, spatial registration uses a surface registration method: a laser scanner acquires a partial point cloud of the patient's face in patient space, a surface point cloud representing some features of the face (such as the eyes and nose) is extracted in image space, and the two point clouds are matched iteratively. Clinically, however, the scanner is fixed on a tripod and must itself be tracked, which limits its scanning range: the scanner can extract only facial information, registration accuracy for parts outside the face is low, and clinical requirements cannot be met.
Disclosure of Invention
The invention provides a spatial registration method, a spatial registration device, and a neurosurgical navigation system, which enable rapid registration and meet clinical requirements. The specific technical scheme is as follows.
In a first aspect, the present invention provides a spatial registration method, including:
receiving medical image data containing a target part of a patient, establishing a three-dimensional model of the target part of the patient, and extracting a medical image point cloud of the three-dimensional model;
obtaining spatial information data at least containing the target part of the patient through a handheld scanning device, and generating a fusion point cloud of the target part of the patient according to the spatial information data;
and registering the medical image point cloud and the fused point cloud to obtain a coordinate transformation relation.
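The claims do not fix a particular registration algorithm. As an illustrative sketch only (not the patented method), the rigid-alignment step that iterative-closest-point (ICP) registration repeats can be computed in closed form with the Kabsch/SVD method, assuming point correspondences between the two clouds are known:

```python
import numpy as np

def rigid_align(src, dst):
    """Closed-form rotation R and translation t such that R @ p + t maps
    src onto dst (Kabsch/SVD); correspondences are assumed known."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# toy check: recover a known rigid motion from 50 matched points
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))   # True True
```

In practice ICP alternates this closed-form step with re-estimating the correspondences, since the medical image point cloud and the fused point cloud are not matched in advance.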
Optionally, the spatial information data obtained by the handheld scanning device includes scanning point clouds obtained by the handheld scanning device at different positions.
Optionally, the handheld scanning device has a spatial positioning structure, and the step of generating the fused point cloud of the target portion of the patient according to the spatial information data includes:
and converting the spatial information data into the spatial coordinate system of the optical tracking equipment according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking equipment to obtain converted spatial information data, and generating a fusion point cloud of the target part of the patient according to the converted spatial information data.
Optionally, the step of converting the spatial information data into the spatial coordinate system of the optical tracking device according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device to obtain the converted spatial information data, and generating the fusion point cloud of the target portion of the patient according to the converted spatial information data includes:
and respectively converting scanning point clouds obtained by the handheld scanning device at different positions into the space coordinate system of the optical tracking equipment according to the space position information of the space positioning structure in the space coordinate system of the optical tracking equipment to obtain a plurality of converted scanning point clouds, and generating a fused point cloud of the target part of the patient according to the plurality of converted scanning point clouds.
Optionally, the step of respectively converting the scanning point clouds obtained by the handheld scanning device at different positions into the spatial coordinate system of the optical tracking device according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device to obtain a plurality of converted scanning point clouds, and generating the fused point cloud of the target portion of the patient according to the plurality of converted scanning point clouds includes:
for each position at which the handheld scanning device obtains a scanning point cloud, determining a first conversion matrix from the handheld scanning device to the spatial positioning structure according to the relative position relationship, at that position, between the handheld scanning device and the spatial positioning structure connected to it;
determining a second conversion matrix from the spatial positioning structure to the optical tracking device according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device at that position;
converting the scanning point cloud obtained by the handheld scanning device at that position into the spatial coordinate system of the optical tracking device according to the first conversion matrix and the second conversion matrix, to obtain the converted scanning point cloud for that position;
and fusing the scanning point clouds converted from different positions to generate a fused point cloud of the target part of the patient.
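The two-step conversion above can be sketched with 4x4 homogeneous matrices; the transform names and numbers below are hypothetical (pure translations, chosen for readability), intended only to show how the first conversion matrix (scanner to positioning structure) and second conversion matrix (positioning structure to tracker) compose:

```python
import numpy as np

def hom(R, t):
    """Pack rotation R (3x3) and translation t (3,) into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def scan_to_tracker(points, T_scan_to_locator, T_locator_to_tracker):
    """Convert an Nx3 scan point cloud into the optical tracking device's
    coordinate system by composing the two conversion matrices."""
    pts_h = np.c_[points, np.ones(len(points))]        # homogeneous Nx4
    T = T_locator_to_tracker @ T_scan_to_locator       # compose once
    return (pts_h @ T.T)[:, :3]

# hypothetical calibration and pose, pure translations for readability
T1 = hom(np.eye(3), [0.0, 0.0, 0.1])   # scanner -> spatial positioning structure
T2 = hom(np.eye(3), [1.0, 2.0, 0.0])   # positioning structure -> optical tracker
scan = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
tracker_pts = scan_to_tracker(scan, T1, T2)
print(tracker_pts)   # first point lands at (1.0, 2.0, 0.1) in the tracker frame
```

Composing once per pose and applying the product to the whole cloud is cheaper than converting each point through both frames separately.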
Optionally, the spatial information data further includes spatial position information of a position indicating device in a scanning point cloud coordinate system obtained by the handheld scanning device at each position, and the step of generating a fused point cloud of the target portion of the patient according to the spatial information data includes:
for each position at which the handheld scanning device obtains spatial information data, determining a third conversion matrix from the handheld scanning device to the position indicating device according to the spatial position information of the position indicating device in the scanning point cloud coordinate system obtained by the handheld scanning device at that position;
determining a fourth conversion matrix of the position indicating device to the optical tracking device according to the spatial position information of the position indicating device in the spatial coordinate system of the optical tracking device at the position;
converting the scanning point cloud obtained by the handheld scanning device at the position into the space coordinate system of the optical tracking equipment according to the third conversion matrix and the fourth conversion matrix to obtain the scanning point cloud after the position conversion;
and fusing the scanning point clouds converted from different positions to generate a fused point cloud of the target part of the patient.
Optionally, at least two scanning point clouds obtained at different positions have overlapping portions, and the step of generating the fused point cloud of the target part of the patient according to the spatial information data includes:
and splicing the scanning point clouds with the overlapped parts according to the overlapped parts to generate a fused point cloud of the target part of the patient.
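A hedged sketch of overlap-based splicing: it assumes the corresponding points inside the overlap are already known (a real system would estimate them, e.g. with ICP), aligns the second scan to the first through that overlap, and drops the duplicated region. All names and data are illustrative:

```python
import numpy as np

def stitch(scan_a, scan_b, overlap_a, overlap_b, tol=1e-3):
    """Splice scan_b onto scan_a using corresponding points in their overlap.
    Correspondences are assumed known here; in practice they would be
    estimated (e.g. by ICP) before this step."""
    # rigid transform mapping overlap_b onto overlap_a (Kabsch/SVD)
    ca, cb = overlap_a.mean(axis=0), overlap_b.mean(axis=0)
    U, _, Vt = np.linalg.svd((overlap_b - cb).T @ (overlap_a - ca))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ca - R @ cb
    b_in_a = scan_b @ R.T + t
    # keep only the points of scan_b that do not duplicate scan_a
    keep = [p for p in b_in_a
            if np.min(np.linalg.norm(scan_a - p, axis=1)) > tol]
    return np.vstack([scan_a, np.array(keep)]) if keep else scan_a

# toy data: scan_b was taken in a frame shifted by 10 along x
scan_a = np.array([[0,0,0],[1,0,0],[2,0,0],[3,0,0],[4,0,0],[3,1,0]], float)
overlap_a = np.array([[3,0,0],[4,0,0],[3,1,0]], float)
shift = np.array([10.0, 0.0, 0.0])
scan_b = np.vstack([overlap_a, [[5,0,0],[6,0,0]]]) + shift
stitched = stitch(scan_a, scan_b, overlap_a, overlap_a + shift)
print(stitched.shape)   # (8, 3): 6 original points plus 2 new ones
```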
Optionally, the spatial information data further includes spatial position information of a position indicating device in a coordinate system of the handheld scanning device, and the step of generating the fused point cloud of the target portion of the patient according to the spatial information data includes:
and respectively converting scanning point clouds obtained by the handheld scanning device at different positions into a position indicating device coordinate system with the position indicating device as an origin according to the spatial position information of the position indicating device in the handheld scanning device coordinate system, and generating a fused point cloud of the target part of the patient.
Optionally, the step of registering the medical image point cloud and the fused point cloud to obtain a coordinate transformation relationship includes:
and registering the medical image point cloud and the fused point cloud using point-to-point registration and point-to-surface registration, to obtain a coordinate transformation relationship.
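The patent names the two modes without defining them; in common ICP usage they correspond to two error metrics. Point-to-point penalises the full distance between matched points, while point-to-plane (point-to-surface) penalises only the distance along the target surface normal, so sliding along the surface is free. A small sketch with hypothetical values:

```python
import numpy as np

def point_to_point_error(src, dst):
    """Squared distances between matched points, summed."""
    return float(np.sum((src - dst) ** 2))

def point_to_plane_error(src, dst, dst_normals):
    """Squared distances along the destination surface normals, summed;
    sliding tangentially along the surface is not penalised."""
    residuals = np.einsum('ij,ij->i', src - dst, dst_normals)
    return float(np.sum(residuals ** 2))

# hypothetical pair: a source point slid along a flat surface (normal +z)
src = np.array([[0.5, 0.0, 0.1]])   # 0.5 along the surface, 0.1 above it
dst = np.array([[0.0, 0.0, 0.0]])
n   = np.array([[0.0, 0.0, 1.0]])
print(point_to_point_error(src, dst))    # ~0.26: penalises the slide too
print(point_to_plane_error(src, dst, n)) # ~0.01: only off-surface distance
```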
In a second aspect, the present invention provides a spatial registration apparatus, comprising:
the medical image point cloud extraction module is used for receiving medical image data containing a target part of a patient, establishing a three-dimensional model of the target part of the patient and extracting a medical image point cloud of the three-dimensional model;
the fusion point cloud generating module is used for acquiring spatial information data at least comprising the target part of the patient through a handheld scanning device and generating fusion point cloud of the target part of the patient according to the spatial information data;
and the registration module is used for registering the medical image point cloud and the fusion point cloud to obtain a coordinate transformation relation.
Optionally, the spatial information data obtained by the handheld scanning device includes scanning point clouds obtained by the handheld scanning device at different positions.
Optionally, the handheld scanning device has a spatial positioning structure, and the fused point cloud generating module is specifically configured to:
and converting the spatial information data into the spatial coordinate system of the optical tracking equipment according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking equipment to obtain converted spatial information data, and generating a fusion point cloud of the target part of the patient according to the converted spatial information data.
Optionally, the fused point cloud generating module is specifically configured to:
and respectively converting scanning point clouds obtained by the handheld scanning device at different positions into the space coordinate system of the optical tracking equipment according to the space position information of the space positioning structure in the space coordinate system of the optical tracking equipment to obtain a plurality of converted scanning point clouds, and generating a fused point cloud of the target part of the patient according to the plurality of converted scanning point clouds.
Optionally, the fused point cloud generating module includes:
the first conversion matrix determining unit is used for, at each position where the handheld scanning device obtains a scanning point cloud, determining a first conversion matrix from the handheld scanning device to the spatial positioning structure according to the relative position relationship, at that position, between the handheld scanning device and the spatial positioning structure connected to it;
a second transformation matrix determining unit, configured to determine a second transformation matrix from the spatial localization structure to the optical tracking device according to spatial location information of the spatial localization structure in a spatial coordinate system of the optical tracking device at the location;
the first conversion unit is used for converting the scanning point cloud obtained by the handheld scanning device at the position into the space coordinate system of the optical tracking equipment according to the first conversion matrix and the second conversion matrix to obtain the scanning point cloud after the position conversion;
and the first generation unit is used for fusing the scanning point clouds converted from different positions to generate a fused point cloud of the target part of the patient.
Optionally, the spatial information data further includes spatial position information of a position indicating device in a scanning point cloud coordinate system obtained by the handheld scanning device at each position, and the fused point cloud generating module includes:
a third conversion matrix determining unit, configured to determine, for each position where the handheld scanning device obtains spatial information data, a third conversion matrix from the handheld scanning device to the position indicating device according to spatial position information of the position indicating device in a scanning point cloud coordinate system obtained by the handheld scanning device at the position;
a fourth conversion matrix determination unit, configured to determine a fourth conversion matrix from the position indication apparatus to the optical tracking apparatus according to the spatial position information of the position indication apparatus in the spatial coordinate system of the optical tracking apparatus at the position;
the second conversion unit is used for converting the scanning point cloud obtained by the handheld scanning device at the position into the space coordinate system of the optical tracking equipment according to the third conversion matrix and the fourth conversion matrix to obtain the scanning point cloud after the position conversion;
and the second generation unit is used for fusing the scanning point clouds converted from different positions to generate a fused point cloud of the target part of the patient.
Optionally, at least two scanning point clouds obtained at different positions have an overlapping portion therebetween, and the fused point cloud generating module is specifically configured to:
and splicing the scanning point clouds with the overlapped parts according to the overlapped parts to generate a fused point cloud of the target part of the patient.
Optionally, the spatial information data further includes spatial position information of a position indicating device in a coordinate system of the handheld scanning device, and the fused point cloud generating module is specifically configured to:
and respectively converting scanning point clouds obtained by the handheld scanning device at different positions into a position indicating device coordinate system taking the position indicating device as an origin according to the spatial position information of the position indicating device in the handheld scanning device coordinate system, and generating a fused point cloud of the target part of the patient.
Optionally, the registration module is specifically configured to:
and registering the medical image point cloud and the fused point cloud using point-to-point registration and point-to-surface registration, to obtain a coordinate transformation relationship.
In a third aspect, the invention provides a neurosurgical navigation system comprising a host computer and a handheld scanning device, the handheld scanning device being communicatively connected to the host computer;
the handheld scanning device collects spatial information data at least containing the target part of the patient and sends the spatial information data to the host;
the host computer performs the spatial registration method of the first aspect described above.
Optionally, the neurosurgical navigation system further comprises an optical tracking device, and the optical tracking device is in communication connection with the host;
the optical tracking device tracks the spatial location information of the spatial positioning structure or the position indicating device and sends the spatial location information to the host.
As can be seen from the above, the embodiment of the present invention can receive medical image data of a target portion of a patient, establish a three-dimensional model of the target portion of the patient, extract a medical image point cloud of the three-dimensional model, obtain spatial information data at least including the target portion of the patient through a handheld scanning device, generate a fusion point cloud of the target portion of the patient according to the spatial information data, and register the medical image point cloud and the fusion point cloud to obtain a coordinate transformation relationship. According to the embodiment of the invention, the fusion point cloud of the target part of the patient can be generated according to the spatial information data at least containing the target part of the patient, which is obtained by the handheld scanning device, and then the registration can be carried out according to the medical image point cloud and the fusion point cloud without driving screws on the skull of the patient or pasting a plurality of markers on the face or selecting the markers, so that the registration speed is improved, the operation time is saved, and the clinical requirements are met. Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
The innovation points of the embodiment of the invention comprise:
1. the embodiment can receive medical image data of a target part of a patient, establish a three-dimensional model of the target part of the patient, extract medical image point clouds of the three-dimensional model, obtain spatial information data at least containing the target part of the patient through a handheld scanning device, generate fusion point clouds of the target part of the patient according to the spatial information data, and register the medical image point clouds and the fusion point clouds to obtain a coordinate transformation relation. According to the embodiment of the invention, the fusion point cloud of the target part of the patient can be generated according to the spatial information data at least containing the target part of the patient, which is obtained by the handheld scanning device, and then the registration can be carried out according to the medical image point cloud and the fusion point cloud without driving screws on the skull of the patient or pasting a plurality of markers on the face or selecting the markers, so that the registration speed is improved, the operation time is saved, and the clinical requirements are met.
2. In the two coordinate-system conversions, only the relative position relationship between the handheld scanning device and the position indicating device, or between the handheld scanning device and the spatial positioning structure, is used; the position indicating device and the spatial positioning structure serve merely as intermediaries for scanning the target part of the patient to obtain scanning point clouds. Whether identification points are placed on the position indicating device or the spatial positioning structure, and their number and positions, are not limited, which simplifies the calculation process and reduces the amount of calculation.
3. Compared with the laser point-cloud registration approach, which can only acquire point-cloud information of the patient's face, embodiments of the present invention can acquire point-cloud information of the face as well as of other parts of the patient's skull for registration, which improves registration accuracy. In addition, the number of points acquired by the handheld scanning device is 10 to 200 times the amount of point data acquired by the laser pen in laser point-cloud registration, further improving registration accuracy.
4. The handheld scanning device in the neurosurgical navigation system provided by the embodiment of the invention collects spatial information data at least containing a target part of a patient and sends the spatial information data to the host, the host can receive medical image data of the target part of the patient, establish a three-dimensional model of the target part of the patient, extract medical image point cloud of the three-dimensional model, receive spatial information data at least containing the target part of the patient, generate fusion point cloud of the target part of the patient according to the spatial information data, and register the medical image point cloud and the fusion point cloud to obtain a coordinate transformation relation. The neurosurgical navigation system provided by the embodiment of the invention can perform registration according to the medical image point cloud and the fusion point cloud, does not need to drive screws on the skull of a patient or paste a plurality of markers on the face, and also does not need to select the markers, thereby improving the registration speed, saving the operation time and meeting the clinical requirements.
5. Because the plurality of different positions in the embodiment of the invention at least comprise the face and the non-face part of the patient, the embodiment of the invention adopts the information of the non-face part during the registration, improves the registration precision of the non-face part and meets the clinical requirement.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the prior art descriptions will be briefly described below. It is to be understood that the drawings in the following description are merely exemplary of some embodiments of the invention. For a person skilled in the art, without inventive effort, further figures can be obtained from these figures.
Fig. 1 is a schematic flow chart of a spatial registration method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a spatial positioning structure attached to a handheld scanning device;
FIG. 3 is a schematic view of a first structure of a position indicating device used in combination with a handheld scanning device;
FIG. 4 is a schematic diagram of a second structure of the position indicating device used in combination with the handheld scanning device;
fig. 5 is a schematic structural diagram of a spatial registration apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a neurosurgical navigation system according to an embodiment of the present invention.
In fig. 1-6, 1 a spatial positioning structure, 2 a handheld scanning device, 3 an optical tracking apparatus, 4 a position indicating device, 10 a host computer.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. A process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a spatial registration method, a spatial registration device and a neurosurgical navigation system, which can provide rapid and high-precision registration and meet clinical requirements. The following provides a detailed description of embodiments of the invention.
Fig. 1 is a schematic flowchart of a spatial registration method according to an embodiment of the present invention. The method is applied to the electronic equipment. The method specifically comprises the following steps.
S110: receiving medical image data containing a target part of a patient, establishing a three-dimensional model of the target part of the patient, and extracting a medical image point cloud of the three-dimensional model.
To perform spatial registration, a medical imaging device is needed to image the target part of the patient's head and obtain the corresponding medical image data. The electronic device is communicatively connected to the medical imaging device, so the medical imaging device can transmit the medical image data to the electronic device; the electronic device receives the medical image data containing the target part of the patient's head and establishes a three-dimensional model of that target part from the received data.
For example, the medical imaging modality adopted by the medical imaging device may be MRI (Magnetic Resonance Imaging), CT (Computed Tomography), X-ray imaging, emission computed tomography, or ultrasound scanning.
The medical image data received by the electronic device may be of one or more types; when more than one type is received, the electronic device may fuse the different types of medical image data to obtain the three-dimensional model of the target part of the patient.
Illustratively, when the target site of the patient is the head, the three-dimensional model created may be an overall three-dimensional model of the head of the patient, including head surface structures as well as structures within the head.
In some embodiments, extracting the medical image point cloud of the three-dimensional model specifically includes: the skin part on the three-dimensional model is extracted through an image segmentation technology, and then the skin part is converted into a medical image point cloud.
In some embodiments, extracting the medical image point cloud of the three-dimensional model specifically includes: establishing a circumscribed sphere of the three-dimensional model, uniformly sampling the spherical surface of the circumscribed sphere to obtain spherical points, computing, for each spherical point, the first surface point at which the line connecting that spherical point and the sphere center intersects the three-dimensional model, and extracting these first surface points as the medical image point cloud.
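The sphere-sampling extraction above can be illustrated with a small numerical sketch. This is not the patent's implementation: the uniform sampling here uses a Fibonacci lattice, and the three-dimensional model is replaced by an analytic ellipsoid stand-in so that the first intersection point along each line toward the sphere center has a closed form; all function names and parameters are hypothetical.

```python
import numpy as np

def fibonacci_sphere(n, center, radius):
    """Approximately uniform samples on a sphere via a Fibonacci lattice."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i          # golden-angle increment
    z = 1.0 - 2.0 * (i + 0.5) / n                   # uniform in z
    r = np.sqrt(1.0 - z * z)
    pts = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
    return center + radius * pts

def first_surface_points(sphere_pts, center, model_radii):
    """Intersect the ray from each sphere point toward the sphere center with
    a stand-in model surface (an ellipsoid with semi-axes model_radii centered
    at `center`); return the first intersection point per ray."""
    out = []
    for p in sphere_pts:
        d = (center - p) / np.linalg.norm(center - p)  # ray direction toward center
        # solve |((p + t*d) - center) / radii|^2 = 1 for the smallest t >= 0
        q = (p - center) / model_radii
        v = d / model_radii
        a, b, c = v @ v, 2.0 * (q @ v), q @ q - 1.0
        disc = b * b - 4.0 * a * c
        if disc < 0:                                   # ray misses the model
            continue
        t = (-b - np.sqrt(disc)) / (2.0 * a)           # nearer root = first hit
        out.append(p + t * d)
    return np.array(out)

center = np.array([0.0, 0.0, 0.0])
sphere_pts = fibonacci_sphere(500, center, radius=200.0)  # circumscribed sphere
radii = np.array([80.0, 100.0, 90.0])                     # ellipsoid stand-in for the head model
cloud = first_surface_points(sphere_pts, center, radii)
```

With a triangle-mesh head model, the closed-form ellipsoid intersection would be replaced by a ray-mesh intersection query, but the sampling and "first hit" logic stay the same.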
S120: spatial information data at least containing a target part of a patient are obtained through a handheld scanning device, and fusion point cloud of the target part of the patient is generated according to the spatial information data.
In order to perform the spatial registration, the handheld scanning device is required to scan the target part of the patient to obtain spatial information data at least containing the target part, and the obtained spatial information data is then sent to the electronic device; that is, the electronic device obtains, through the handheld scanning device, spatial information data at least containing the target part of the patient.
The spatial information data obtained by the handheld scanning device comprises scanning point clouds obtained by the handheld scanning device at different positions.
For example, when the target portion of the patient is the head, the spatial information data obtained by the handheld scanning device includes scanning point clouds of different portions of the head of the patient.
The electronic device receives the spatial information data at least containing the target part of the patient and then generates a fused point cloud of the target part according to the spatial information data. In addition to geometric positions, the point cloud information may also include color information, which is generally obtained by a camera: the color of the pixel at the corresponding position is assigned to the corresponding point in the point cloud.
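The pixel-to-point color assignment can be sketched as follows. This is a minimal illustration, not the device's actual pipeline: it assumes a pinhole camera with a known intrinsic matrix `K` and points already expressed in the camera frame; the function name and parameters are hypothetical.

```python
import numpy as np

def colorize_points(points_cam, image, K):
    """Assign each 3-D point (in camera coordinates) the RGB color of the
    pixel it projects to under a pinhole model with intrinsics K. Points
    projecting outside the image (or behind the camera) get no color."""
    uvw = (K @ points_cam.T).T                   # project to [u*w, v*w, w]
    uv = uvw[:, :2] / uvw[:, 2:3]                # perspective divide
    px = np.round(uv).astype(int)                # nearest-pixel lookup
    h, w = image.shape[:2]
    ok = ((px[:, 0] >= 0) & (px[:, 0] < w) &
          (px[:, 1] >= 0) & (px[:, 1] < h) &
          (points_cam[:, 2] > 0))
    colors = np.zeros((len(points_cam), 3), dtype=image.dtype)
    colors[ok] = image[px[ok, 1], px[ok, 0]]     # image indexed as [row=v, col=u]
    return colors, ok

# toy usage: one point on the optical axis picks up the principal-point pixel
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
img = np.zeros((480, 640, 3), dtype=np.uint8)
img[240, 320] = [10, 20, 30]
colors, ok = colorize_points(np.array([[0.0, 0.0, 1.0]]), img, K)
```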
For example, when the target part of the patient is the head, scanning the head at a plurality of different positions with the handheld scanning device may proceed as follows: an operator holds the handheld scanning device and scans different positions of the patient's head. To address the low accuracy of existing face-only registration at regions such as the back of the head, the different positions in the embodiment of the present invention at least include non-facial regions of the patient.
In one implementation, a fused point cloud of a target site of a patient may be generated with a handheld scanning device, a spatial localization structure, and an optical tracking apparatus. The handheld scanning device has a spatial positioning structure, and the above-mentioned generating of the fusion point cloud of the target part of the patient according to the spatial information data may include:
converting the spatial information data into the spatial coordinate system of the optical tracking device according to the spatial position information of the spatial positioning structure in that coordinate system, to obtain converted spatial information data, and generating the fused point cloud of the target part of the patient according to the converted spatial information data.
Since the optical tracking device is fixed and does not move during the whole spatial registration process, its spatial coordinate system is a fixed coordinate system. Therefore, in the embodiment of the present invention, the spatial information data may be converted into the spatial coordinate system of the optical tracking device according to the spatial position information of the spatial positioning structure in that coordinate system, so that all spatial information data are unified into the same fixed coordinate system; the fused point cloud of the target part of the patient may then be generated from the converted spatial information data in this common coordinate system.
Because the spatial information data obtained by the handheld scanning device includes scanning point clouds obtained by the handheld scanning device at different positions, the above-mentioned converting the spatial information data into the spatial coordinate system of the optical tracking device according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device to obtain the converted spatial information data, and generating a fused point cloud of the target portion of the patient according to the converted spatial information data may include:
respectively converting scanning point clouds obtained by the handheld scanning device at different positions into the spatial coordinate system of the optical tracking device according to the spatial position information of the spatial positioning structure in that coordinate system, to obtain a plurality of converted scanning point clouds, and generating the fused point cloud of the target part of the patient according to the plurality of converted scanning point clouds.
That is to say, according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device, the scanning point clouds at all positions are converted into that coordinate system, yielding a plurality of converted scanning point clouds unified in the same fixed coordinate system; the fused point cloud of the target part of the patient can then be generated from these converted scanning point clouds.
Fig. 2 is a schematic structural diagram of a spatial positioning structure attached to a handheld scanning device. Referring to Fig. 2, the two straight lines emitted from the handheld scanning device 2 represent its scanning area, and the straight line between the optical tracking device 3 and the spatial positioning structure 1 indicates that the optical tracking device 3 can track the spatial positioning structure 1. The spatial positioning structure 1 is clamped on the upper portion of the handheld scanning device 2; an operator holds the handheld scanning device 2 to scan different positions of the patient's head while the optical tracking device 3 tracks the spatial positioning structure 1.
Specifically, the above-mentioned scanning point clouds obtained by the handheld scanning device at different positions are respectively converted into the spatial coordinate system of the optical tracking device according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device, so as to obtain a plurality of converted scanning point clouds, and a fused point cloud of the target part of the patient is generated according to the plurality of converted scanning point clouds, which may include:
for each position at which the handheld scanning device obtains a scanning point cloud, determining a first conversion matrix from the handheld scanning device to the spatial positioning structure according to the relative positional relationship, at that position, between the spatial positioning structure attached to the handheld scanning device and the handheld scanning device;

determining a second conversion matrix from the spatial positioning structure to the optical tracking device according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device at that position;

converting the scanning point cloud obtained by the handheld scanning device at that position into the spatial coordinate system of the optical tracking device according to the first conversion matrix and the second conversion matrix, to obtain the converted scanning point cloud for that position;

fusing the scanning point clouds converted from different positions to generate the fused point cloud of the target part of the patient.
The spatial positioning structure is attached to the handheld scanning device, with a fixed relative positional relationship between them. Therefore, for each position at which the handheld scanning device obtains a scanning point cloud, the first conversion matrix from the handheld scanning device to the spatial positioning structure can be determined according to this relative positional relationship at that position.
The handheld scanning device can instantly acquire a surface point cloud at a given position, and the optical tracking device can track the spatial positioning structure, so the second conversion matrix from the spatial positioning structure to the optical tracking device can be determined according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device at that position. Since the position of the spatial positioning structure in that coordinate system changes from position to position, the second conversion matrix corresponding to each position is different.
After the first conversion matrix and the second conversion matrix are obtained, the scanning point cloud obtained by the handheld scanning device at the position can be converted into the space coordinate system of the optical tracking device according to the first conversion matrix and the second conversion matrix, and the scanning point cloud after the position conversion is obtained.
For example, converting the scanning point cloud obtained by the handheld scanning device at the position into the spatial coordinate system of the optical tracking device according to the first conversion matrix and the second conversion matrix, and obtaining the scanning point cloud after the position conversion may include:
calculating the product of the first conversion matrix and the second conversion matrix as a target conversion matrix, and converting the scanning point cloud obtained by the handheld scanning device at that position into the spatial coordinate system of the optical tracking device according to the target conversion matrix, to obtain the converted scanning point cloud for that position.
After the scanning point clouds of all the converted positions are obtained, the scanning point clouds of different converted positions can be fused in a space coordinate system of the optical tracking equipment to generate a fused point cloud of the target part of the patient.
Therefore, when the handheld scanning device has a spatial positioning structure, for the scanning point clouds obtained at different positions, the scanning point cloud of each position is first converted into the coordinate system of the spatial positioning structure through the first conversion matrix, and then into the spatial coordinate system of the optical tracking device through the second conversion matrix. After these two coordinate-system conversions yield the converted scanning point cloud for each position, the converted scanning point clouds of the different positions are fused in the spatial coordinate system of the optical tracking device to generate the fused point cloud of the target part of the patient.
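The two-step conversion described above can be sketched with 4×4 homogeneous transforms. The patent does not fix a matrix convention; this sketch assumes column vectors, so the target matrix is the second conversion matrix times the first (the first applied first), and all names are hypothetical.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_tracker_frame(scan_points, T_scan_to_locator, T_locator_to_tracker):
    """Convert one position's scan point cloud into the optical tracker's
    coordinate system by composing the two conversion matrices into the
    'target' matrix and applying it to homogeneous points."""
    T_target = T_locator_to_tracker @ T_scan_to_locator   # product of the two matrices
    homo = np.hstack([scan_points, np.ones((len(scan_points), 1))])
    return (T_target @ homo.T).T[:, :3]

# toy usage: pure translations compose additively
T1 = make_transform(np.eye(3), np.array([0.0, 0.0, 1.0]))   # scanner -> positioning structure
T2 = make_transform(np.eye(3), np.array([10.0, 0.0, 0.0]))  # positioning structure -> tracker
out = to_tracker_frame(np.array([[1.0, 2.0, 3.0]]), T1, T2)
```

In the described method, `T1` is fixed (rigid attachment) while `T2` changes per scanning position as the tracker re-measures the positioning structure; the fused cloud is the union of the per-position outputs.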
In another implementation, a fused point cloud of a target site of a patient may be generated with a handheld scanning device, a position pointing device, and an optical tracking apparatus.
Fig. 3 is a schematic view of a first configuration of a position indicating device used in combination with a handheld scanning device. Referring to Fig. 3, the two straight lines from the handheld scanning device 2 represent its scanning area, and the straight line between the optical tracking device 3 and the position indicating device 4 indicates that the optical tracking device 3 can track the position indicating device 4. The operator can scan with the handheld scanning device 2 at different positions of the patient's head while the handheld scanning device 2 simultaneously captures the position indicating device 4. The handheld scanning device 2 obtains a scanning point cloud coordinate system at each scanning position, i.e., as many scanning point cloud coordinate systems as there are positions. Therefore, the spatial information data may further include the spatial position information of the position indicating device 4 in the scanning point cloud coordinate system obtained by the handheld scanning device at each position.
The position indicating device is a device that can be tracked by the optical tracking device; it is a rigid structure with at least three optical identification points, and the three optical identification points have a specific relative positional relationship, forming an easily recognized array.

The position indicating device may be active or passive. An active position indicating device may be a positioning ball capable of emitting signals; a passive position indicating device is a rigid structure with reflective balls or corner points as optical identification points, arranged asymmetrically so that they can be recognized. Illustratively, the position indicating device may be a reflective-ball reference frame, and the optical tracking device may be an infrared tracking device.
Specifically, in the case where the spatial information data further includes the spatial position information of the position indicating device in the scanning point cloud coordinate system obtained by the handheld scanning device at each position, generating the fused point cloud of the target part of the patient according to the spatial information data may include:
for each position at which the handheld scanning device obtains spatial information data, determining a third conversion matrix from the handheld scanning device to the position indicating device according to the spatial position information of the position indicating device in the scanning point cloud coordinate system obtained by the handheld scanning device at that position;

determining a fourth conversion matrix from the position indicating device to the optical tracking device according to the spatial position information of the position indicating device in the spatial coordinate system of the optical tracking device at that position;

converting the scanning point cloud obtained by the handheld scanning device at that position into the spatial coordinate system of the optical tracking device according to the third conversion matrix and the fourth conversion matrix, to obtain the converted scanning point cloud for that position;

fusing the scanning point clouds converted from different positions to generate the fused point cloud of the target part of the patient.
Since the handheld scanning device 2 captures the position indicating device 4 in its scans, the relationship between the coordinate system of the position indicating device and the scanning point cloud coordinate system obtained by the handheld scanning device can be determined. That is, for each position at which the handheld scanning device obtains spatial information data, the third conversion matrix from the handheld scanning device to the position indicating device can be determined according to the spatial position information of the position indicating device in the scanning point cloud coordinate system obtained at that position.
Since the optical tracking device can track the position indicating device, the fourth conversion matrix from the position indicating device to the optical tracking device can be determined according to the spatial position information of the position indicating device in the spatial coordinate system of the optical tracking device. Because the position of the position indicating device in that coordinate system remains fixed across the different scanning positions, the fourth conversion matrix corresponding to each position is the same.
After the third conversion matrix and the fourth conversion matrix are obtained, the scanning point cloud obtained by the handheld scanning device at the position can be converted into the spatial coordinate system of the optical tracking device according to the third conversion matrix and the fourth conversion matrix, and the scanning point cloud converted at the position is obtained.
Converting the scanning point cloud obtained by the handheld scanning device at a position into the spatial coordinate system of the optical tracking device according to the third conversion matrix and the fourth conversion matrix can be performed in a manner similar to the conversion according to the first conversion matrix and the second conversion matrix described above, and is not described again here.
After the scanning point clouds of all the converted positions are obtained, the scanning point clouds of different converted positions can be fused in a space coordinate system of the optical tracking equipment to generate a fused point cloud of the target part of the patient.
Therefore, when the spatial information data further includes the spatial position information of the position indicating device in the scanning point cloud coordinate system obtained by the handheld scanning device at each position, for the scanning point clouds obtained at different positions, the scanning point cloud of each position is first converted into the coordinate system of the position indicating device through the third conversion matrix, and then into the spatial coordinate system of the optical tracking device through the fourth conversion matrix. After these two coordinate-system conversions yield the converted scanning point cloud for each position, the converted scanning point clouds of the different positions are fused in the spatial coordinate system of the optical tracking device to generate the fused point cloud of the target part of the patient.
In the above two coordinate-system conversion processes, only the relative positional relationship between the position indicating device and the handheld scanning device, or between the spatial positioning structure and the handheld scanning device, is used; the position indicating device and the spatial positioning structure serve merely as intermediaries for scanning the target part of the patient to obtain the scanning point clouds. There is no limitation on whether identification points are provided on them, or on the number and positions of such identification points, which simplifies the calculation process and reduces the amount of computation.
In one implementation, the scanning point clouds obtained at different positions have at least an overlapping portion between every two of them, and generating the fused point cloud of the target part of the patient according to the spatial information data may include:

splicing the scanning point clouds having overlapping portions according to those overlapping portions, to generate the fused point cloud of the target part of the patient.

Because the scanning point clouds obtained at different positions have at least an overlapping portion between every two of them (for example, the scanning point clouds obtained at two adjacent positions overlap), the scanning point clouds with overlapping portions can be spliced according to the overlapping portions to generate the fused point cloud of the target part of the patient.
Thus, the scan point clouds having the overlapping portions are spliced based on the overlapping portions, and a fused point cloud of the target part of the patient is generated.
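The splicing step can be sketched numerically. The patent does not specify how duplicate points in the overlap are handled; this minimal sketch assumes the clouds are already aligned in a common frame and collapses near-duplicates by keeping one point per voxel — the function name and voxel size are hypothetical.

```python
import numpy as np

def splice_clouds(clouds, voxel=1.0):
    """Concatenate scan point clouds that are already in a common frame and
    collapse near-duplicate points in overlapping regions by keeping the
    first point falling in each voxel of side `voxel`."""
    merged = np.vstack(clouds)
    keys = np.floor(merged / voxel).astype(np.int64)      # voxel index per point
    _, idx = np.unique(keys, axis=0, return_index=True)   # first point per voxel
    return merged[np.sort(idx)]

# toy usage: the two clouds share one voxel, so one duplicate is dropped
a = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
b = np.array([[0.1, 0.0, 0.0], [10.0, 0.0, 0.0]])
merged = splice_clouds([a, b], voxel=1.0)
```

In practice the overlap would first be used to estimate the alignment between neighboring scans (e.g., with ICP) before merging; only the merge itself is shown here.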
In another implementation, a fused point cloud of a target site of a patient may be generated with a handheld scanning device and a position indicating device.
Fig. 4 is a schematic diagram of a second configuration of the position indicating device used in combination with the handheld scanning device. Referring to Fig. 4, the two straight lines represent the scanning area of the handheld scanning device 2. As can be seen from Fig. 4, the handheld scanning device 2 can capture the position indicating device 4 in its scans, so the spatial information data may further include the spatial position information of the position indicating device 4 in the coordinate system of the handheld scanning device.
In the case that the spatial information data further includes spatial position information of the position indicating device in the coordinate system of the handheld scanning device, the generating of the fused point cloud of the target portion of the patient according to the spatial information data may include:
respectively converting scanning point clouds obtained by the handheld scanning device at different positions, according to the spatial position information of the position indicating device in the coordinate system of the handheld scanning device, into a position-indicating-device coordinate system with the position indicating device as the origin, to generate the fused point cloud of the target part of the patient.
Because the position indicating device 4 is captured each time the handheld scanning device scans, the scanning point cloud collected at each position can be unified into the position-indicating-device coordinate system with the position indicating device as the origin, yielding the fused point cloud of the target part of the patient. That is, the scanning point clouds obtained at different positions are respectively converted, according to the spatial position information of the position indicating device in the coordinate system of the handheld scanning device, into this coordinate system, and the fused point cloud of the target part of the patient is generated.
Therefore, the generation of the fused point cloud of the target part of the patient is realized by unifying the scanning point clouds acquired each time into a position indicating device coordinate system taking the position indicating device as an origin.
In still another implementation, with continued reference to Fig. 4, the position indicating device 4 in Fig. 4 is omitted and the head is scanned only with the handheld scanning device 2. The point cloud of the second scan is matched against the first scan data by performing a matching calculation with the ICP algorithm. If the two overlap, a fused point cloud is formed, the next scan is performed, and its point cloud is in turn fused with the fused point cloud; if the two do not overlap, the next scan is performed and the matching calculation is repeated until an overlapping portion appears.
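The overlap-gated incremental fusion just described can be sketched as follows. This sketch omits the ICP alignment step (scans are assumed already aligned) and shows only the overlap test and the conditional merge; the function names, distance threshold, and overlap ratio are hypothetical.

```python
import numpy as np

def overlap_fraction(new_scan, fused, max_dist=2.0):
    """Fraction of new-scan points lying within max_dist of the fused cloud
    (brute-force nearest-neighbor distances; fine for small clouds)."""
    d2 = ((new_scan[:, None, :] - fused[None, :, :]) ** 2).sum(-1)
    return float((d2.min(axis=1) <= max_dist ** 2).mean())

def try_fuse(new_scan, fused, min_overlap=0.3, max_dist=2.0):
    """Fuse the new scan only if it overlaps the current fused cloud;
    otherwise leave the fused cloud unchanged so the scan can be retried
    after further scans appear."""
    if overlap_fraction(new_scan, fused, max_dist) >= min_overlap:
        return np.vstack([fused, new_scan]), True
    return fused, False

fused = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
near = np.array([[1.5, 0.0, 0.0], [2.0, 0.0, 0.0]])   # overlaps -> fused
far = np.array([[100.0, 0.0, 0.0]])                    # no overlap -> rejected
fused2, ok_near = try_fuse(near, fused)
fused3, ok_far = try_fuse(far, fused)
```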
The fused point cloud and the medical image point cloud are registered to obtain a registration matrix. In addition, by using the handheld structured-light point cloud collector, patterns such as the body-surface projection of the lesion and its path and the planned craniotomy range can be projected onto the target part, realizing a combination of the virtual and the real.
S130: and registering the medical image point cloud and the fused point cloud to obtain a coordinate transformation relation.
After the medical image point cloud and the fusion point cloud are obtained, the medical image point cloud and the fusion point cloud can be registered to obtain a coordinate transformation relation.
Illustratively, registering the medical image point cloud and the fusion point cloud to obtain a coordinate transformation relationship may include:
registering the medical image point cloud and the fused point cloud by adopting the ICP algorithm to obtain the coordinate transformation relationship.
The ICP (Iterative Closest Point) algorithm can merge point cloud data under different coordinates into the same coordinate system, yielding a coordinate transformation relationship between the two coordinate systems. The coordinate transformation relationship may be a coordinate transformation matrix; the matrix obtained in the embodiment of the present invention may be one that transforms the medical image point cloud into the fused point cloud, or one that transforms the fused point cloud into the medical image point cloud.
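A minimal point-to-point ICP can be sketched in a few lines; this is a generic textbook form (brute-force nearest neighbors plus an SVD-based rigid fit), not the navigation system's implementation, and all names are hypothetical.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50):
    """Minimal point-to-point ICP: repeatedly match each source point to its
    nearest target point and solve for the best rigid transform."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]      # brute-force nearest neighbor
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
    R, t = best_rigid_transform(src, cur)     # net transform src -> dst
    return R, t, cur

# toy usage: recover a small known rotation + translation on a curved grid
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
src = np.stack([xs.ravel(), ys.ravel(),
                0.1 * xs.ravel() ** 2 + 0.2 * ys.ravel()], axis=1)
ang = np.deg2rad(2.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.05, 0.2])
dst = src @ R_true.T + t_true
R_est, t_est, aligned = icp(src, dst)
```

Like all plain ICP variants, this converges only from a reasonably close initial pose; production systems typically add downsampling, outlier rejection, and a k-d tree for the nearest-neighbor search.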
In one implementation, the registering the medical image point cloud and the fusion point cloud to obtain a coordinate transformation relationship may include:
registering the medical image point cloud and the fused point cloud in both a point-to-point registration mode and a point-to-surface registration mode to obtain the coordinate transformation relationship.

That is, the two registration modes, point-to-point and point-to-surface, are combined to obtain the coordinate transformation relationship.
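The difference between the two registration modes lies in the residual each one minimizes; a small sketch of the two residual types, assuming correspondences and surface normals are already given (both functions and their inputs are illustrative):

```python
import numpy as np

def point_to_point_residuals(src, dst):
    """Euclidean distance between corresponding points (point-to-point mode)."""
    return np.linalg.norm(src - dst, axis=1)

def point_to_plane_residuals(src, dst, dst_normals):
    """Distance from each source point to the tangent plane at its
    corresponding target point (point-to-surface mode)."""
    return np.abs(np.einsum('ij,ij->i', src - dst, dst_normals))

# a point offset along the normal scores the same under both metrics;
# a point sliding within the tangent plane is penalized only point-to-point
src = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
dst = np.zeros((2, 3))
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
pp = point_to_point_residuals(src, dst)
ppl = point_to_plane_residuals(src, dst, normals)
```

The point-to-plane metric tolerates sliding along the surface, which typically speeds convergence on smooth skin surfaces; combining both modes, as described above, trades off the two behaviors.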
As can be seen from the above, in this embodiment, the medical image data of the target part of the patient can be received, the three-dimensional model of the target part can be established, and the medical image point cloud of the three-dimensional model can be extracted; spatial information data at least containing the target part of the patient is obtained through the handheld scanning device, the fused point cloud of the target part is generated according to the spatial information data, and the medical image point cloud and the fused point cloud are registered to obtain the coordinate transformation relationship. In the embodiment of the present invention, the fused point cloud can be generated from the spatial information data obtained by the handheld scanning device, and registration can then be performed between the medical image point cloud and the fused point cloud, without driving screws into the patient's skull, attaching multiple markers to the face, or selecting markers. The handheld scanning takes only tens of seconds, so the registration speed is greatly improved, operation time is saved, and clinical requirements are met.
Compared with laser point cloud registration, which can only collect point cloud information of the patient's face, the embodiment of the present invention can collect point cloud information of both the face and other parts of the patient's skull for registration, improving registration accuracy. Moreover, the number of points in the point cloud information collected by the handheld scanning device is 10 to 200 times the amount of point data collected by a laser pen in laser point cloud registration, which further improves the registration accuracy.
In addition, different positions in the embodiment of the invention at least comprise the face and the non-face part of the patient, so that the embodiment of the invention adopts the information of the non-face part during the registration, improves the registration precision of the non-face part and meets the clinical requirement.
Fig. 5 is a schematic structural diagram of a spatial registration apparatus according to an embodiment of the present invention, and referring to fig. 5, the spatial registration apparatus according to the embodiment of the present invention includes:
a medical image point cloud extraction module 501, configured to receive medical image data including a target portion of a patient, establish a three-dimensional model of the target portion of the patient, and extract a medical image point cloud of the three-dimensional model;
a fused point cloud generating module 502, configured to obtain spatial information data at least including the target part of the patient through a handheld scanning device, and generate a fused point cloud of the target part of the patient according to the spatial information data;
a registration module 503, configured to register the medical image point cloud and the fusion point cloud to obtain a coordinate transformation relationship.
As can be seen, the spatial registration apparatus provided in this embodiment may receive medical image data of a target part of a patient, establish a three-dimensional model of the target part, extract a medical image point cloud of the three-dimensional model, obtain spatial information data at least containing the target part through a handheld scanning device, generate a fused point cloud of the target part according to the spatial information data, and register the medical image point cloud and the fused point cloud to obtain a coordinate transformation relationship. In the embodiment of the present invention, the fused point cloud can be generated from the spatial information data obtained by the handheld scanning device, and registration can then be performed between the medical image point cloud and the fused point cloud, without driving screws into the patient's skull, attaching multiple markers to the face, or selecting markers.
In one implementation, the spatial information data obtained by the handheld scanning device includes scanning point clouds obtained by the handheld scanning device at different positions.
In one implementation, the handheld scanning device has a spatial positioning structure, and the fused point cloud generating module 502 may be specifically configured to:
convert the spatial information data into the spatial coordinate system of the optical tracking device according to the spatial position information of the spatial positioning structure in that coordinate system, to obtain converted spatial information data, and generate a fused point cloud of the target part of the patient according to the converted spatial information data.
In one implementation, the fused point cloud generating module 502 may be specifically configured to:
convert the scanning point clouds obtained by the handheld scanning device at different positions respectively into the spatial coordinate system of the optical tracking device, according to the spatial position information of the spatial positioning structure in that coordinate system, to obtain a plurality of converted scanning point clouds, and generate a fused point cloud of the target part of the patient according to the plurality of converted scanning point clouds.
In one implementation, the fused point cloud generating module 502 may include:
a first transformation matrix determining unit, configured to determine, for each position at which the handheld scanning device obtains a scanning point cloud, a first transformation matrix from the handheld scanning device to the spatial positioning structure according to the relative positional relationship, at that position, between the spatial positioning structure connected to the handheld scanning device and the handheld scanning device;
a second transformation matrix determining unit, configured to determine a second transformation matrix from the spatial positioning structure to the optical tracking device according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device at that position;
a first conversion unit, configured to convert the scanning point cloud obtained by the handheld scanning device at that position into the spatial coordinate system of the optical tracking device according to the first and second transformation matrices, to obtain a converted scanning point cloud for that position;
a first generation unit, configured to fuse the converted scanning point clouds from the different positions to generate a fused point cloud of the target part of the patient.
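As an illustration (not part of the patent text), the chain described by the first and second transformation matrices can be sketched with 4×4 homogeneous transforms: a point expressed in the scanner frame is first mapped into the frame of the spatial positioning structure, and then into the frame of the optical tracking device. The function names and the column-vector convention below are assumptions made for the sketch.

```python
import numpy as np

def homogeneous(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def scanner_cloud_to_optical(points_scanner, T_scanner_to_locator, T_locator_to_optical):
    """Map an N x 3 scanner-frame point cloud into the optical tracker frame
    by chaining the first (scanner -> positioning structure) and second
    (positioning structure -> optical tracker) transformation matrices."""
    homog = np.hstack([points_scanner, np.ones((len(points_scanner), 1))])  # N x 4
    combined = T_locator_to_optical @ T_scanner_to_locator
    return (combined @ homog.T).T[:, :3]
```

Note the multiplication order: with column vectors, the transform applied first (scanner to positioning structure) sits on the right of the product.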
In one implementation, the spatial information data further includes spatial position information of a position indicating device in the coordinate system of the scanning point cloud obtained by the handheld scanning device at each position, and the fused point cloud generating module 502 may include:
a third transformation matrix determining unit, configured to determine, for each position at which the handheld scanning device obtains spatial information data, a third transformation matrix from the handheld scanning device to the position indicating device according to the spatial position information of the position indicating device in the coordinate system of the scanning point cloud obtained at that position;
a fourth transformation matrix determining unit, configured to determine a fourth transformation matrix from the position indicating device to the optical tracking device according to the spatial position information of the position indicating device in the spatial coordinate system of the optical tracking device at that position;
a second conversion unit, configured to convert the scanning point cloud obtained by the handheld scanning device at that position into the spatial coordinate system of the optical tracking device according to the third and fourth transformation matrices, to obtain a converted scanning point cloud for that position;
a second generation unit, configured to fuse the converted scanning point clouds from the different positions to generate a fused point cloud of the target part of the patient.
In one implementation, the scanning point clouds obtained at different positions overlap at least pairwise, and the fused point cloud generating module 502 may be specifically configured to:
stitch the scanning point clouds that share overlapping portions according to those overlapping portions, to generate a fused point cloud of the target part of the patient.
In one implementation, the spatial information data further includes spatial position information of the position indicating device in the coordinate system of the handheld scanning device, and the fused point cloud generating module 502 may be specifically configured to:
convert the scanning point clouds obtained by the handheld scanning device at different positions respectively into a position-indicating-device coordinate system whose origin is the position indicating device, according to the spatial position information of the position indicating device in the coordinate system of the handheld scanning device, to generate a fused point cloud of the target part of the patient.
In one implementation, the registration module 503 may be specifically configured to:
register the medical image point cloud and the fused point cloud by point-to-point registration and point-to-surface registration, to obtain a coordinate transformation relationship.
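The point-to-point and point-to-surface (point-to-plane) registration modes are named but not formalized in the text. As an illustration only, the two residuals an ICP-style registration would minimize can be written as follows; point-to-plane measures error only along the surface normal, letting points slide tangentially over smooth regions, which is why combining both modes can help.

```python
import numpy as np

def point_to_point_error(moved, targets):
    """Mean Euclidean distance between corresponding points (N x 3 arrays)."""
    return np.linalg.norm(moved - targets, axis=1).mean()

def point_to_plane_error(moved, targets, target_normals):
    """Mean distance measured along the target surface normals, i.e. the
    point-to-plane residual used by many surface registration methods."""
    return np.abs(np.einsum('ij,ij->i', moved - targets, target_normals)).mean()
```

A tangential offset leaves the point-to-plane residual at zero while the point-to-point residual still sees it, which is the practical difference between the two modes.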
Fig. 6 is a schematic structural diagram of a neurosurgical navigation system according to an embodiment of the present invention. Referring to fig. 6, the system includes a host 10 and a handheld scanning device 2, the handheld scanning device 2 being communicatively connected to the host 10.
The handheld scanning device 2 collects spatial information data containing at least a target part of a patient and sends it to the host 10, and the host 10 executes the spatial registration method described in any of the foregoing embodiments.
In the neurosurgical navigation system provided by the embodiment of the present invention, the handheld scanning device 2 collects spatial information data containing at least the target part of the patient and sends it to the host 10. The host 10 receives medical image data of the target part of the patient, establishes a three-dimensional model of the target part, extracts a medical image point cloud from the three-dimensional model, receives the spatial information data, generates a fused point cloud of the target part according to the spatial information data, and registers the medical image point cloud with the fused point cloud to obtain a coordinate transformation relationship. Because registration is performed between the medical image point cloud and the fused point cloud, the system requires neither screws driven into the patient's skull nor multiple markers pasted on the face, and no marker selection, thereby improving registration speed, saving operation time, and meeting clinical requirements.
Because the different scanning positions in the embodiment of the present invention cover at least the face and non-facial parts of the patient, the neurosurgical navigation system uses information from non-facial parts during registration, which improves registration accuracy for non-facial parts and meets clinical requirements.
With reference to fig. 6, the neurosurgical navigation system provided in the embodiment of the present invention further includes an optical tracking device 3 communicatively connected to the host 10. The optical tracking device 3 tracks the spatial position information of the spatial positioning structure or the position indicating device and sends it to the host 10; the host 10 receives this spatial position information and performs spatial registration according to it. For the specific spatial registration process, refer to the corresponding description in the spatial registration method, which is not repeated here.
For example, the handheld scanning device may be a handheld three-dimensional scanner of any suitable kind. It may be a structured-light point cloud collector including a digital projection component and a camera component; it may include a laser generator, emitting line laser or surface laser, together with a camera component; or it may be a binocular camera.
In this way, the optical tracking device 3 assists the host 10 in performing spatial registration.
Those of ordinary skill in the art will understand that the figures are merely schematic diagrams of one embodiment, and that the blocks or flows in the figures are not necessarily required for practicing the present invention.
Those of ordinary skill in the art will also understand that the modules of the apparatuses in the embodiments may be distributed in those apparatuses as described, or may be located, with corresponding changes, in one or more apparatuses different from those of the embodiments. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (12)

1. A spatial registration method, characterized by comprising:
receiving medical image data containing a target part of a patient, establishing a three-dimensional model of the target part of the patient, and extracting a medical image point cloud of the three-dimensional model;
obtaining, through a handheld scanning device, spatial information data containing at least the target part of the patient, and generating a fused point cloud of the target part of the patient according to the spatial information data; and
registering the medical image point cloud and the fused point cloud to obtain a coordinate transformation relationship.

2. The method according to claim 1, wherein the spatial information data obtained by the handheld scanning device comprises scanning point clouds obtained by the handheld scanning device at different positions.

3. The method according to claim 2, wherein the handheld scanning device has a spatial positioning structure, and the step of generating the fused point cloud of the target part of the patient according to the spatial information data comprises:
converting the spatial information data into a spatial coordinate system of an optical tracking device according to spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device, to obtain converted spatial information data, and generating the fused point cloud of the target part of the patient according to the converted spatial information data.

4. The method according to claim 3, wherein the converting and generating step comprises:
converting the scanning point clouds obtained by the handheld scanning device at different positions respectively into the spatial coordinate system of the optical tracking device according to the spatial position information of the spatial positioning structure in that coordinate system, to obtain a plurality of converted scanning point clouds, and generating the fused point cloud of the target part of the patient according to the plurality of converted scanning point clouds.

5. The method according to claim 4, wherein the converting and generating step comprises:
for each position at which the handheld scanning device obtains a scanning point cloud, determining a first transformation matrix from the handheld scanning device to the spatial positioning structure according to the relative positional relationship, at that position, between the spatial positioning structure connected to the handheld scanning device and the handheld scanning device;
determining a second transformation matrix from the spatial positioning structure to the optical tracking device according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device at that position;
converting the scanning point cloud obtained by the handheld scanning device at that position into the spatial coordinate system of the optical tracking device according to the first transformation matrix and the second transformation matrix, to obtain a converted scanning point cloud for that position; and
fusing the converted scanning point clouds from the different positions to generate the fused point cloud of the target part of the patient.

6. The method according to claim 2, wherein the spatial information data further comprises spatial position information of a position indicating device in the coordinate system of the scanning point cloud obtained by the handheld scanning device at each position, and the step of generating the fused point cloud of the target part of the patient according to the spatial information data comprises:
for each position at which the handheld scanning device obtains spatial information data, determining a third transformation matrix from the handheld scanning device to the position indicating device according to the spatial position information of the position indicating device in the coordinate system of the scanning point cloud obtained at that position;
determining a fourth transformation matrix from the position indicating device to the optical tracking device according to the spatial position information of the position indicating device in the spatial coordinate system of the optical tracking device at that position;
converting the scanning point cloud obtained by the handheld scanning device at that position into the spatial coordinate system of the optical tracking device according to the third transformation matrix and the fourth transformation matrix, to obtain a converted scanning point cloud for that position; and
fusing the converted scanning point clouds from the different positions to generate the fused point cloud of the target part of the patient.

7. The method according to claim 2, wherein the scanning point clouds obtained at different positions overlap at least pairwise, and the step of generating the fused point cloud of the target part of the patient according to the spatial information data comprises:
stitching the scanning point clouds that share overlapping portions according to the overlapping portions, to generate the fused point cloud of the target part of the patient.

8. The method according to claim 2, wherein the spatial information data further comprises spatial position information of a position indicating device in the coordinate system of the handheld scanning device, and the step of generating the fused point cloud of the target part of the patient according to the spatial information data comprises:
converting the scanning point clouds obtained by the handheld scanning device at different positions respectively into a position-indicating-device coordinate system whose origin is the position indicating device, according to the spatial position information of the position indicating device in the coordinate system of the handheld scanning device, to generate the fused point cloud of the target part of the patient.

9. The method according to claim 1, wherein the step of registering the medical image point cloud and the fused point cloud to obtain a coordinate transformation relationship comprises:
registering the medical image point cloud and the fused point cloud by point-to-point registration and point-to-surface registration, to obtain the coordinate transformation relationship.

10. A spatial registration apparatus, characterized by comprising:
a medical image point cloud extraction module, configured to receive medical image data containing a target part of a patient, establish a three-dimensional model of the target part of the patient, and extract a medical image point cloud of the three-dimensional model;
a fused point cloud generation module, configured to obtain, through a handheld scanning device, spatial information data containing at least the target part of the patient, and generate a fused point cloud of the target part of the patient according to the spatial information data; and
a registration module, configured to register the medical image point cloud and the fused point cloud to obtain a coordinate transformation relationship.

11. A neurosurgical navigation system, characterized by comprising a host and a handheld scanning device, the handheld scanning device being communicatively connected to the host, wherein:
the handheld scanning device collects spatial information data containing at least a target part of a patient and sends the spatial information data to the host; and
the host executes the spatial registration method according to any one of claims 1 to 9.

12. The system according to claim 11, further comprising an optical tracking device communicatively connected to the host, wherein:
the optical tracking device tracks spatial position information of the spatial positioning structure or the position indicating device and sends the spatial position information to the host.
CN202011642180.5A 2020-12-31 2020-12-31 Spatial registration method and device and neurosurgical navigation system Pending CN114681057A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011642180.5A CN114681057A (en) 2020-12-31 2020-12-31 Spatial registration method and device and neurosurgical navigation system

Publications (1)

Publication Number Publication Date
CN114681057A 2022-07-01

Family

ID=82135968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011642180.5A Pending CN114681057A (en) 2020-12-31 2020-12-31 Spatial registration method and device and neurosurgical navigation system

Country Status (1)

Country Link
CN (1) CN114681057A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116823897A (en) * 2023-06-29 2023-09-29 华科精准(北京)医疗科技有限公司 Mixed reality enhanced display system and method
CN119048688A (en) * 2024-10-28 2024-11-29 上海佰翊医疗科技有限公司 Method, device, medium and equipment for constructing facial osseous structure
CN119810384A (en) * 2025-03-11 2025-04-11 上海空山慈科技有限公司 Head MRI registration method, navigation method, system and program product

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105078573A (en) * 2014-05-11 2015-11-25 复旦大学 Neuronavigation space aligning method based on handheld scanner
US20180228550A1 (en) * 2015-07-29 2018-08-16 Kyle Richard Dupont Handheld scanner for rapid registration in a medical navigation system
CN109549705A (en) * 2019-01-21 2019-04-02 华科精准(北京)医疗科技有限公司 A surgical robot system and method of using the same
CN109692050A (en) * 2018-12-26 2019-04-30 雅客智慧(北京)科技有限公司 A kind of calibration, method for tracing and the device of dentistry plantation navigating surgery



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 102600, 1st floor, Building 6, No. 69 Qingfeng West Road, Daxing Biomedical Industry Base, Zhongguancun Science and Technology Park, Daxing District, Beijing

Applicant after: Huake Precision (Beijing) Medical Equipment Co.,Ltd.

Address before: 102609 Room 401, 4th floor, building 12-1, yard 26, Yongwang West Road, Daxing biomedical industry base, Zhongguancun Science Park, Daxing District, Beijing

Applicant before: SINOVATION (BEIJING) MEDICAL TECHNOLOGY Co.,Ltd.

Country or region before: China