CN114549294B - Surface matching calculation method for positioning surgical instrument target - Google Patents
- Publication number
- CN114549294B (application number CN202210184644.5A)
- Authority
- CN
- China
- Prior art keywords
- point
- dimensional
- target
- surgical instrument
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
  - G06—COMPUTING OR CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T3/00—Geometric image transformations in the plane of the image
        - G06T3/06—Topological mapping of higher dimensional structures onto lower dimensional surfaces
      - G06T7/00—Image analysis
        - G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
        - G06T7/70—Determining position or orientation of objects or cameras
      - G06T2207/00—Indexing scheme for image analysis or image enhancement
        - G06T2207/10—Image acquisition modality
          - G06T2207/10028—Range image; Depth image; 3D point clouds
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a surface matching calculation method for positioning a surgical instrument target. A two-dimensional marker target is mounted on the surgical instrument, and a model description of the target is established so that its features can be retrieved against scene points in real time, which solves the low real-time search efficiency of conventional methods on depth images. Because the three-dimensional data acquisition template and the scenes are not limited to common free forms, the applicable range of the surgical instrument positioning technology is broadened. Pose filtering and the iterative closest point method further improve the pose accuracy of the center point of the two-dimensional marker target. The method keeps magnetic interference substances from degrading the positioning accuracy, markedly improves the tracking capability and precision of the surgical instrument tip, enlarges the operating doctor's field of view, and is suitable for a real-time environment.
Description
Technical Field
The invention relates to the field of communication, in particular to a surface matching calculation method for positioning a surgical instrument target.
Background
Traditional surgical operations, in both preoperative planning and intraoperative implementation, rely on the clinical experience of doctors familiar with the diseased region. However, some regions, such as those affected by brain or spinal diseases, are not directly visible, and the doctor can only operate according to the pathological features of the patient's diseased region; this makes the procedure complicated, the operation traumatic, and the postoperative recovery time long. The key problem is placing the interventional surgical tool precisely at the surgical target site by a puncturing operation, while conventional image-guided operation has the following problems: 1. It requires a great deal of practice; young physicians often need a long training phase before acquiring a mature and stable puncture technique. 2. The accuracy of freehand manipulation depends on the skill and steadiness of the operating physician. 3. Repeated adjustment and image scanning increase the radiation exposure of both doctor and patient as well as operation-related injury. 4. Long operations fatigue the operator, decreasing surgical efficiency and stability.
Attempts have therefore been made to improve on the state of the art. Chinese patent document CN201410608605.9 provides an electromagnetic positioning marking device, an electromagnetic positioning system, and a method. The device body is a sensor positioning and mounting block in which a positioning hole is formed for movably embedding the metal two-dimensional marking target. A six-degree-of-freedom positioning sensor is synthesized from two five-degree-of-freedom positioning sensors and operated under a magnetic field generator; with the magnetic induction coil mounted on the surgical instrument, the position of the instrument can be tracked in real time and the puncture position determined.
However, its disadvantages are a high positioning cost and limited tracking capability and precision, because the usable range of the magnetic field during the operation is small. Moreover, some operations require magnetic devices, and the presence of such magnetic interference substances greatly degrades the positioning accuracy of the surgical instruments.
Ultrasonic positioning is another approach, based on transmitting and receiving ultrasonic signals. Its principle is that the system transmits ultrasonic waves toward a positioning tool equipped with one or more receivers, such as a probe or other receiving system; the time difference between transmission and reception of the ultrasonic waves is then used to determine the three-dimensional spatial position of the surgical instrument. The positioning device adopted in the "frameless three-dimensional positioning system" first reported by Roberts et al. was ultrasonic: a transmitter was mounted on the surgical microscope, and a corresponding receiver was installed near the ceiling of the operating room. Ultrasonic positioning has the advantage of being inexpensive, but it requires an unobstructed path between the generator and the receiver and is susceptible to interference from temperature, humidity changes, and air flow. In addition, the speed of sound is affected by other noise and echoes in the operating room.
Therefore, it is significant to study the technical solution that enables accurate positioning of surgical instruments within a magnetic field environment.
Disclosure of Invention
The invention provides a surface matching calculation method for positioning a surgical instrument target, aimed at solving the problem that existing surgical instrument positioning devices and methods are easily affected by the environment. The method comprises the following steps:
Offline stage:
S1. Install a two-dimensional marker target at a fixed position on the surgical instrument to be used;
S2. Using a three-dimensional data acquisition device, project active light onto the two-dimensional marker target to obtain a three-dimensional point cloud image of it, randomly sample the point cloud image, and select sampling points;
S3. Create a global model description of the two-dimensional marker target;
S4. Create a model comprising the selected sampling points and the global model description, in preparation for pose refinement;
Online stage:
S5. Acquire data of the surgical instrument scene containing the two-dimensional marker target with the three-dimensional data acquisition device, obtaining point cloud data of the scene;
S6. Select reference points in the scene;
S7. Assuming the reference points lie on the two-dimensional marker target, calculate a set of local coordinates of the optimal target position relative to each reference point;
S8. Filter the obtained poses of the two-dimensional marker target to form a final pose;
S9. Perform pose refinement;
S10. Score the final pose and determine the pose with the highest score, achieving precise pose matching of the two-dimensional marker target in the surgical instrument scene.
Preferably, step S2 samples the three-dimensional point cloud image of the two-dimensional marker target using 3D data adjustment to create a sparse set of sampling points. The inputs of the 3D data adjustment are 3D data describing a 2D surface in 3D and a sampling distance d; it outputs a point pair descriptor F describing a pair of 3D points, comprising the distance between the two points, the angle between the two normals, the angle between the first normal and the difference vector of the two points, and the angle between the second normal and the difference vector of the two points. For two 3D points p1 and p2 with normals n1 and n2, the point pair descriptor F is defined as:

F(p1, p2) = (‖p2 − p1‖2, ∠(n1, n2), ∠(n1, p2 − p1), ∠(n2, p2 − p1)).
Preferably, the sampled point pair descriptor F_S is a sampled version of the point pair descriptor F. Let n_a be the number of intervals for the angle values, let d be the sampling distance, let d_a = 360°/n_a, and let ⌊x⌋ denote the largest integer not exceeding x. Then the sampled version F_S of the point pair descriptor F = (F1, F2, F3, F4) is defined as:

F_S(p1, p2) = (⌊F1/d⌋, ⌊F2/d_a⌋, ⌊F3/d_a⌋, ⌊F4/d_a⌋).
Preferably, the step of creating a global model description of the two-dimensional marker target in step S3 includes:
S31, calculating a point pair descriptor about each forming point according to sampling points of a three-dimensional point cloud image of the two-dimensional marker target;
S32, using the calculated point pair descriptors as indexes, storing each pair of sampling points in the global model description.
Preferably, calculating the set of local coordinates of the optimal two-dimensional marker target position relative to each reference point in step S7 comprises the steps of:
S71. The parameter space (here, the space of local coordinates) is divided into a set of samples, each accompanied by a counter that is initially set to zero;
S72. For each point in the surgical instrument scene containing the two-dimensional marker target, all local coordinates that form a description of the point are determined from the two-dimensional marker target model, such that when the target is transformed using those local coordinates, both the current scene point and the current reference point lie on the target surface;
S73. For each local coordinate describing the point, the counter of the parameter-space sample containing that local coordinate is incremented;
S74. After all scene points have been processed by steps S72 and S73, the counter of each sample of the local coordinate space equals the number of scene points that describe that part of the parameter space; the sample with the largest value corresponds to the local coordinates that describe the scene points best.
Preferably, in step S74, the sample with the largest count value or the set of samples whose count exceeds the threshold is selected.
Preferably, in S71 the first component of the local coordinates (i.e., the component describing a position on the model surface) is described by one of the sampling points selected from the two-dimensional marker target data, and is thus implicitly divided into discrete values; the second component (i.e., the component describing the angle of rotation about the normal of the reference point) is divided by splitting the interval of possible rotation angles [0°, 360°) into n_a intervals of equal size. In step S72, the local coordinates of a scene point of the surgical instrument scene containing the two-dimensional marker target are calculated as follows:
S721, calculating and sampling point pair descriptors between the datum point and the current scene point as described above;
S722. the sampled point pair descriptors are used to access the two-dimensional marker target global model description computed in the offline phase, which will return a list of model point pairs with similar distance and orientation as the scene point pairs;
S723. For each such model point pair, the local coordinates are calculated using the equation

s_i = T_S->L^−1 · R_x(α) · T_M->L · m_i.

Let S_r be the reference point of the scene and T_S->L the rigid 3D transformation that translates S_r to the origin and rotates the normal of S_r onto the x-axis (pointing in the positive direction); for the model point M_r, let T_M->L be the rigid 3D transformation that translates M_r to the origin and rotates the normal of M_r onto the x-axis (pointing in the positive direction); R_x(α) is the rigid 3D transformation that rotates about the x-axis by the angle α. Given the local coordinates (M_r, α) relative to the reference point S_r, the mapping from a point m_i in model space to its corresponding point s_i in scene space is given by the above equation; if s_i, m_i, T_S->L and T_M->L are known, the equation can be solved for α. After all scene points have been processed, step S74 selects the samples of the parameter space whose counters have the maximum value.
Preferably, filtering the poses of the two-dimensional marker target in step S8 comprises the following steps:
S81. Define a neighbor relation between poses: two poses are neighbors if the difference of their rotation parts is smaller than a fixed threshold and the difference of their translation vectors is shorter than a fixed threshold length;
S82. Assign a new score to each pose, namely the sum of the scores of all its neighbor poses;
S83. Sort the poses by the new score;
S84. Select the pose with the best score;
S85. Optionally recompute the selected pose by averaging over its neighbor poses.
Preferably, the pose refinement in step S9 is performed using the iterative closest point (ICP) method, which minimizes the sum of the distances between each point of the surgical instrument scene containing the two-dimensional marker target and the surface of the two-dimensional marker target.
Preferably, scoring the final pose in step S10 takes the calculated final pose, the surgical instrument scene data, and the two-dimensional marker target data as inputs, and outputs one or more values describing the quality of the calculated pose, i.e., the consistency between the surgical instrument scene and the two-dimensional marker target at said pose. The quality and accuracy of the final pose depend on the presence and visibility of the two-dimensional marker target in the scene and on the quality of the scene data and the model data.
The invention has the following advantages:
1. By mounting the two-dimensional marker target on the surgical instrument, the positioning accuracy is protected from magnetic interference substances, the tracking capability and precision of the surgical instrument tip are markedly improved, and the field of view of the operating doctor is enlarged.
2. The invention applies a three-dimensional surface template matching technique to the positioning of the two-dimensional marker target. By building a model description of the target, its features can be retrieved against scene points in real time, which solves the low real-time search efficiency of conventional methods on depth images; at the same time, because the three-dimensional data acquisition template and the scenes are not limited to common free forms, the applicable range of the surgical instrument positioning technology is broadened. After the average pose is obtained, pose filtering and the iterative closest point method further improve the pose accuracy of the center point of the two-dimensional marker target.
3. The invention allows the identification of free-form objects with any type of surface geometry and is therefore not limited to a particular class of objects. It is robust to noise, missing object parts, and clutter. The pose of the two-dimensional marker target can be determined with high accuracy; finding the target and recovering its 3D pose require little computation time, making the method suitable for real-time environments.
Drawings
Fig. 1 is a schematic diagram of 3D data adjustment.
Fig. 2 is a schematic diagram of a point-to-point descriptor.
Fig. 3 is a global model description schematic.
FIG. 4 is a schematic diagram of reference points, model points, and local coordinate relationships.
FIG. 5 is a schematic diagram of a two-dimensional labeled target.
Detailed Description
Example 1
The invention discloses a surface matching calculation method for positioning a surgical instrument target, which comprises the following steps:
Offline stage:
S1. Install a two-dimensional marker target at a fixed position on the surgical instrument to be used;
S2. Using a three-dimensional data acquisition device, project active light onto the two-dimensional marker target to obtain a three-dimensional point cloud image of it, randomly sample the point cloud image, and select sampling points;
S3. Create a global model description of the two-dimensional marker target;
S4. Create a model comprising the selected sampling points and the global model description, in preparation for pose refinement;
Online stage:
S5. Acquire data of the surgical instrument scene containing the two-dimensional marker target with the three-dimensional data acquisition device, obtaining point cloud data of the scene;
S6. Select reference points in the scene;
S7. Assuming the reference points lie on the two-dimensional marker target, calculate a set of local coordinates of the optimal target position relative to each reference point;
S8. Filter the obtained poses of the two-dimensional marker target to form a final pose;
S9. Perform pose refinement;
S10. Score the final pose and determine the pose with the highest score, achieving precise pose matching of the two-dimensional marker target in the surgical instrument scene.
More specifically, step S2 samples the three-dimensional point cloud image of the two-dimensional marker target using 3D data adjustment to create a sparse set of sampling points. The inputs of the 3D data adjustment are 3D data describing a 2D surface in 3D and a sampling distance d; it outputs a point pair descriptor F describing a pair of 3D points, comprising the distance between the two points, the angle between the two normals, the angle between the first normal and the difference vector of the two points, and the angle between the second normal and the difference vector of the two points. For two 3D points p1 and p2 with normals n1 and n2, the point pair descriptor F is defined as:

F(p1, p2) = (‖p2 − p1‖2, ∠(n1, n2), ∠(n1, p2 − p1), ∠(n2, p2 − p1)).
More specifically, the sampled point pair descriptor F_S is a sampled version of the point pair descriptor F. Let n_a be the number of intervals for the angle values, let d be the sampling distance, let d_a = 360°/n_a, and let ⌊x⌋ denote the largest integer not exceeding x. Then the sampled version F_S of the point pair descriptor F = (F1, F2, F3, F4) is defined as:

F_S(p1, p2) = (⌊F1/d⌋, ⌊F2/d_a⌋, ⌊F3/d_a⌋, ⌊F4/d_a⌋).
More specifically, the step of creating a global model description of the two-dimensional marker target in step S3 includes:
S31, calculating a point pair descriptor about each forming point according to sampling points of a three-dimensional point cloud image of the two-dimensional marker target;
S32, using the calculated point pair descriptors as indexes, storing each pair of sampling points in the global model description.
More specifically, calculating the set of local coordinates of the optimal two-dimensional marker target position relative to each reference point in step S7 comprises the steps of:
S71. The parameter space (here, the space of local coordinates) is divided into a set of samples, each accompanied by a counter that is initially set to zero;
S72. For each point in the surgical instrument scene containing the two-dimensional marker target, all local coordinates that form a description of the point are determined from the two-dimensional marker target model, such that when the target is transformed using those local coordinates, both the current scene point and the current reference point lie on the target surface;
S73. For each local coordinate describing the point, the counter of the parameter-space sample containing that local coordinate is incremented;
S74. After all scene points have been processed by steps S72 and S73, the counter of each sample of the local coordinate space equals the number of scene points that describe that part of the parameter space; the sample with the largest value corresponds to the local coordinates that describe the scene points best.
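As an illustration only (not part of the claimed method), the voting scheme of steps S71 to S74 can be sketched in Python as follows. The `votes` list of (model point index, α) local coordinates is assumed to come from the descriptor lookup of step S722; each vote increments one parameter-space cell, and the fullest cell wins.

```python
from collections import Counter

def best_local_coordinates(votes, n_a):
    """Accumulate (model point index, alpha) votes into parameter-space
    samples (S71-S73) and return the fullest cell with its count (S74).
    votes: list of (model_index, alpha_in_degrees) local coordinates."""
    bin_size = 360.0 / n_a            # d_a, the angular sampling step
    acc = Counter()                   # one counter per parameter-space sample
    for m_idx, alpha in votes:
        acc[(m_idx, int((alpha % 360.0) // bin_size))] += 1
    sample, count = acc.most_common(1)[0]
    return sample, count
```

Votes whose angles fall into the same interval of size d_a reinforce the same cell, which is what makes the scheme robust to small angular noise.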
More specifically, in step S74, the sample having the largest count value or the set of samples whose count exceeds the threshold is selected.
More specifically, in S71 the first component of the local coordinates (i.e., the component describing a position on the model surface) is described by one of the sampling points selected from the two-dimensional marker target data, and is thus implicitly divided into discrete values; the second component (i.e., the component describing the angle of rotation about the normal of the reference point) is divided by splitting the interval of possible rotation angles [0°, 360°) into n_a intervals of equal size. In step S72, the local coordinates of a scene point of the surgical instrument scene containing the two-dimensional marker target are calculated as follows:
S721, calculating and sampling point pair descriptors between the datum point and the current scene point as described above;
S722. the sampled point pair descriptors are used to access the two-dimensional marker target global model description computed in the offline phase, which will return a list of model point pairs with similar distance and orientation as the scene point pairs;
S723. For each such model point pair, the local coordinates are calculated using the equation

s_i = T_S->L^−1 · R_x(α) · T_M->L · m_i.

Let S_r be the reference point of the scene, and let T_S->L be the rigid 3D transformation that translates S_r to the origin and rotates the normal of S_r onto the x-axis (pointing in the positive direction). For the model point M_r, let T_M->L be the rigid 3D transformation that translates M_r to the origin and rotates the normal of M_r onto the x-axis (pointing in the positive direction). Let R_x(α) be the rigid 3D transformation that rotates about the x-axis by the angle α. Then, given the local coordinates (M_r, α) relative to the reference point S_r, the mapping from a point m_i in model space to its corresponding point s_i in scene space is given by the above equation; if s_i, m_i, T_S->L and T_M->L are known, the equation can be solved for α. After all scene points have been processed, step S74 selects the samples of the parameter space whose counters have the maximum value.
More specifically, filtering the poses of the two-dimensional marker target in step S8 comprises the following steps:
S81. Define a neighbor relation between poses: two poses are neighbors if the difference of their rotation parts is smaller than a fixed threshold and the difference of their translation vectors is shorter than a fixed threshold length;
S82. Assign a new score to each pose, namely the sum of the scores of all its neighbor poses;
S83. Sort the poses by the new score;
S84. Select the pose with the best score;
S85. Optionally recompute the selected pose by averaging over its neighbor poses.
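The pose filtering of steps S81 to S85 can be sketched as follows. This is a simplified illustration: each pose is reduced to a triple (rotation angle, translation, score), whereas a full implementation would compare full rotation matrices when testing the neighbor relation.

```python
import math

def filter_poses(poses, rot_thresh, trans_thresh):
    """Pose filtering per steps S81-S85, on simplified poses
    (rotation_angle, translation_tuple, score). Returns the best
    pose averaged over its neighbours, with its summed score."""
    def neighbours(p):
        # S81: neighbours differ by less than both fixed thresholds
        return [q for q in poses
                if abs(q[0] - p[0]) < rot_thresh
                and math.dist(q[1], p[1]) < trans_thresh]
    # S82-S84: re-score each pose by its neighbours' summed scores, pick the best
    best = max(poses, key=lambda p: sum(q[2] for q in neighbours(p)))
    nb = neighbours(best)
    # S85: optionally recompute the selected pose by averaging the neighbours
    avg_angle = sum(q[0] for q in nb) / len(nb)
    avg_t = tuple(sum(q[1][i] for q in nb) / len(nb) for i in range(3))
    return avg_angle, avg_t, sum(q[2] for q in nb)
```

Because every pose also counts as its own neighbor, an isolated pose keeps its original score while clustered poses reinforce each other.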
More specifically, pose refinement is performed using Iterative Closest Point (ICP) in step S9, where the sum of distances between points in the surgical instrument scene with the two-dimensional marker target and the two-dimensional marker target surface is minimized.
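A minimal, translation-only sketch of the iterative closest point idea follows (illustrative only; the actual refinement also estimates rotation, typically via an SVD step). It only shows the match-and-minimize loop: match each scene point to its nearest model point, then shift by the mean residual.

```python
import math

def icp_translation(scene, model, iters=20):
    """Toy ICP: alternately match each scene point to its nearest model
    point and shift the scene by the mean residual, which minimizes the
    summed squared point distances for a pure translation."""
    t = (0.0, 0.0, 0.0)
    for _ in range(iters):
        moved = [tuple(p[i] + t[i] for i in range(3)) for p in scene]
        # nearest-neighbour correspondences for the current alignment
        pairs = [(p, min(model, key=lambda m: math.dist(p, m))) for p in moved]
        delta = tuple(sum(m[i] - p[i] for p, m in pairs) / len(pairs)
                      for i in range(3))
        t = tuple(t[i] + delta[i] for i in range(3))
        if math.dist(delta, (0.0, 0.0, 0.0)) < 1e-12:
            break
    return t
```

Like full ICP, the sketch converges only when the initial pose is already close, which is why it is used here for refinement after the voting and filtering stages.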
More specifically, scoring the final pose in step S10 takes the calculated final pose, the surgical instrument scene data, and the two-dimensional marker target data as inputs, and outputs one or more values describing the quality of the calculated pose, i.e., the consistency between the surgical instrument scene and the two-dimensional marker target at the pose. The quality and accuracy of the final pose depend on the presence and visibility of the two-dimensional marker target in the scene and on the quality of the scene data and the model data.
To facilitate an understanding of this document, some terms are defined and further described herein:
point-to-point descriptor
A 3D point is a point in 3D space with three coordinate values. Each 3D point refers to a coordinate system; the most important ones are the scene coordinate system in which the 3D scene data is defined, and the object coordinate system in which the 3D object of interest is defined. A 3D vector is a vector in 3D space with three coordinate values. The 3D normal vector at a point on a surface is a 3D vector of Euclidean length 1 that is perpendicular to the surface at that point. A 3D point cloud is a collection of 3D points. Every 3D rigid transformation T can be decomposed into a 3D rotation R and a 3D translation vector t such that T(x) = Rx + t, where the argument point is first rotated and the translation is then applied to the result. 3D data adjustment is a method of transforming a surface in 3D into a set of 3D points evenly distributed over that surface, as shown in Fig. 1. In an embodiment, 3D data adjustment takes as input (a) 3D data (101) describing a 2D surface in 3D and (b) a sampling distance d, and outputs a set of 3D points (102) evenly distributed over the surface. A point pair descriptor is a list of values describing a pair of 3D points. In an embodiment, the values include the distance between the two points, the angle between the two normals, the angle between the first normal and the difference vector of the two points, and the angle between the second normal and the difference vector of the two points.
In an embodiment, the point pair descriptor F of two 3D points p1 and p2 with normals n1 and n2, respectively, is defined as shown in Fig. 2:

F(p1, p2) = (‖p2 − p1‖2, ∠(n1, n2), ∠(n1, p2 − p1), ∠(n2, p2 − p1)) (1).
The sampled point pair descriptor is a sampled version of the point pair descriptor. In an embodiment, the four entries of the point pair descriptor are sampled in intervals of equal size to produce the sampled point pair descriptor, whose form is defined as follows: let n_a be the number of intervals for the angle values and d_a = 360°/n_a, let d be the distance sampling factor, and let ⌊x⌋ denote the largest integer not exceeding x; then the sampled version F_S of the point pair descriptor F = (F1, F2, F3, F4) is defined as:

F_S(p1, p2) = (⌊F1/d⌋, ⌊F2/d_a⌋, ⌊F3/d_a⌋, ⌊F4/d_a⌋) (2).
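The descriptors of equations (1) and (2) can be sketched as follows (an illustrative Python sketch, not the patented implementation; vectors are plain tuples and angles are measured in degrees):

```python
import math

def angle(u, v):
    """Angle in degrees between two 3D vectors."""
    d = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, d / (nu * nv)))))

def ppf(p1, n1, p2, n2):
    """Point pair descriptor F of equation (1):
    (distance, angle(n1, n2), angle(n1, d), angle(n2, d)) with d = p2 - p1."""
    d = tuple(b - a for a, b in zip(p1, p2))
    return (math.sqrt(sum(x * x for x in d)),
            angle(n1, n2), angle(n1, d), angle(n2, d))

def sampled_ppf(F, dist_step, n_a):
    """Sampled descriptor F_S of equation (2): quantize the distance by
    the sampling factor and the three angles by d_a = 360 / n_a."""
    d_a = 360.0 / n_a
    return (int(F[0] // dist_step), int(F[1] // d_a),
            int(F[2] // d_a), int(F[3] // d_a))
```

For two points one unit apart on a plane with a shared upward normal, the descriptor is (1, 0°, 90°, 90°); with a distance step of 0.5 and n_a = 30 it quantizes to the tuple (2, 0, 7, 7).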
global model description
A global model description is a data structure that allows an efficient search for all point pairs on the object that are similar to a given point pair from the scene. It is thus a data structure or method that takes a point pair from the scene as input and outputs a list of similar point pairs on the object. In an embodiment, a mapping from sampled point pair descriptors to sets of point pairs is used as the global model description. The lookup is done by computing the sampled point pair descriptor of the given point pair and using a hash map to obtain all point pairs with an equal sampled descriptor. The hash table allows efficient access to similar point pairs, with a lookup time independent of the number of point pairs stored in the model description. Fig. 3 outlines the global model description: a point pair (302) is selected from the surface (301) and its point pair descriptor (303) is computed. The global model description (304) is indexed by the point pair descriptor (305) and returns the set of point pairs (308) on the surface of the 3D object (307) that have similar characteristics to the point pair (302).
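A minimal sketch of such a hash-map-based global model description follows (illustrative Python; the descriptor helpers are repeated here so the sketch is self-contained, and the component order follows equation (1)):

```python
import math
from collections import defaultdict

def ppf(p1, n1, p2, n2):
    """Descriptor of equation (1): (dist, ang(n1,n2), ang(n1,d), ang(n2,d))."""
    def ang(u, v):
        d = sum(a * b for a, b in zip(u, v))
        nn = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return math.degrees(math.acos(max(-1.0, min(1.0, d / nn))))
    d = tuple(b - a for a, b in zip(p1, p2))
    return (math.sqrt(sum(x * x for x in d)), ang(n1, n2), ang(n1, d), ang(n2, d))

def key(F, dist_step, n_a):
    """Sampled descriptor of equation (2), used as the hash key."""
    da = 360.0 / n_a
    return (int(F[0] // dist_step), int(F[1] // da),
            int(F[2] // da), int(F[3] // da))

def build_global_model(points, normals, dist_step, n_a):
    """Hash map from sampled descriptor to all model point pairs producing
    it; lookup cost is independent of the number of stored pairs."""
    table = defaultdict(list)
    for i in range(len(points)):
        for j in range(len(points)):
            if i != j:
                F = ppf(points[i], normals[i], points[j], normals[j])
                table[key(F, dist_step, n_a)].append((i, j))
    return table
```

A scene point pair is then matched by computing its own sampled descriptor and reading `table[key(...)]`, which returns every model pair with similar distance and orientation in constant time.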
The local pose of an object in a scene is defined as the 3D pose of the object in the scene relative to a given scene point (called a fiducial point), where the given fiducial point is assumed to be located on the surface of the object.
In an embodiment, the local pose is parameterized by local coordinates as follows: let S_r be a reference point in the scene that is assumed to lie on the object surface; then (a) M_r is the point on the model surface corresponding to S_r, and (b) α is the angle of rotation about the normal of S_r after S_r, M_r and their normals have been aligned (Fig. 4). The local coordinates with respect to S_r are written (M_r, α) and have three degrees of freedom in total: two for the position of M_r on the model surface and one for the rotation angle α.
For the reference point S_r of the scene, let T_S->L be the rigid 3D transformation that translates S_r to the origin and rotates the normal of S_r onto the x-axis (pointing in the positive direction). For the model point M_r, let T_M->L be the rigid 3D transformation that translates M_r to the origin and rotates the normal of M_r onto the x-axis (pointing in the positive direction). Let R_x(α) be the rigid 3D transformation that rotates about the x-axis by the angle α. Then, given the local coordinates (M_r, α) with respect to the reference point S_r, the mapping from a point m_i in model space to its corresponding point s_i in scene space can be written as:

s_i = T_S->L^−1 · R_x(α) · T_M->L · m_i (3).

If s_i, m_i, T_S->L and T_M->L are known, the above equation can be solved for α.
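Equation (3) can be solved for α as in the following sketch (illustrative Python; building the normal-alignment rotations of T_S->L and T_M->L via the Rodrigues formula is an implementation choice, not something the text above specifies):

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0])

def rot_to_x(n):
    """Rotation matrix taking the unit normal n onto the positive x-axis,
    built with the Rodrigues formula about the axis n x e_x."""
    c = max(-1.0, min(1.0, n[0]))          # cos(theta) = n . e_x
    ax = cross(n, (1.0, 0.0, 0.0))
    s = math.sqrt(dot(ax, ax))             # sin(theta) = |n x e_x|
    if s < 1e-12:                          # n already (anti)parallel to x
        return [[c, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, c]]
    kx, ky, kz = (a / s for a in ax)
    K = [[0.0, -kz, ky], [kz, 0.0, -kx], [-ky, kx, 0.0]]
    # R = I + sin(theta) K + (1 - cos(theta)) K^2
    return [[(1.0 if i == j else 0.0) + s * K[i][j]
             + (1.0 - c) * sum(K[i][m] * K[m][j] for m in range(3))
             for j in range(3)] for i in range(3)]

def apply(R, p):
    return tuple(sum(R[i][j] * p[j] for j in range(3)) for i in range(3))

def solve_alpha(s_r, n_s, s_i, m_r, n_m, m_i):
    """Solve equation (3) for alpha: bring both points into the local
    frames T_S->L and T_M->L, then compare their angles about the x-axis."""
    sL = apply(rot_to_x(n_s), sub(s_i, s_r))   # T_S->L applied to s_i
    mL = apply(rot_to_x(n_m), sub(m_i, m_r))   # T_M->L applied to m_i
    return math.atan2(sL[2], sL[1]) - math.atan2(mL[2], mL[1])
```

In both local frames the normal lies on the x-axis, so the only remaining freedom is a rotation about x, and α drops out as a difference of two atan2 angles in the y–z plane.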
Model creation of two-dimensional marker targets
The two-dimensional marker target of this embodiment is one that the doctor mounts on a surgical instrument; it is 45 mm long, 45 mm wide and 1 cm thick, as shown in Fig. 5. A structured-light depth camera is selected as the three-dimensional data acquisition device, and active light is projected onto the two-dimensional marker target to obtain its point cloud image, i.e., the template image. The surgeon selects the surgical instrument to be used for the operation and mounts the two-dimensional marker target at a fixed position on the instrument.
The offline stage is carried out first: a model describing the two-dimensional marker target is constructed in a form suited to the subsequent identification of the target in the scene of a surgical instrument carrying it. The method for creating the two-dimensional marker target model comprises the steps of: (a) selecting a structured-light depth camera as the three-dimensional data acquisition device and projecting active light onto the two-dimensional marker target to obtain its point cloud image; (b) randomly sampling the point cloud image and selecting sampling points; (c) creating a global model description of the two-dimensional marker target; (d) preparing the model for pose refinement. In an embodiment, the created model comprises the selected sampling points and the global model description.
In an embodiment, the three-dimensional point cloud of the two-dimensional marker target is sub-sampled using the 3D data adjustment method described above to create a sparse set of sampling points. In an embodiment, creating the global model description of the two-dimensional marker target comprises: (1) calculating a point pair descriptor for each pair of sampling points of the three-dimensional point cloud of the two-dimensional marker target; (2) storing each pair of sampling points in the global model description, using the calculated point pair descriptor as the index.
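The global model description above can be sketched as a hash table keyed by quantized point pair descriptors (an illustrative Python sketch; the parameter values and function names are assumptions, not taken from the disclosure):

```python
import itertools
from collections import defaultdict
import numpy as np

def angle(u, v):
    """Angle between two vectors, clipped for numerical safety."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

def ppf(p1, n1, p2, n2):
    """Point pair descriptor F: distance plus three angles."""
    d = p2 - p1
    return (np.linalg.norm(d), angle(n1, n2), angle(n1, d), angle(n2, d))

def quantize(F, d_dist, n_a):
    """Sampled descriptor F_s used as the hash-table key (angles in radians,
    with d_a = 2*pi/n_a corresponding to the 360°/n_a step in the text)."""
    d_a = 2 * np.pi / n_a
    f1, f2, f3, f4 = F
    return (int(f1 // d_dist), int(f2 // d_a), int(f3 // d_a), int(f4 // d_a))

def build_model_description(points, normals, d_dist=0.05, n_a=30):
    """Hash table mapping quantized descriptors to lists of point-pair indices."""
    table = defaultdict(list)
    for i, j in itertools.permutations(range(len(points)), 2):
        key = quantize(ppf(points[i], normals[i], points[j], normals[j]), d_dist, n_a)
        table[key].append((i, j))
    return table
```

At matching time a scene point pair is quantized with the same function and used to look up all model pairs with a similar descriptor in constant time.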
The pose refinement method used in the matching process may rely on data pre-computed from the 3D object; this data can also be computed in the off-line phase and stored with the model. In an embodiment, a data structure is computed that allows a fast search for the point on the object closest to a given query point. This data structure is subsequently used for pose refinement by the iterative closest point (ICP) method.
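Such a closest-point data structure can be sketched with a k-d tree (an illustrative Python example using SciPy's `cKDTree`; the point data shown are synthetic):

```python
import numpy as np
from scipy.spatial import cKDTree

# Offline: precompute a k-d tree over the model points so that, during ICP
# refinement, the model point closest to any query point is found in
# logarithmic time instead of by a linear scan (synthetic data for illustration).
model_points = np.random.default_rng(0).random((500, 3))
tree = cKDTree(model_points)

# Online: query the closest model point to an arbitrary search point.
dist, idx = tree.query([0.5, 0.5, 0.5])
```

The tree is built once in the off-line phase and stored with the model, as described above.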
Identification and pose determination of the two-dimensional marker target in the surgical instrument scene
In the on-line phase, instances of the two-dimensional marker target are identified in a surgical instrument scene containing the target, and the 3D pose of the target in the scene is calculated. The on-line phase takes as input the surgical instrument scene and the two-dimensional marker target model computed in the off-line phase, and outputs a set of 3D poses of the two-dimensional marker target in the scene, optionally together with a set of scores ranking those poses. The on-line phase comprises the following steps: (a) acquiring data of the surgical instrument scene containing the two-dimensional marker target with the three-dimensional data acquisition device to obtain point cloud data of the scene; (b) selecting fiducial points in the scene; (c) for each fiducial point, calculating a set of local coordinates of the most likely two-dimensional marker target pose, under the assumption that the fiducial point lies on the two-dimensional marker target; (d) filtering the obtained poses of the two-dimensional marker target to form the final pose; (e) performing pose refinement; (f) scoring the final pose.
A plurality of fiducial points is selected from the surgical instrument scene data and used in the subsequent steps. For the method to work, at least one fiducial point must lie on the surface of the two-dimensional marker target, since the subsequent steps find the object pose only if at least one of the fiducial points satisfies this condition. In an embodiment, the fiducial points are selected as a random subset of points from the point cloud of the surgical instrument scene containing the two-dimensional marker target, where the number of points in the subset is parameterized relative to the size of the scene point cloud.
For each fiducial point selected in the previous step, a set of local coordinates corresponding to the 3D pose that the two-dimensional marker target most likely has is calculated, under the assumption that the fiducial point lies on the surface of the target. In an embodiment, a voting scheme similar to the generalized Hough transform is employed, which computes the local coordinates that best explain the observed data.
In an embodiment, the voting scheme used to compute the local coordinates of the two-dimensional marker target comprises the following steps: (c1) the parameter space (here, the space of local coordinates) is divided into a set of samples, each sample is given a counter, and the counters are initialized to zero; (c2) for each point in the surgical instrument scene containing the two-dimensional marker target, all local coordinates that explain the point through the two-dimensional marker target model are determined, meaning that when the target is transformed using these local coordinates, both the current scene point and the current fiducial point lie on its surface; (c3) for each local coordinate that explains the point, the counter of the parameter space sample containing that local coordinate is incremented; (c4) after all scene points have been processed by steps (c2) and (c3), the counter of each sample of the local coordinate space corresponds to the number of scene points explained by that part of the parameter space. The sample whose counter has the largest value corresponds to the local coordinates that best explain the scene points. In a final step, the sample with the largest counter value, or the set of samples whose counters exceed a threshold, is selected.
In step (c1), the first component of the local coordinates (the component describing the position on the model surface) is represented by one of the sampling points selected from the two-dimensional marker target data and is thus implicitly divided into discrete values. The second component (the component describing the angle of rotation about the normal of the reference point) is divided by partitioning the interval of possible rotation angles [0°, 360°) into n_a intervals of equal size, similar to the sampling of the angle values of the point pair descriptors described above.
In step (c2), the local coordinates that explain the current scene point of the surgical instrument scene containing the two-dimensional marker target are calculated as follows: (c2.1) the sampled point pair descriptor between the fiducial point and the current scene point is calculated as described above; (c2.2) the sampled point pair descriptor is used as an index into the global model description of the two-dimensional marker target computed in the off-line phase, which returns a list of model point pairs with a distance and orientation similar to the scene point pair; (c2.3) for each such model point pair, the local coordinates are calculated from the scene point pair and the model point pair using equation (3). After all scene points have been processed, step (c4) selects the samples of the parameter space whose counters have maximum values. In an embodiment, only the sample with the largest counter (i.e., the global maximum) is selected.
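The voting procedure of steps (c1)-(c4) can be sketched as follows (an illustrative Python sketch; the accumulator layout, the parameter names, and the assumption that the angle α is precomputed for each model pair are simplifications, not the disclosed implementation):

```python
import numpy as np

def quantize_ppf(p1, n1, p2, n2, d_dist, n_a):
    """Sampled point pair descriptor used as the lookup key (cf. steps c2.1-c2.2)."""
    d = p2 - p1
    ang = lambda u, v: np.arccos(np.clip(np.dot(u / np.linalg.norm(u),
                                                v / np.linalg.norm(v)), -1.0, 1.0))
    F = (np.linalg.norm(d), ang(n1, n2), ang(n1, d), ang(n2, d))
    d_a = 2 * np.pi / n_a
    return (int(F[0] // d_dist), int(F[1] // d_a), int(F[2] // d_a), int(F[3] // d_a))

def vote(scene_points, scene_normals, ref_idx, model_table, n_model, n_a, d_dist):
    """Voting scheme of steps (c1)-(c4): an accumulator over local coordinates
    (model reference point index, rotation-angle bin). model_table maps a
    quantized descriptor to (model point index, alpha) entries, with alpha
    assumed to be obtained via equation (3)."""
    acc = np.zeros((n_model, n_a), dtype=int)           # (c1) counters set to zero
    s_r, n_r = scene_points[ref_idx], scene_normals[ref_idx]
    for i, (s_i, n_i) in enumerate(zip(scene_points, scene_normals)):
        if i == ref_idx:
            continue
        key = quantize_ppf(s_r, n_r, s_i, n_i, d_dist, n_a)
        for m_r, alpha in model_table.get(key, []):     # (c2) matching model pairs
            a_bin = int((alpha % (2 * np.pi)) / (2 * np.pi) * n_a) % n_a
            acc[m_r, a_bin] += 1                        # (c3) increment counter
    return np.unravel_index(np.argmax(acc), acc.shape)  # (c4) best-explaining sample
```

The returned pair (model point index, angle bin) is the local coordinate sample with the maximum count for the chosen fiducial point.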
One local coordinate is taken from each selected sample, and each local coordinate is transformed into a full 3D pose together with the counter value of the corresponding local coordinate sample; this counter value serves as the score of the 3D pose. Pose filtering is a method that takes the candidate poses (optionally augmented with score values) from one or more fiducial points as input and outputs a set of filtered poses, ordered by the likelihood of being correct, that contains only the most likely poses of the two-dimensional marker target.
Pose filtering serves the following purposes. (1) Outlier removal: the candidate poses of a fiducial point are calculated under the assumption that the fiducial point lies on the surface of the two-dimensional marker target. If this assumption is incorrect, for example for clutter points in the scene that do not belong to the target, or if the normal of the fiducial point is incorrect, the resulting candidate poses for that fiducial point will contain incorrect poses that do not correspond to the correct pose of the object. Pose filtering should remove such incorrect poses.
(2) Increased accuracy and stability: if several fiducial points lie on the surface of the object, the candidate poses of each of them will contain poses corresponding to the correct pose of the object. However, these poses will differ slightly from the correct pose because of numerical errors in the computation, noise in the data, and the sampling step sizes involved in the mechanism described above. Pose filtering groups all correct poses found for the different fiducial points and calculates an average pose, thereby increasing the accuracy and stability of the final result.
In an embodiment, pose filtering of the two-dimensional marker target comprises the following steps: (d1) defining a neighbor relation between poses: two poses are neighbors if the difference between their rotation parts is less than a fixed threshold and the difference between their translation vectors is less than a fixed length threshold; (d2) assigning a new score to each pose, equal to the sum of the scores of all its neighbor poses; (d3) sorting the poses by the new scores; (d4) selecting the pose with the best score; (d5) optionally recalculating the selected pose by averaging over its neighbor poses.
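Steps (d1)-(d5) can be sketched as follows (an illustrative Python sketch; for simplicity only the translation is averaged in step (d5), whereas a full implementation would also average the rotations, e.g. via quaternions):

```python
import numpy as np

def filter_poses(poses, scores, rot_thresh, trans_thresh):
    """Pose filtering per steps (d1)-(d5): poses are (R, t) pairs, scores their
    vote counts. Re-scores each pose by the summed scores of its neighbours and
    returns the best pose with its translation averaged over those neighbours."""
    def neighbours(i):
        Ri, ti = poses[i]
        out = []
        for j, (Rj, tj) in enumerate(poses):
            # (d1) rotation-angle difference via the trace formula, plus
            # translation difference against a fixed length threshold.
            rot_diff = np.arccos(np.clip((np.trace(Ri.T @ Rj) - 1) / 2, -1.0, 1.0))
            if rot_diff < rot_thresh and np.linalg.norm(ti - tj) < trans_thresh:
                out.append(j)
        return out
    nbrs = [neighbours(i) for i in range(len(poses))]           # (d1)
    new_scores = [sum(scores[j] for j in ns) for ns in nbrs]    # (d2)
    best = int(np.argmax(new_scores))                           # (d3)-(d4)
    avg_t = np.mean([poses[j][1] for j in nbrs[best]], axis=0)  # (d5) translation only
    return (poses[best][0], avg_t), new_scores[best]
```

Because each pose inherits the summed score of its neighbourhood, isolated outlier poses are naturally ranked below clusters of mutually consistent poses.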
Pose refinement describes a class of methods that take as input the two-dimensional marker target model, the surgical instrument scene containing the target, and an approximate pose of the model in the scene, and output a refined, more accurate pose of the model. Pose refinement methods generally optimize the correspondence between the scene and the object by minimizing an error function.
In an embodiment, pose refinement is performed using the iterative closest point (ICP) method, which minimizes the sum of the distances between points in the surgical instrument scene containing the two-dimensional marker target and the surface of the target.
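A point-to-point ICP refinement of the kind described can be sketched as follows (an illustrative Python sketch using a k-d tree for correspondences and the Kabsch/SVD solution for the rigid update; this is a generic ICP, not the disclosed implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(scene_pts, model_pts, R, t, iters=20):
    """Point-to-point ICP: starting from the approximate pose (R, t), minimise
    the sum of squared distances between the transformed model points and their
    nearest scene points, re-solving the rigid transform by SVD each iteration."""
    tree = cKDTree(scene_pts)
    for _ in range(iters):
        moved = model_pts @ R.T + t          # model under the current pose
        _, idx = tree.query(moved)           # closest scene point per model point
        target = scene_pts[idx]
        mu_m, mu_s = model_pts.mean(0), target.mean(0)
        H = (model_pts - mu_m).T @ (target - mu_s)
        U, _, Vt = np.linalg.svd(H)          # Kabsch solution of the rigid fit
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                   # reflection-safe rotation
        t = mu_s - R @ mu_m
    return R, t
```

Starting from the approximate pose produced by voting and filtering, a few such iterations typically suffice to refine the pose.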
Scoring is a method that takes as input the final pose calculated by the algorithm, the data of the surgical instrument scene containing the two-dimensional marker target, and the two-dimensional marker target data, and outputs one or more values describing the quality of the calculated pose, or the consistency between the surgical instrument scene and the two-dimensional marker target at that pose. The quality and accuracy of the final pose depend, among other things, on the presence and visibility of the two-dimensional marker target in the scene and on the quality of the scene data and the model data.
Scoring provides a way to evaluate the resulting poses: the number of scene points lying on the model surface is calculated to determine the highest-scoring pose, thereby achieving accurate pose matching of the two-dimensional marker target in the surgical instrument scene.
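The scoring described can be sketched as counting the scene points lying within a tolerance of the transformed model surface (an illustrative Python sketch; the tolerance `eps` is an assumed parameter):

```python
import numpy as np
from scipy.spatial import cKDTree

def score_pose(scene_pts, model_pts, R, t, eps):
    """Score a final pose as the number of scene points within eps of the
    transformed model surface (approximated here by the model points)."""
    tree = cKDTree(model_pts @ R.T + t)      # model under the candidate pose
    dists, _ = tree.query(scene_pts)
    return int((dists < eps).sum())
```

Candidate poses can then be ranked by this count, with clutter points far from the model contributing nothing to the score.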
The invention has the following advantages:
1. By installing the two-dimensional marker target on the surgical instrument, the loss of positioning accuracy caused by magnetically interfering materials is avoided, the tracking capability and precision of the surgical instrument tip are significantly improved, and the surgeon's field of view during the operation is enlarged.
2. The invention adopts a three-dimensional surface template matching technique for locating the two-dimensional marker target. By establishing a model description of the two-dimensional marker target, real-time retrieval of features against scene points is achieved, which solves the low real-time search efficiency of conventional methods on depth images. At the same time, because the three-dimensional data acquisition templates and the variety of scenes are not limited to common free forms, the range of application of the surgical instrument positioning technique is broadened. After the average pose is obtained, pose filtering and the iterative closest point method further improve the pose accuracy of the center point of the two-dimensional marker target.
3. The invention allows the identification of free-form objects with any type of surface geometry and is therefore not limited to a particular type of object. It is robust to noise, missing object parts, and clutter. The pose of the two-dimensional marker target can be determined with high accuracy; finding the target and recovering its 3D pose require little computation time, making the method suitable for real-time environments.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.
Claims (10)
1. A surface matching calculation method for positioning a surgical instrument target, characterized by comprising the following steps:
Off-line stage:
S1. installing a two-dimensional marker target at a fixed position on the surgical instrument to be used;
S2. using a three-dimensional data acquisition device to project active light onto the two-dimensional marker target to obtain a three-dimensional point cloud image of the target, randomly sampling the three-dimensional point cloud image, and selecting sampling points;
S3. creating a global model description of the two-dimensional marker target;
S4. creating a model comprising the selected sampling points and the global model description, in preparation for pose refinement;
On-line stage:
S5. acquiring data of a surgical instrument scene containing the two-dimensional marker target with the three-dimensional data acquisition device to obtain point cloud data of the scene;
S6. selecting fiducial points in the scene;
S7. assuming that the fiducial points lie on the two-dimensional marker target, calculating for each fiducial point a set of local coordinates of the most likely two-dimensional marker target pose;
S8. filtering the obtained poses of the two-dimensional marker target to form the final pose;
S9. performing pose refinement;
S10. scoring the final pose and determining the pose with the highest score, thereby achieving accurate pose matching of the two-dimensional marker target in the surgical instrument scene.
2. The surface matching calculation method for surgical instrument target localization of claim 1, wherein: in step S2, the three-dimensional point cloud image of the two-dimensional marker target is sampled using 3D data adjustment to create a sparse sampling point set; the inputs of the 3D data adjustment are 3D data describing a 2D surface in 3D and a sampling distance d, and the output is a point pair descriptor F describing a pair of 3D points, the descriptor F comprising the distance between the two points, the angle between the two normals, the angle between the first normal and the difference vector of the two points, and the angle between the second normal and the difference vector of the two points; for two 3D points P_1 and P_2 with normals n_1 and n_2, the point pair descriptor F is defined as:
F(P_1, P_2, n_1, n_2) = (|P_2 - P_1|, ∠(n_1, n_2), ∠(n_1, P_2 - P_1), ∠(n_2, P_2 - P_1)).
3. The surface matching calculation method for surgical instrument target localization of claim 2, wherein: the sampled point pair descriptor F_s is a sampled version of the point pair descriptor F; let n_a be the number of intervals for the angle values, d the sampling distance, d_a = 360°/n_a, and let [x] denote the largest integer not greater than x; then the sampled version F_s(P_1, P_2, n_1, n_2) of the point pair descriptor F(P_1, P_2, n_1, n_2) = (F_1, F_2, F_3, F_4) is defined as:
F_s(P_1, P_2, n_1, n_2) = ([F_1/d], [F_2/d_a], [F_3/d_a], [F_4/d_a]).
4. A surface matching calculation method for surgical instrument target localization according to claim 3, characterized in that: creating the global model description of the two-dimensional marker target in step S3 comprises:
S31. calculating a point pair descriptor for each pair of sampling points of the three-dimensional point cloud image of the two-dimensional marker target;
S32. storing each pair of sampling points in the global model description, using the calculated point pair descriptors as indexes.
5. The surface matching calculation method for surgical instrument target localization of claim 4, wherein: calculating the set of local coordinates of the most likely two-dimensional marker target pose for each fiducial point in step S7 comprises the following steps:
S71. the parameter space (here, the space of local coordinates) is divided into a set of samples, each sample is given a counter, and the counters are initialized to zero;
S72. for each point in the surgical instrument scene containing the two-dimensional marker target, determining through the two-dimensional marker target model all local coordinates that explain the point, meaning that when the two-dimensional marker target is transformed using these local coordinates, both the current scene point and the current fiducial point lie on the surface of the target;
S73. for each local coordinate that explains the point, incrementing the counter of the parameter space sample containing that local coordinate;
S74. after all scene points have been processed by steps S72 and S73, the counter of each sample of the local coordinate space corresponds to the number of scene points explained by that part of the parameter space, and the sample with the largest value corresponds to the local coordinates that explain the scene points best.
6. The surface matching calculation method for surgical instrument target localization of claim 5, wherein: in step S74, the sample with the largest counter value, or the set of samples whose counters exceed a threshold, is selected.
7. The surface matching calculation method for surgical instrument target localization of claim 6, wherein: in step S71, the first component of the local coordinates, i.e. the component describing the position on the model surface, is represented by one of the sampling points selected from the two-dimensional marker target data and is thus implicitly divided into discrete values; the second component, i.e. the component describing the angle of rotation about the normal of the reference point, is divided by partitioning the interval of possible rotation angles [0°, 360°) into n_a intervals of equal size; in step S72, the local coordinates of a scene point of the surgical instrument scene containing the two-dimensional marker target are calculated as follows:
S721. calculating the sampled point pair descriptor between the fiducial point and the current scene point, as described above;
S722. using the sampled point pair descriptor as an index into the global model description of the two-dimensional marker target computed in the off-line phase, which returns a list of model point pairs with a distance and orientation similar to the scene point pair;
S723. for each such model point pair, calculating the local coordinates using the equation s_i = T_S->L^(-1) · R_x(α) · T_M->L · m_i, where S_r is a fiducial point of the scene, T_S->L is the rigid 3D transform that translates S_r to the origin and rotates its normal onto the positive x-axis, and, for the model point M_r, T_M->L is the rigid 3D transform that translates M_r to the origin and rotates its normal onto the positive x-axis; R_x(α) is the rigid 3D transform that rotates about the x-axis by an angle α; given the local coordinates (M_r, α) with respect to the reference point S_r, this equation maps the point m_i in model space to its corresponding point s_i in scene space; if s_i, m_i, T_S->L and T_M->L are known, the equation can be solved for α; after all scene points have been processed, step S74 selects the samples of the parameter space whose counters have maximum values.
8. The surface matching calculation method for surgical instrument target localization of claim 7, wherein: filtering the obtained poses of the two-dimensional marker target in step S8 comprises the following steps:
S81. defining a neighbor relation between poses: two poses are neighbors if the difference between their rotation parts is less than a fixed threshold and the difference between their translation vectors is less than a fixed length threshold;
S82. assigning a new score to each pose, the new score being the sum of the scores of all its neighbor poses;
S83. sorting the poses by the new scores;
S84. selecting the pose with the best score;
S85. optionally recalculating the selected pose by averaging over its neighbor poses.
9. The surface matching calculation method for surgical instrument target localization of claim 8, wherein: in step S9, pose refinement is performed using the iterative closest point (ICP) method, which minimizes the sum of the distances between points in the surgical instrument scene containing the two-dimensional marker target and the surface of the target.
10. The surface matching calculation method for surgical instrument target localization of claim 9, wherein: scoring the final pose in step S10 takes as input the calculated final pose, the data of the surgical instrument scene containing the two-dimensional marker target, and the two-dimensional marker target data, and outputs one or more values describing the quality of the calculated pose or the consistency between the surgical instrument scene and the two-dimensional marker target at said pose, wherein the quality and accuracy of the final pose depend on the presence and visibility of the two-dimensional marker target in the scene and on the quality of the scene data and the model data.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210184644.5A CN114549294B (en) | 2022-02-28 | 2022-02-28 | Surface matching calculation method for positioning surgical instrument target |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114549294A CN114549294A (en) | 2022-05-27 |
| CN114549294B true CN114549294B (en) | 2024-08-23 |
Family
ID=81679152
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210184644.5A Active CN114549294B (en) | 2022-02-28 | 2022-02-28 | Surface matching calculation method for positioning surgical instrument target |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114549294B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112066879A (en) * | 2020-09-11 | 2020-12-11 | 哈尔滨工业大学 | Air floatation motion simulator pose measuring device and method based on computer vision |
| CN113066126A (en) * | 2021-03-12 | 2021-07-02 | 常州龙源智能机器人科技有限公司 | Locating method of puncture needle point |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2385483B1 (en) * | 2010-05-07 | 2012-11-21 | MVTec Software GmbH | Recognition and pose determination of 3D objects in 3D scenes using geometric point pair descriptors and the generalized Hough Transform |
| US10702226B2 (en) * | 2015-08-06 | 2020-07-07 | Covidien Lp | System and method for local three dimensional volume reconstruction using a standard fluoroscope |
| 2022-02-28 | CN CN202210184644.5A (CN114549294B) | active Active |
Also Published As
| Publication number | Publication date |
|---|---|
| CN114549294A (en) | 2022-05-27 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110946654B (en) | Bone surgery navigation system based on multimode image fusion | |
| US11751951B2 (en) | Surface tracking-based surgical robot system for drilling operation and control method | |
| Grasa et al. | Visual SLAM for handheld monocular endoscope | |
| US6442417B1 (en) | Method and apparatus for transforming view orientations in image-guided surgery | |
| CN107330926A (en) | Non-marked medical figure registration system and method in a kind of art in navigation system | |
| EP3255609B1 (en) | A method of automatically identifying a sequence of marking points in 3d medical image | |
| EP4105887A1 (en) | Technique of generating surgical information from intra-operatively and pre-operatively acquired image data | |
| CN112907642B (en) | Registration and superposition method, system, storage medium and equipment | |
| CN112734776B (en) | Minimally invasive surgical instrument positioning method and system | |
| CN110264504B (en) | Three-dimensional registration method and system for augmented reality | |
| CN112289416B (en) | Method for evaluating guide needle placement accuracy | |
| CN113870331B (en) | Chest CT and X-ray real-time registration algorithm based on deep learning | |
| CN109965979A (en) | A kind of steady Use of Neuronavigation automatic registration method without index point | |
| CN114092480A (en) | Endoscope adjusting device, surgical robot and readable storage medium | |
| EP1769768A1 (en) | Surgical instrument calibration | |
| Westwood | Visual tracking of laparoscopic instruments in standard training environments | |
| CN106236264A (en) | The gastrointestinal procedures air navigation aid of optically-based tracking and images match and system | |
| CN116509426A (en) | Elbow joint rotation central shaft identification method, system, electronic equipment and medium | |
| CN113066126B (en) | Positioning method for penetrating needle point | |
| US12008760B2 (en) | Systems and methods for estimating the movement of a target using a universal deformation model for anatomic tissue | |
| CN114549294B (en) | Surface matching calculation method for positioning surgical instrument target | |
| US20220249174A1 (en) | Surgical navigation system, information processing device and information processing method | |
| CN117679178A (en) | Minimally invasive surgical robot system for traumatic orthopedics department | |
| CN115607287B (en) | Real-time computing methods, devices, surgical robots, and storage media for the femoral head center | |
| CN109410277B (en) | Virtual mark point filtering method and system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||