CN121221269B - Three-dimensional image navigation control method and system driven by two-dimensional region of interest - Google Patents
Three-dimensional image navigation control method and system driven by two-dimensional region of interest
- Publication number
- CN121221269B (application CN202511802802.9A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- region
- coordinate system
- imaging
- pose
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Apparatus For Radiation Diagnosis (AREA)
Abstract
The invention provides a three-dimensional image navigation control method and system driven by a two-dimensional region of interest. The method comprises the steps of acquiring at least two two-dimensional images acquired under different poses; selecting a region of interest on the two-dimensional images; mapping the region of interest into a three-dimensional imaging space to obtain a first region pose of the region of interest in that space; calculating a three-dimensional target region containing the region of interest according to the first region pose; and controlling the movement of the three-dimensional imaging device so that its imaging center coincides with the center of the three-dimensional target region. The method improves the accuracy of three-dimensional imaging, reduces the radiation dose to the patient, avoids wasted exposures caused by inaccurate positioning, shortens the operation time, and greatly improves surgical efficiency. A doctor only needs to select a region of interest on a two-dimensional image and the device is automatically guided to the corresponding position, which greatly improves the intelligence and reliability of the three-dimensional imaging equipment.
Description
Technical Field
The invention relates to the technical field of medical three-dimensional image navigation control, in particular to a three-dimensional image navigation control method and system driven by a two-dimensional region of interest.
Background
In image-guided surgery, scanning by means of an intraoperative three-dimensional imaging device (e.g. a C-arm X-ray machine) is generally required in order to acquire three-dimensional image data of an affected part of a patient. In this process, in order to perform preliminary analysis and localization on a lesion, a doctor often needs to first take a plurality of two-dimensional images, and when determining that further three-dimensional imaging is required, an operator needs to manually adjust the position and angle of the three-dimensional imaging device according to two-dimensional image information and personal experience, so as to attempt to place the lesion area in the center of the three-dimensional imaging field of view of the device.
This traditional mode of operation has obvious drawbacks. First, it depends heavily on the operator's experience and subjective judgment, and the adjustment process is tedious and time-consuming, increasing the overall operation time. Second, for lack of accurate spatial positioning guidance, the device is easily mispositioned, so that the target lesion area may not lie entirely within the three-dimensional scanning field of view or may not be imaged at an optimal angle. Such deviations degrade imaging quality and reduce the clinical diagnostic value of the images, and may even cause the scan to fail and produce invalid "waste films", forcing a repeat scan that not only wastes time but also poses a potential risk to the patient's health.
Existing surgical navigation techniques are widely used to guide the motion and visualization of surgical instruments, but their function has focused mainly on registering and tracking instruments against preoperative images. There is thus no effective solution that directly uses the two-dimensional region of interest (ROI) marked by the doctor on an intraoperative two-dimensional image to automatically and precisely guide the motion of the three-dimensional imaging device itself, so that its imaging center is automatically aligned with the real lesion region in three-dimensional space. Clinical practice therefore needs a three-dimensional imaging guidance method that overcomes the above drawbacks and achieves accurate, automatic, low-radiation-dose imaging.
Thus, the prior art is still to be further developed.
Disclosure of Invention
The invention aims to overcome the technical defects and provide a three-dimensional image navigation control method and system driven by a two-dimensional region of interest, so as to solve the problems in the prior art.
To achieve the above object, according to a first aspect of the present invention, there is provided a three-dimensional image navigation control method driven by a two-dimensional region of interest, including:
S100, acquiring at least two two-dimensional images acquired under different poses, and selecting a region of interest on the two-dimensional images;
s200, mapping an interested region on the two-dimensional image into a three-dimensional imaging space to obtain a first region pose of the interested region in the three-dimensional imaging space, and calculating a three-dimensional target region containing the interested region according to the first region pose;
S300, controlling the motion of the three-dimensional imaging device to enable the imaging center of the three-dimensional imaging device to coincide with the center of the three-dimensional target area.
Specifically, mapping the region of interest on the two-dimensional image into the three-dimensional imaging space to obtain the first region pose of the region of interest in the three-dimensional imaging space includes:
Constructing, in the imaging equipment surface coordinate system, a ray model whose vertex is the center point of the region of interest and whose direction vector points toward the radiation source center, and combining the ray model with the radius information of the region of interest to obtain the first region pose of the region of interest in the three-dimensional imaging space.
Specifically, the method for obtaining the pose of the first region of the region of interest in the three-dimensional imaging space includes:
Converting the coordinate information of the region of interest under the corresponding two-dimensional image coordinate system into the coordinate information under the imaging equipment surface coordinate system;
Generating a circumscribed circle on the imaging equipment surface, wherein the center point of the circumscribed circle is the center point of the region of interest and the radius of the circumscribed circle is taken as the radius of the region of interest; then constructing, in the imaging equipment surface coordinate system, a ray model whose vertex is the center point of the circumscribed circle and whose direction vector points toward the radiation source center; and combining the ray model with the circumscribed-circle radius to obtain the first region pose of the region of interest in the three-dimensional imaging space, R = (P, V, r), where P is the ray vertex coordinate, V is the direction vector, and r is the radius of the circumscribed circle.
Specifically, the calculating, according to the first region pose, a three-dimensional target region including the region of interest includes:
Converting the first region pose from the imaging equipment surface coordinate system to a patient space coordinate system to obtain a second region pose;
and determining a three-dimensional space sphere as a three-dimensional target area of the region of interest based on the second region pose of the region of interest of different two-dimensional images in the patient space coordinate system.
Specifically, the method for converting the pose of the first region from the surface coordinate system of the imaging device to the space coordinate system of the patient to obtain the pose of the second region includes:
Acquiring a first pose transformation relation between a patient space coordinate system and a navigation coordinate system, a second pose transformation relation between an image equipment tracker coordinate system and the navigation coordinate system and a third pose transformation relation between a pre-calibrated image equipment tracker coordinate system and an imaging equipment surface coordinate system in real time through a navigation positioning system;
And converting the first region pose from the imaging equipment surface coordinate system to the patient space coordinate system according to the first pose conversion relation, the second pose conversion relation and the third pose conversion relation to obtain the second region pose.
Specifically, the method for acquiring the first pose transformation relationship between the patient space coordinate system and the navigation coordinate system and the second pose transformation relationship between the image equipment tracker coordinate system and the navigation coordinate system in real time through the navigation positioning system comprises the following steps:
the navigation positioning system receives pose signals sent by a patient tracker fixed on a patient and an image equipment tracker fixed on three-dimensional image equipment through a navigation pose receiving device;
The navigation positioning system calculates the pose signal to obtain a first real-time pose transformation matrix of the patient tracker coordinate system relative to the navigation coordinate system, namely a first pose transformation relation between the patient space coordinate system and the navigation coordinate system, and obtains a second real-time pose transformation matrix of the image equipment tracker coordinate system relative to the navigation coordinate system, namely a second pose transformation relation between the image equipment tracker coordinate system and the navigation coordinate system.
Specifically, if two two-dimensional images acquired under different poses are obtained, the method for determining a three-dimensional sphere as the three-dimensional target region of the region of interest includes:
Calculating the common perpendicular of the rays in the ray models represented by the second region poses of the two regions of interest and extending it outward; the extended common perpendicular intersects the ray model represented by each second region pose at a first intersection point and a second intersection point, the two intersection points form a spatial line segment, this segment is taken as the diameter of a three-dimensional sphere centered at its midpoint, and the resulting sphere is used as the three-dimensional target region.
Specifically, the S300 includes:
Mapping a three-dimensional target area under a patient space coordinate system to a current three-dimensional imaging space coordinate system to obtain a first three-dimensional target area, namely mapping the center of the three-dimensional target area under the patient space coordinate system to a target imaging center under the current three-dimensional imaging space coordinate system;
And under a three-dimensional imaging space coordinate system, generating a control instruction according to the deviation between the target imaging center and the imaging center of the current three-dimensional imaging device, and controlling the three-dimensional imaging device to move so as to enable the imaging center of the three-dimensional imaging device to coincide with the target imaging center.
Specifically, the method for mapping the center-point coordinates of the three-dimensional target area in the patient space coordinate system to the target imaging center in the current three-dimensional imaging space coordinate system comprises the following steps:
utilizing a fourth transformation relation between a pre-calibrated image equipment tracker coordinate system and a three-dimensional imaging space coordinate system;
Calculating a fifth transformation relation between the current three-dimensional imaging space coordinate system and the patient space coordinate system according to the first pose transformation relation, the second pose transformation relation and the fourth pose transformation relation;
and converting the center point coordinate of the three-dimensional target area from the patient space coordinate system to the current three-dimensional imaging space coordinate system by utilizing a fifth transformation relation to obtain a target imaging center.
Specifically, the method for enabling the imaging center of the three-dimensional imaging device to coincide with the target imaging center comprises the following steps:
Calculating the pose deviation between the target imaging center and the current imaging center in the three-dimensional imaging space coordinate system, decomposing the pose deviation onto one or more motion degrees of freedom of the three-dimensional imaging device, and generating control instructions so that the imaging center of the three-dimensional imaging device coincides with the target imaging center.
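For illustration only, the decomposition above could be sketched as follows. This is a minimal, non-authoritative sketch, not the patented implementation: it assumes a translation-only deviation and orthonormal motion axes, and the function and axis names are hypothetical.

```python
def motion_command(current_center, target_center, axes):
    """Project the deviation between the target imaging center and the
    current imaging center onto each available motion axis, yielding a
    per-axis travel command (translation-only sketch)."""
    delta = [t - c for t, c in zip(target_center, current_center)]
    return {name: sum(d * a for d, a in zip(delta, axis))
            for name, axis in axes.items()}

# Hypothetical device with three orthogonal translation axes (mm).
axes = {"lift": (0.0, 0.0, 1.0),
        "longitudinal": (1.0, 0.0, 0.0),
        "lateral": (0.0, 1.0, 0.0)}
cmd = motion_command((0.0, 0.0, 0.0), (10.0, -5.0, 30.0), axes)
```

In a real device the rotational degrees of freedom and kinematic limits would also enter the decomposition; this sketch only shows the projection step.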
According to a second aspect of the present invention, there is provided a two-dimensional region-of-interest driven three-dimensional image navigation control system, comprising:
The image acquisition module is used for acquiring at least two two-dimensional images acquired under different poses and selecting a region of interest on the two-dimensional images;
The space mapping module is used for mapping the region of interest on the two-dimensional image into a three-dimensional imaging space to obtain a first region pose of the region of interest in the three-dimensional imaging space, and calculating a three-dimensional target region containing the region of interest according to the first region pose;
And the motion control module is used for controlling the motion of the three-dimensional image equipment so as to enable the imaging center of the three-dimensional image equipment to coincide with the center of the three-dimensional target area.
Specifically, the three-dimensional imaging device is a mechanical arm X-ray imaging device with a plurality of motion degrees of freedom.
The beneficial effects are that:
The invention provides a three-dimensional image navigation control method and system driven by a two-dimensional region of interest. At least two two-dimensional images acquired under different poses are obtained, a region of interest is selected on the two-dimensional images, the region of interest is mapped into a three-dimensional imaging space to obtain its first region pose there, a three-dimensional target region containing the region of interest is calculated, and the imaging center of the three-dimensional imaging device is made to coincide with the center of the three-dimensional target region. The navigation technology thus locates the region of interest accurately, improving the accuracy of three-dimensional imaging, reducing the radiation dose to the patient, avoiding wasted exposures caused by inaccurate positioning, shortening the operation time, and greatly improving surgical efficiency. The doctor's workflow is also simplified: the region of interest need only be selected on a two-dimensional image and the device is automatically guided to the corresponding position, greatly improving the intelligence, usability and reliability of the three-dimensional imaging equipment.
Drawings
FIG. 1 is a flow chart of a two-dimensional region of interest driven three-dimensional image navigation control method provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of the system components of a two-dimensional region-of-interest driven three-dimensional image navigation control system according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for controlling navigation of a two-dimensional region of interest driven three-dimensional image according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of key components of a navigation imaging device according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a main pose transformation relationship of each entity of a navigation imaging device according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a correspondence relationship between surface coordinates of a three-dimensional image device according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a mapping relationship between a two-dimensional image coordinate system and an imaging device surface coordinate system of a region of interest according to an embodiment of the present invention;
FIG. 8 is a schematic representation of a region of interest in three-dimensional imaging space provided in an embodiment of the present invention;
FIG. 9 is a schematic diagram of a mapping of patient space coordinates to imaging device surface coordinates provided in an embodiment of the present invention;
FIG. 10 is a schematic illustration of a region of interest forming a three-dimensional spatial sphere provided in an embodiment of the present invention;
FIG. 11 is a schematic illustration of a mapping relationship of a patient space coordinate system and a three-dimensional imaging space coordinate system provided in an embodiment of the present invention;
FIG. 12 is a schematic diagram of a correspondence between a three-dimensional imaging space and a rotation angle of a navigation imaging device according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of guiding movement of a navigation imaging device according to an embodiment of the present invention;
wherein, the reference numerals of the above drawings are as follows:
1. radiation source center; 2. image equipment tracker; 3. imaging equipment surface; 4. three-dimensional imaging space; 5. imaging center; 6. patient tracker; 7. navigation pose receiving device; 8. first image pixel point; 9. circumscribed circle; 10. target imaging center.
Detailed Description
In order that those skilled in the art may better understand the technical solution of the present application, it is described below clearly and completely with reference to the accompanying drawings; all other similar embodiments obtained by those skilled in the art from the embodiments of the present application without creative effort shall fall within the scope of protection of the present application. In addition, directional words such as "upper", "lower", "left" and "right" used in the following embodiments refer only to directions in the drawings; they are intended to illustrate, not to limit, the application.
The invention will be further described with reference to the drawings and preferred embodiments.
Example 1
Referring to fig. 1, this embodiment provides a three-dimensional image navigation control method driven by a two-dimensional region of interest, which comprises: acquiring at least two two-dimensional images acquired under different poses; selecting a region of interest on the two-dimensional images; mapping the region of interest into the three-dimensional imaging space 4 to obtain a first region pose of the region of interest in the three-dimensional imaging space 4; calculating a three-dimensional target region containing the region of interest according to the first region pose; and controlling the movement of the three-dimensional imaging device so that its imaging center 5 coincides with the center of the three-dimensional target region. In this technical scheme, the region of interest is accurately located through navigation technology, which improves the accuracy of three-dimensional imaging, reduces the radiation dose borne by the patient, and avoids wasted exposures caused by inaccurate positioning; the doctor only needs to select the region of interest on a two-dimensional image, the device is automatically guided to the corresponding position, and surgical efficiency is further improved.
Referring to fig. 1, the implementation steps of the three-dimensional image navigation control method driven by the two-dimensional region of interest in the present embodiment are as follows:
S100, acquiring at least two two-dimensional images acquired under different poses, and selecting a region of interest on the two-dimensional images. In practical application, the patient can be fluoroscopically imaged at different angles with the three-dimensional imaging device to obtain at least two two-dimensional images, and a doctor or operator can select a region of interest requiring attention, such as lesion tissue, a tumor or a specific organ, on the two-dimensional images by mouse click, touch-screen operation or other interaction.
It should be noted that the navigation imaging device in this embodiment combines a conventional 3D intraoperative imaging device (the three-dimensional imaging device) with a navigation device, and mainly comprises three parts: a host part, a main control part and a navigation pose receiving device 7. The host part performs the main motion and imaging; the main control part controls each part, receives feedback from each part, and displays and manipulates the images; the navigation pose receiving device 7 receives the pose information of each pose sensor. This three-part division describes the functions only; any of the parts may be combined into a single unit, and the navigation pose receiving device 7 may be fixedly installed at some position in the operating room.
As shown in fig. 4, which illustrates the key components of the navigation imaging device in this embodiment, the three-dimensional imaging device may be a robotic X-ray imaging device with multiple degrees of freedom of movement; this embodiment uses a C-arm X-ray imaging device with multiple degrees of freedom of movement. The radiation source center 1 is located above the C-arm; the image equipment tracker 2 and the imaging equipment surface 3 (the device surface that receives the radiation-generated image) are located below the C-arm; the three-dimensional imaging space 4 occupies the middle area of the C-arm, and its center is the imaging center 5 (the rotation center); the patient tracker 6 is located near the patient's lesion and should be near the three-dimensional imaging space 4 when the lesion is scanned. The patient tracker 6 and the image equipment tracker 2 may be any trackers based on optical or electromagnetic principles; the navigation pose receiving device 7 also receives the pose information of the patient tracker 6 and the image equipment tracker 2 in real time; and the imaging center 5, the center of the three-dimensional imaging space 4, can be regarded as the rotation center of an isocentric imaging device or the fitted center of a non-isocentric imaging device.
And S200, mapping the region of interest on the two-dimensional image into the three-dimensional imaging space 4 to obtain a first region pose of the region of interest in the three-dimensional imaging space 4, and calculating a three-dimensional target region containing the region of interest according to the first region pose.
It should be further noted that, in the navigation imaging device of this embodiment, each relevant entity is considered to have its own coordinate system, and a spatial pose transformation relationship exists between the entities. Let T_B_A(n) denote the pose transformation of coordinate system A relative to coordinate system B at the n-th time (or position); n is omitted when the individual times (or positions) need not be distinguished, written simply T_B_A, and (T_B_A)^-1 denotes the inverse of the pose matrix. As shown in fig. 5, the meaning of each transformation matrix is as follows, n representing the n-th time or position:
- T_Nav_Pat(n): the pose transformation of the patient tracker 6 coordinate system relative to the navigation pose receiving device 7 coordinate system, read from the navigation pose receiving device 7 in real time;
- T_Nav_Dev(n): the pose transformation of the image equipment tracker 2 coordinate system relative to the navigation pose receiving device 7 coordinate system, read from the navigation pose receiving device 7 in real time;
- T_3D_Dev: the pose transformation of the image equipment tracker 2 coordinate system relative to the three-dimensional imaging space coordinate system; this matrix, together with the corresponding angle relationship, is obtained by calibration at a specific position each time the image equipment tracker 2 is installed, with the values at other angles obtained by calculation;
- T_Surf_Dev: the pose transformation of the image equipment tracker 2 coordinate system relative to the imaging equipment surface coordinate system; this matrix is calibrated each time the image equipment tracker 2 is installed;
- T_3D_Pat: the pose transformation of the patient tracker 6 coordinate system relative to the three-dimensional imaging space coordinate system;
- T_Surf_Pat: the pose transformation of the patient tracker 6 coordinate system relative to the imaging equipment surface coordinate system.
Specifically, in this embodiment, the method for obtaining the first region pose of the region of interest in the three-dimensional imaging space 4 is to construct, in the imaging equipment surface coordinate system, a ray model whose vertex is the center point of the region of interest and whose direction vector points toward the radiation source center 1, and to combine the ray model with the radius information of the region of interest to obtain the first region pose of the region of interest in the three-dimensional imaging space 4. The specific implementation steps are as follows:
First, convert the coordinate information of the region of interest from the corresponding two-dimensional image coordinate system into the imaging equipment surface coordinate system; then generate a circumscribed circle 9 on the imaging equipment surface 3, the center point of the circumscribed circle 9 being the center point of the region of interest and the radius of the circumscribed circle 9 being taken as the radius of the region of interest; then construct, in the imaging equipment surface coordinate system, a ray model whose vertex is the center point of the circumscribed circle 9 and whose direction vector points toward the radiation source center 1; and combine the ray model with the radius of the circumscribed circle 9 to obtain the first region pose of the region of interest in the three-dimensional imaging space 4, R = (P, V, r), where P is the ray vertex coordinate, V is the direction vector, and r is the radius of the circumscribed circle 9.
In some specific examples, referring to fig. 6-8, the process of calculating the pose of a first region of interest in the three-dimensional imaging space 4 is as follows:
On the imaging equipment surface 3, following the x-axis and y-axis directions of the two-dimensional image pixels, the center of the first image pixel point 8 is taken as the coordinate origin of the x and y axes of the imaging equipment surface coordinate system, and the z-axis direction is determined by the right-hand rule, as shown in fig. 6. An arbitrary region of interest is selected on the two-dimensional image, as shown in the left diagram of fig. 7; the drawn region of interest can have any shape, and the selected pixels are mapped to the region of interest on the imaging equipment surface 3 according to information such as the display region, the spatial pixel resolution and the image rotation;
Further, as shown in the right diagram of fig. 7, a circumscribed circle 9 is generated on the imaged region for the mapped region of interest, where the center point of the circumscribed circle 9 is taken as the center point of the region of interest and the radius of the region of interest is the radius r of the circumscribed circle 9;
Further, at this point the region of interest is represented in space as a cone whose apex is the radiation source center 1. For convenience of analysis, this embodiment instead uses a cylinder as the spatial representation of the region of interest: the region is represented in space by a ray together with the circumscribed-circle radius. In the imaging equipment surface coordinate system, the ray is expressed by a point and a vector, the point being the center of the circumscribed circle:

P = (x_c, y_c, 0);

From the geometric relationship between P and the center of the three-dimensional imaging region, and from the distance between the radiation source center 1 and the imaging equipment surface 3 (ignoring the physical deformation caused by gravity at the various rotation and tilt angles), as shown in fig. 8, the projection of the line from P toward the radiation source center 1 onto each axis of the imaging equipment surface coordinate system can be calculated; normalizing the projections in each axial direction yields the direction vector V.

The overall representation of the region of interest in the three-dimensional imaging space 4 is thus obtained, i.e. the first region pose is expressed as:

R = (P, V, r);
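The construction just described can be sketched in code. This is a minimal, non-authoritative illustration under stated assumptions: the enclosing circle is approximated by the ROI's bounding box (whose half-diagonal circle always contains the region, though it is not the true minimal circumscribed circle), and the function name and parameters are hypothetical.

```python
import math

def roi_ray_model(roi_pixels, pixel_spacing, source_center):
    """Map a 2-D ROI to its first region pose R = (P, V, r) in the
    imaging-equipment surface frame: P = ray vertex (enclosing-circle
    center on the surface, z = 0), V = unit direction toward the
    radiation source center, r = enclosing-circle radius."""
    # Pixels -> surface coordinates (origin at the first pixel center).
    pts = [(u * pixel_spacing, v * pixel_spacing) for u, v in roi_pixels]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    # Enclosing circle via the bounding box: center of the box, radius =
    # half its diagonal (a conservative stand-in for the minimal circle).
    cx, cy = (min(xs) + max(xs)) / 2.0, (min(ys) + max(ys)) / 2.0
    r = math.hypot(max(xs) - min(xs), max(ys) - min(ys)) / 2.0
    p = (cx, cy, 0.0)
    # Unit direction vector from the vertex toward the source center.
    d = tuple(s - q for s, q in zip(source_center, p))
    norm = math.sqrt(sum(c * c for c in d))
    v = tuple(c / norm for c in d)
    return p, v, r

# Square ROI centered under a hypothetical source 1000 mm above the surface.
P, V, r = roi_ray_model([(0, 0), (10, 0), (0, 10), (10, 10)],
                        pixel_spacing=0.5,
                        source_center=(2.5, 2.5, 1000.0))
```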
In this embodiment, according to the first region pose, the method for calculating the three-dimensional target region including the region of interest includes:
The navigation positioning system acquires a first pose transformation relation between a patient space coordinate system and a navigation coordinate system and a second pose transformation relation between an image equipment tracker coordinate system and the navigation coordinate system in real time, and specifically, the navigation positioning system receives pose signals sent by a patient tracker 6 fixed on a patient and an image equipment tracker 2 fixed on three-dimensional image equipment through a navigation pose receiving device 7; the navigation positioning system calculates pose signals to obtain a first real-time pose transformation matrix of the patient tracker coordinate system relative to a navigation coordinate system, namely a first pose transformation relation between a patient space coordinate system and the navigation coordinate system, and obtains a second real-time pose transformation matrix of the image equipment tracker coordinate system relative to the navigation coordinate system, namely a second pose transformation relation between the image equipment tracker coordinate system and the navigation coordinate system;
Further, a third pose transformation relation between the pre-calibrated image equipment tracker coordinate system and the imaging-device surface coordinate system is obtained. The first region pose is transformed from the imaging-device surface coordinate system to the patient space coordinate system according to the first, second, and third pose transformation relations to obtain a second region pose. Based on the second region poses of the regions of interest of the different two-dimensional images in the patient space coordinate system, a three-dimensional space sphere is determined as the three-dimensional target region of the region of interest.
In some preferred embodiments, in the case where two two-dimensional images acquired under different poses are obtained, the method of determining a three-dimensional space sphere as the three-dimensional target region of the region of interest is:
Calculate the common perpendicular intersecting the rays of the ray models represented by the second region poses of the two regions of interest. Extend the common perpendicular outward so that it has a first intersection point and a second intersection point with the ray model represented by each second region pose. The first intersection point and the second intersection point form a space line segment; taking this segment as the diameter, a three-dimensional space sphere centered at the midpoint of the segment is generated and used as the three-dimensional target region.
Further, in some specific examples, referring to figs. 9-10, the three-dimensional target region of the region of interest in the patient space coordinate system is determined as follows:
During surgery, after the patient tracker 6 is installed near the focus of the patient, the focus position is bound to the patient tracker 6 by a fixed spatial relationship, so the patient tracker coordinate system is used to represent the patient space coordinate system, as shown in fig. 9. The transformation from the imaging-device surface coordinate system F to the patient space coordinate system P can then be composed from the tracked poses, namely:

T_PF = (T_CP)⁻¹ · T_CD · T_DF

where T_CP and T_CD are the real-time poses of the patient tracker and the image equipment tracker in the navigation coordinate system C, and T_DF is the pre-calibrated transformation between the image equipment tracker coordinate system D and the imaging-device surface coordinate system F.
Using this pose conversion matrix, the representations of the two regions of interest selected on the two-dimensional images acquired under different poses, expressed in the imaging-device surface coordinate system, are converted into the patient space coordinate system to obtain the second region poses: the ray vertex and direction vector are transformed by T_PF, while the radius is unchanged.
For ease of analysis, the two acquisition positions are denoted as the first position and the second position, respectively. As shown in FIG. 10, a cylinder formed by the ray (with the transformed circumcircle center as vertex and the transformed direction vector as axis) and the ROI radius (in reality a cone, simplified to a cylinder for ease of analysis) represents the region of interest drawn at the n-th position (or instant) in three-dimensional space.
Further, in three-dimensional space there is a shortest segment between two non-parallel, non-intersecting straight lines, called the common perpendicular. When the two rays are not parallel (when they are approximately parallel, two-dimensional images are reselected according to the workflow and the regions of interest are redrawn), the unique common perpendicular intersecting both rays can be obtained; the region between its intersection points is shown as the solid line in the left diagram of fig. 10. Extending the common perpendicular outward at each end by the corresponding cylinder radius gives a first intersection point and a second intersection point on the two cylinders, and these two points form a space line segment, as in the right diagram of fig. 10. Taking this segment as the diameter, a three-dimensional sphere centered at the midpoint of the segment is generated, which is the three-dimensional target region in the patient space coordinate system.
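The common-perpendicular construction can be sketched numerically. The following Python function is an illustrative implementation under the document's simplifying assumptions (rays treated as lines, cylinder radii r1 and r2 extending the segment outward); the function name, API, and tolerance are assumptions:

```python
import numpy as np

def roi_sphere(p1, d1, r1, p2, d2, r2, parallel_eps=1e-6):
    """Build the 3D target sphere from two ROI ray models.

    Finds the common perpendicular of two non-parallel rays, extends it
    outward by each ROI radius, and returns (centre, radius) of the sphere
    whose diameter is the resulting space segment.
    """
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)

    w0 = p1 - p2
    b = d1 @ d2
    denom = 1.0 - b * b              # = |d1 x d2|^2 for unit vectors
    if denom < parallel_eps:         # near-parallel: reselect images per workflow
        raise ValueError("rays nearly parallel; redraw ROIs on new images")
    d, e = d1 @ w0, d2 @ w0
    s = (b * e - d) / denom          # foot of perpendicular on ray 1
    t = (e - b * d) / denom          # foot of perpendicular on ray 2
    q1, q2 = p1 + s * d1, p2 + t * d2

    u = q2 - q1
    n = np.linalg.norm(u)
    # If the rays intersect, fall back to the mutual normal direction.
    u = u / n if n > 0 else np.cross(d1, d2) / np.linalg.norm(np.cross(d1, d2))
    a, bpt = q1 - r1 * u, q2 + r2 * u   # first and second intersection points
    return (a + bpt) / 2.0, np.linalg.norm(bpt - a) / 2.0
```

For example, for the x-axis and a line through (0, 5, 1) along y, the feet of the common perpendicular are the origin and (0, 0, 1), and the radii then pad the segment at both ends.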
S300, controlling the motion of the three-dimensional image equipment to enable the imaging center 5 of the three-dimensional image equipment to coincide with the center of the three-dimensional target area, wherein the implementation steps are as follows:
The three-dimensional target region in the patient space coordinate system is mapped to the current three-dimensional imaging space coordinate system to obtain a first three-dimensional target region; that is, the center-point coordinates of the three-dimensional target region in the patient space coordinate system are mapped to the current three-dimensional imaging space coordinate system to obtain the target imaging center 10. A control instruction is then generated according to the deviation, in the three-dimensional imaging space coordinate system, between the target imaging center 10 and the imaging center 5 of the current three-dimensional imaging device, and the motion of the three-dimensional imaging device is controlled so that its imaging center 5 coincides with the target imaging center 10.
Further, in the present embodiment, the method for obtaining the mapping of the coordinates of the center point of the three-dimensional target area in the patient space coordinate system to the target imaging center 10 in the current three-dimensional imaging space coordinate system includes:
The navigation positioning system (which includes the navigation pose receiving device 7) acquires, in real time, the first pose transformation relation between the patient space coordinate system and the navigation coordinate system and the second pose transformation relation between the image equipment tracker coordinate system and the navigation coordinate system. Using the fourth transformation relation between the pre-calibrated image equipment tracker coordinate system and the three-dimensional imaging space coordinate system, the fifth transformation relation between the current three-dimensional imaging space coordinate system and the patient space coordinate system is calculated from the first, second, and fourth pose transformation relations. The center-point coordinates of the three-dimensional target region are then converted from the patient space coordinate system to the current three-dimensional imaging space coordinate system using the fifth transformation relation, yielding the target imaging center 10.
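The composition of the first, second, and fourth relations into the fifth can be sketched with homogeneous 4x4 matrices. Frame subscripts follow the document's naming (C navigation, D device tracker, P patient, I imaging space); the helper names and matrix convention (T_AB maps frame-B coordinates into frame A) are illustrative assumptions:

```python
import numpy as np

def make_T(R, t):
    """4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def target_imaging_center(T_ID, T_CD, T_CP, center_P):
    """Map the target-sphere centre from patient frame P to imaging frame I.

    T_ID: imaging-space pose of the device tracker (calibration + current
          motion angles); T_CD, T_CP: real-time tracker poses in the
          navigation frame C, fed back by the navigation system.
    """
    T_IP = T_ID @ np.linalg.inv(T_CD) @ T_CP   # the "fifth" transformation
    return (T_IP @ np.append(np.asarray(center_P, float), 1.0))[:3]
```

With pure translations this is easy to verify by hand: the device-tracker offset cancels out of the chain and only the relative patient offset remains.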
In some specific examples, referring to fig. 11, the calculation of the mapping of the center point of the three-dimensional target region in the patient space coordinate system to the target imaging center 10 in the current three-dimensional imaging space coordinate system is as follows:
The spatial relationship between the three-dimensional imaging space 4 and the patient space is analyzed. The calculation uses the transformation of the image equipment tracker coordinate system D relative to the three-dimensional imaging space coordinate system I; since this transformation varies with the motion angles, it is written as a function of the current rotation and tilt angles, T_ID(α, β). As shown in FIG. 11, the transformation of the patient space coordinate system P (the patient tracker coordinate system) relative to the three-dimensional imaging space coordinate system I satisfies:

T_IP = T_ID(α, β) · T_DC · T_CP

where T_DC is the inverse matrix of T_CD, representing the pose transformation of the navigation coordinate system of the navigation pose receiving device 7 relative to the image equipment tracker coordinate system. Therefore T_IP can be expressed as:

T_IP = T_ID(α, β) · (T_CD)⁻¹ · T_CP
Assume that the pose relation between the three-dimensional imaging space coordinate system I and the image equipment tracker coordinate system D is calibrated at a reference position (or time), denoted the 0 position, with known rotation angle α₀ and tilt angle β₀ of the navigation imaging device; calibration at this position yields T_ID(α₀, β₀).
Further, as shown in FIG. 12, when the rotation angle is α and the tilt angle is β, the pose of the navigation imaging device relative to the imaging center 5 changes accordingly. The change from (α₀, β₀) to (α, β) can be regarded as a relative rotation about the imaging center 5 through the angle differences, represented here by a pose matrix R(Δα, Δβ).
The above calculation assumes an idealized isocentric imaging device; if non-isocentric conditions are considered, correction factors such as position offset and deformation need to be multiplied into the chain. For convenience of presentation, this embodiment treats the device as isocentric, so that at each instant the following equation holds:

T_ID(α, β) = R(Δα, Δβ) · T_ID(α₀, β₀)

from which it can be deduced that:

T_IP = R(Δα, Δβ) · T_ID(α₀, β₀) · (T_CD)⁻¹ · T_CP
In the above equation, every quantity on the right side is a calibrated, calculated, or directly measured quantity, so T_IP can be computed. Taking the sphere center of the three-dimensional target region as the sphere center of the target imaging region, three-dimensional imaging of the region of interest is performed once the center of the three-dimensional imaging region coincides with the sphere center of the three-dimensional target region.
Further, to facilitate motion control and calculation, the coordinates of the target imaging region are converted into the current three-dimensional imaging space coordinate system, so that the position of the target imaging center 10 in that coordinate system is readily known. As shown in fig. 13, the pose of the target imaging center 10 in the current three-dimensional imaging space coordinate system is obtained as:

P_I = T_IP · P_P

where P_P is the sphere-center coordinate of the three-dimensional target region in the patient space coordinate system.
In this embodiment, the method for making the imaging center 5 of the three-dimensional imaging device coincide with the target imaging center 10 comprises: calculating the pose deviation between the target imaging center 10 and the current imaging center 5 in the three-dimensional imaging space coordinate system; decomposing the pose deviation onto one or more motion degrees of freedom of the three-dimensional imaging device; and generating control instructions so that the imaging center 5 of the three-dimensional imaging device coincides with the target imaging center 10.
It will be appreciated that, as shown in fig. 13, the motion control of the navigation imaging device has several degrees of freedom: degrees of freedom B and E allow operation at any angle and position relative to the ground, degree of freedom A controls the up-and-down movement of the imaging center 5, degrees of freedom α and β control the C-arm's rotation and tilt, and degree of freedom H assists horizontal movement. For convenience in analyzing motion in the rotation and tilt directions, the motion angles are denoted α and β, respectively.
In this embodiment, moving the imaging center 5 of the three-dimensional imaging device into coincidence with the target imaging center 10 is a real-time dynamic control process: the position of the target imaging center 10 in the three-dimensional imaging space coordinate system changes as the device moves, so it must be recomputed continuously in real time. The pose deviation between the target imaging center 10 and the current imaging center 5 is decomposed onto the A, B, and E motion axes as above, so that the device continuously approaches the target imaging center 10. When the distance between the current imaging center 5 and the target imaging center 10 is smaller than the allowable error, the motion is considered complete and the three-dimensional imaging scan can be performed.
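The approach-and-check cycle described here can be sketched as a simple loop. The callback structure, damping factor, and iteration cap below are assumptions for illustration, not the patent's actual controller:

```python
import numpy as np

def drive_to_target(get_target_in_I, move_axes, tol=0.5, max_iter=1000):
    """Closed-loop positioning sketch: repeatedly re-evaluate the target
    imaging centre in the *current* imaging frame I and command axis
    motions until the residual deviation is below the allowed error.

    get_target_in_I(): target centre in the current frame I; it changes as
        the device moves, so it is recomputed every cycle.
    move_axes(delta): commands the translational degrees of freedom
        (e.g. axes A/B/E of the C-arm) by the given XYZ displacement.
    """
    for _ in range(max_iter):
        # Deviation = target position minus imaging centre (origin of I).
        deviation = np.asarray(get_target_in_I(), float)
        if np.linalg.norm(deviation) < tol:
            return True            # centres coincide: 3D scan may start
        move_axes(0.5 * deviation)  # damped step toward the target
    return False
```

In a simulation where `move_axes` simply translates the device, the deviation halves each cycle and the loop terminates once it drops below `tol`.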
It should be noted that this embodiment provides a three-dimensional image navigation control method driven by a two-dimensional region of interest. By acquiring at least two two-dimensional images acquired under different poses, selecting regions of interest on the two-dimensional images, mapping them into the three-dimensional imaging space to obtain their first region poses, calculating a three-dimensional target region containing the region of interest, and making the imaging center of the three-dimensional image device coincide with the center of the three-dimensional target region, the three-dimensional image device is automatically positioned to the corresponding three-dimensional spatial location based on the region of interest selected on the two-dimensional images. This improves the accuracy and efficiency of three-dimensional imaging and reduces the doctor's workload of manually adjusting the image device: the doctor only needs to select the region of interest on a two-dimensional image, and the device is automatically guided to move to the corresponding position. The intelligence, usability, and reliability of the three-dimensional image device are thereby greatly improved, which is particularly suitable for medical image-guided surgery and diagnostic processes requiring accurate positioning.
Referring to fig. 3, a specific example is used to describe the implementation process of the two-dimensional region-of-interest driven three-dimensional image navigation control according to the present invention.
Step 1, setting preconditions
Before the navigation control flow is started, the initial setting and calibration of the system must be completed to ensure the accuracy of subsequent calculation. The preconditions include: an optical or electromagnetic tracker is firmly installed at a designated position of the three-dimensional image device (such as a C-arm), namely the image equipment tracker 2, and another tracker is installed on a bony structure or a fixator near the focus of the patient, namely the patient tracker 6.
Calibrate the system to obtain the key fixed transformation relations: using a dedicated calibration tool, calibrate the transformation T_ID between the image equipment tracker coordinate system D and the three-dimensional imaging space coordinate system I (this calibration is typically performed with the C-arm at a particular rotation angle α₀ and tilt angle β₀), and calibrate the transformation T_DF between the image equipment tracker coordinate system D and the imaging-device surface coordinate system F (such as the surface of a flat panel detector). Ensure that the navigation coordinate system C of the navigation positioning system can continuously and stably capture the signals of the patient tracker 6 and the image equipment tracker 2.
Step 2, 2D mapping and information recording
Around the focus requiring three-dimensional imaging, the doctor shoots two-dimensional X-ray images from at least two different angles. Each time a two-dimensional image is shot, the system automatically records several key data for that moment (position):
(1) Image data, namely storing the two-dimensional image;
(2) Pose data: the navigation positioning system records and stores, in real time, the pose transformation matrix T_CP of the patient tracker coordinate system P relative to the navigation coordinate system C at that moment, and the pose transformation matrix T_CD of the image equipment tracker coordinate system D relative to the navigation coordinate system C.
(3) Motion angle data: the current motion angles of the C-arm, mainly the rotation angle α and the tilt angle β, are recorded. This process may be repeated multiple times to obtain multiple sets of images at different angles with their corresponding data, giving the doctor more material to select from subsequently.
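The per-exposure record in items (1)-(3) can be sketched as a simple data structure. The content follows the document (image, two tracker poses, C-arm angles); the class and field names are illustrative assumptions:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ShotRecord:
    """Data the system records automatically for each 2D X-ray exposure."""
    image: np.ndarray        # (1) the stored two-dimensional image
    T_CP: np.ndarray         # (2) patient tracker pose in navigation frame C
    T_CD: np.ndarray         # (2) device tracker pose in navigation frame C
    rotation_deg: float      # (3) C-arm rotation angle at exposure time
    tilt_deg: float          # (3) C-arm tilt angle at exposure time
```

Keeping the poses and angles alongside the image is what later allows each ROI to be mapped back through the exact geometry in effect when that image was shot.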
Step 3, image selection, region of interest (ROI) rendering and three-dimensional target region generation
The doctor selects two (or more) images from all the two-dimensional images shot in step 2 and, through a human-computer interaction interface (such as a touch screen), draws a closed area on each selected image; this area is the region of interest requiring focused three-dimensional imaging.
After the system receives the ROI information, the following calculations are performed:
(1) Mapping to the imaging equipment surface 3: the system maps the pixel coordinates of the ROI drawn by the user on the two-dimensional image to physical coordinates in the imaging-device surface coordinate system F at the time the image was shot, according to the image's pixel information, physical size, and calibration parameters, and calculates the minimum circumcircle 9 of each ROI to obtain its center coordinates and radius;
(2) Generating the ray model: according to the X-ray projection geometry, a ray is generated connecting the circumcircle center and the radiation source center 1, with the direction vector pointing toward the source; the ROI is thereby characterized as a ray-cylinder model in the imaging-device surface coordinate system;
(3) Mapping to patient space: the system retrieves the pose data T_CP and T_CD recorded when the image was taken and, using the transformation formula T_PF = (T_CP)⁻¹ · T_CD · T_DF, converts the ray-cylinder model from the imaging-device surface coordinate system F to the patient tracker coordinate system P;
(4) Geometric solution: the system performs a geometric operation on the two ray-cylinder models obtained in patient space from the two images. It calculates the common perpendicular intersecting the two rays, finds the intersection points of the common perpendicular with the two rays, extends the segment outward at each end by the respective model radii r1 and r2 to form a new space line segment, and finally generates the three-dimensional target region (ROI sphere) in patient space with the midpoint of this segment as the sphere center and half the segment length as the radius.
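Item (1)'s pixel-to-physical mapping can be sketched as below. Note that the centroid-based enclosing circle used here is a simple stand-in for the minimum circumcircle named in the text (it encloses all points but is not guaranteed minimal), and the parameter names are assumptions:

```python
import numpy as np

def roi_to_physical(pixel_pts, pixel_spacing, origin=(0.0, 0.0)):
    """Convert ROI contour pixel coordinates to physical coordinates (mm)
    on the imaging-device surface, then fit an enclosing circle.

    pixel_spacing: (sx, sy) mm-per-pixel from the image calibration.
    Returns (centre_xy_mm, radius_mm).
    """
    # Scale pixel coordinates by the detector's physical pixel pitch.
    pts = (np.asarray(pixel_pts, float) * np.asarray(pixel_spacing, float)
           + np.asarray(origin, float))
    centre = pts.mean(axis=0)                 # centroid of the contour
    radius = float(np.linalg.norm(pts - centre, axis=1).max())
    return centre, radius
```

A production system would use an exact minimum-enclosing-circle routine (e.g. Welzl's algorithm); the centre and radius then feed the ray model of item (2).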
Step 4, target 3D circular scanning position control
The system starts an automatic motion control flow to guide the C-shaped arm to move to a three-dimensional target area:
Real-time data acquisition: as the system drives the C-arm, the navigation system continuously feeds back the current pose matrices T_CP and T_CD in real time;
Real-time coordinate conversion: from the current real-time C-arm angles (α, β), the system calculates the current T_ID(α, β) and, combined with the calibration matrices, computes by the formula T_IP = T_ID(α, β) · (T_CD)⁻¹ · T_CP the transformation from the patient coordinate system P to the current imaging space coordinate system I at the current moment;
Calculating the target deviation: using T_IP, the sphere center of the three-dimensional target region is converted into the current imaging space coordinate system to obtain the target imaging center 10, and the deviation vector between it and the current imaging center 5 (i.e., the origin of the imaging space coordinate system) is calculated;
Closed-loop motion control: the system decomposes the deviation vector into displacements on each motion degree of freedom of the C-arm (such as C-arm translation, C-arm rotation, and C-arm tilt) and generates corresponding control instructions to drive the motors. The system continuously repeats step 4, calculating the new deviation in real time and controlling the motion, until the deviation is smaller than a preset threshold, at which point the imaging center 5 is considered coincident with the sphere center of the ROI;
Executing the 3D scan: after the system judges that the target position has been reached, it prompts the doctor and automatically triggers the C-arm's three-dimensional circular scanning program, accurately acquiring three-dimensional image data of the region of interest.
Example two
Referring to fig. 2, the present embodiment provides a three-dimensional image navigation control system driven by a two-dimensional region of interest, which is configured to implement a three-dimensional image navigation control method driven by the two-dimensional region of interest, and includes an image acquisition module, a spatial mapping module, and a motion control module.
The overall structure of the system corresponds to the method described in the first embodiment and performs the same technical functions. The image acquisition module is used for acquiring at least two two-dimensional images acquired under different poses and selecting a region of interest on the two-dimensional images. The space mapping module is used for mapping the region of interest on the two-dimensional image into the three-dimensional imaging space 4 to obtain a first region pose of the region of interest in the three-dimensional imaging space 4, and for calculating a three-dimensional target region containing the region of interest according to the first region pose. The motion control module is used for controlling the motion of the three-dimensional image device so that the imaging center 5 of the three-dimensional image device coincides with the center of the three-dimensional target region.
The motion control module comprises a pose acquisition unit, a real-time coordinate conversion unit and a control instruction generation unit. The pose acquisition unit acquires a first pose transformation relation between a patient space coordinate system and a navigation coordinate system and a second pose transformation relation between an image equipment tracker coordinate system and the navigation coordinate system in real time through the navigation positioning system, and acquires a fourth transformation relation between a pre-calibrated image equipment tracker coordinate system and a three-dimensional imaging space coordinate system. The real-time coordinate conversion unit is configured to calculate a fifth transformation relationship between the current three-dimensional imaging spatial coordinate system and the patient spatial coordinate system according to the first pose transformation relationship, the second pose transformation relationship, and the fourth pose transformation relationship, and convert the center point coordinate of the three-dimensional target area from the patient spatial coordinate system to the current three-dimensional imaging spatial coordinate system by using the fifth transformation relationship, so as to obtain the target imaging center 10. The control instruction generating unit calculates pose deviation of the target imaging center 10 and the current imaging center 5 under a three-dimensional imaging space coordinate system, decomposes the pose deviation into one or more motion degrees of freedom of the three-dimensional imaging device, and generates a control instruction to enable the imaging center 5 of the three-dimensional imaging device to coincide with the target imaging center 10.
In a preferred embodiment, the three-dimensional image device is a robotic X-ray imaging device having multiple degrees of freedom of motion, such as a C-arm, G-arm, or O-arm device. Such an X-ray imaging device typically has multiple degrees of freedom including rotation, translation, and tilt, which enable it to image the patient from different angles and help the doctor obtain more comprehensive image information. After the control instruction generating unit calculates the pose deviation, the deviation can be decomposed into motion instructions for the various degrees of freedom of the mechanical arm, thereby achieving accurate positioning.
It should be noted that this embodiment provides a three-dimensional image navigation control system driven by a two-dimensional region of interest. By mapping the region of interest on the two-dimensional image to three-dimensional space and controlling the automatic positioning of the image device, the system greatly improves the accuracy and efficiency of medical image-guided surgery and diagnosis and reduces the doctor's workload of manually adjusting the image device. Especially in medical scenarios requiring accurate positioning, such as minimally invasive surgery and radiotherapy, the doctor only needs to select the region of interest on a two-dimensional image, and the device is automatically guided to move to the corresponding position.
In a preferred embodiment, the present application also provides an electronic device, including:
a processor and a memory, the memory having stored thereon computer readable instructions that, when executed by the processor, implement the two-dimensional region-of-interest driven three-dimensional image navigation control method. The computer device may broadly be a server, a terminal, or any other electronic device having the necessary computing and/or processing capabilities. In one embodiment, the computer device may include a processor, a memory, a network interface, and a communication interface connected by a system bus. The processor of the computer device may be used to provide the necessary computing, processing and/or control capabilities. The memory of the computer device may include a non-volatile storage medium and an internal memory. The non-volatile storage medium may have an operating system, computer programs, etc. stored therein or thereon. The internal memory may provide an environment for the operation of the operating system and computer programs in the non-volatile storage medium. The network interface and communication interface of the computer device may be used to connect and communicate with external devices via a network. The computer program, when executed by the processor, performs the steps of the method of the invention.
The present invention may be implemented as a computer readable storage medium having stored thereon a computer program which, when executed by a processor, causes steps of a method of an embodiment of the present invention to be performed. In one embodiment, the computer program is distributed over a plurality of computer devices or processors coupled by a network such that the computer program is stored, accessed, and executed by one or more computer devices or processors in a distributed fashion. A single method step/operation, or two or more method steps/operations, may be performed by a single computer device or processor, or by two or more computer devices or processors. One or more method steps/operations may be performed by one or more computer devices or processors, and one or more other method steps/operations may be performed by one or more other computer devices or processors. One or more computer devices or processors may perform a single method step/operation or two or more method steps/operations.
The invention provides a three-dimensional image navigation control method and system driven by a two-dimensional region of interest. At least two two-dimensional images acquired under different poses are acquired; regions of interest are selected on the two-dimensional images and mapped into the three-dimensional imaging space to obtain their first region poses; a three-dimensional target region containing the region of interest is calculated; and the imaging center of the three-dimensional image device is made to coincide with the center of the three-dimensional target region. The region of interest can thus be accurately positioned through navigation technology, which improves the accuracy of three-dimensional imaging, reduces the radiation dose to the patient, avoids rejected images caused by inaccurate shooting positions, shortens the operation time, and greatly improves surgical efficiency. The doctor's workflow is simplified: the region of interest need only be marked on a two-dimensional image, and the device is automatically guided to move to the corresponding position, greatly improving the intelligence, usability, and reliability of the three-dimensional image device.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical features described above may be arbitrarily combined. Although not all possible combinations of features are described, any combination of features should be considered to be covered by the description provided that such combinations are not inconsistent.
The above-described embodiments of the present invention do not limit the scope of the present invention. Any other corresponding changes and modifications made in accordance with the technical idea of the present invention shall be included in the scope of the claims of the present invention.
Claims (11)
1. A three-dimensional image navigation control method driven by a two-dimensional region of interest is characterized by comprising the following steps:
S100, acquiring at least two two-dimensional images acquired under different poses, and selecting a region of interest on the two-dimensional images;
S200, mapping the region of interest on the two-dimensional images into a three-dimensional imaging space (4), obtaining a first region pose of the region of interest in the three-dimensional imaging space (4), and calculating a three-dimensional target region containing the region of interest according to the first region pose;
S300, controlling the motion of the three-dimensional imaging equipment to enable an imaging center (5) of the three-dimensional imaging equipment to coincide with the center of the three-dimensional target region;
Mapping the region of interest on the two-dimensional image into a three-dimensional imaging space (4) to obtain a first region pose of the region of interest in the three-dimensional imaging space (4), comprising:
And constructing a ray model taking the central point of the region of interest as a vertex and the direction pointing to the center (1) of the radioactive source as a direction vector under the surface coordinate system of the imaging equipment, and combining the ray model with the radius information of the region of interest to obtain the pose of the first region of the region of interest in the three-dimensional imaging space (4).
2. The two-dimensional region-of-interest driven three-dimensional image navigation control method according to claim 1, wherein the method of obtaining a first region pose of the region of interest in a three-dimensional imaging space (4) comprises:
Converting the coordinate information of the region of interest under the corresponding two-dimensional image coordinate system into the coordinate information under the imaging equipment surface coordinate system;
Generating a circumcircle (9) on the surface (3) of the imaging device from the converted region of interest, wherein the center point of the circumcircle (9) is the center point of the region of interest and the radius of the region of interest is the radius of the circumcircle (9); then constructing, in the imaging-device surface coordinate system, a ray model taking the center point of the circumcircle (9) as vertex and the direction pointing to the radiation source center (1) as direction vector; and combining the ray model and the radius of the circumcircle (9) to obtain the first region pose of the region of interest in the three-dimensional imaging space (4), the pose comprising the ray vertex coordinates, the direction vector, and the radius of the circumcircle (9).
3. The method according to claim 1, wherein calculating a three-dimensional target region including the region of interest according to the first region pose comprises:
Converting the first region pose from the imaging equipment surface coordinate system to a patient space coordinate system to obtain a second region pose;
and determining a three-dimensional space sphere as a three-dimensional target area of the region of interest based on the second region pose of the region of interest of different two-dimensional images in the patient space coordinate system.
4. The two-dimensional region-of-interest driven three-dimensional image navigation control method of claim 3, wherein the method of converting the first region pose from an imaging device surface coordinate system to a patient space coordinate system to obtain a second region pose comprises:
Acquiring a first pose transformation relation between a patient space coordinate system and a navigation coordinate system, a second pose transformation relation between an image equipment tracker coordinate system and the navigation coordinate system and a third pose transformation relation between a pre-calibrated image equipment tracker coordinate system and an imaging equipment surface coordinate system in real time through a navigation positioning system;
And converting the first region pose from the imaging equipment surface coordinate system to the patient space coordinate system according to the first pose conversion relation, the second pose conversion relation and the third pose conversion relation to obtain the second region pose.
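The chaining of the first, second, and third pose transformation relations can be sketched as follows, under an assumed `T_a_b` convention (a 4x4 homogeneous matrix mapping frame-b coordinates into frame a; the function name and convention are illustrative, not taken from the claim):

```python
import numpy as np

def surface_to_patient(T_nav_patient: np.ndarray,
                       T_nav_tracker: np.ndarray,
                       T_tracker_surface: np.ndarray,
                       p_surface: np.ndarray) -> np.ndarray:
    """Map a point from the imaging-equipment surface frame to the patient frame.

    surface -> tracker -> navigation -> patient:
        p_patient = inv(T_nav_patient) @ T_nav_tracker @ T_tracker_surface @ p_surface
    """
    T_patient_surface = (np.linalg.inv(T_nav_patient)
                         @ T_nav_tracker
                         @ T_tracker_surface)
    p = T_patient_surface @ np.append(np.asarray(p_surface, float), 1.0)
    return p[:3]
```

Applying the same composed matrix to the ray vertex and (rotation part only) to the direction vector converts the whole first region pose into the second region pose.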
5. The method for two-dimensional region-of-interest driven three-dimensional image navigation control of claim 4, wherein the method for acquiring a first pose transformation relationship between a patient space coordinate system and a navigation coordinate system and a second pose transformation relationship between an image device tracker coordinate system and a navigation coordinate system in real time by a navigation positioning system comprises:
The navigation positioning system receives pose signals sent by a patient tracker (6) fixed on a patient and an image equipment tracker (2) fixed on three-dimensional image equipment through a navigation pose receiving device (7);
The navigation positioning system calculates the pose signal to obtain a first real-time pose transformation matrix of the patient space coordinate system relative to the navigation coordinate system, namely a first pose transformation relation between the patient space coordinate system and the navigation coordinate system, and obtains a second real-time pose transformation matrix of the image equipment tracker coordinate system relative to the navigation coordinate system, namely a second pose transformation relation between the image equipment tracker coordinate system and the navigation coordinate system.
6. The method for controlling three-dimensional image navigation driven by a two-dimensional region of interest according to claim 3, wherein, when two two-dimensional images acquired under different poses are used, the method of determining a three-dimensional space sphere as the three-dimensional target region of the region of interest comprises:
calculating the common perpendicular of the rays in the ray models represented by the second region poses of the two two-dimensional image regions of interest, and extending the common perpendicular outwards; the common perpendicular has a first intersection point and a second intersection point with the ray models represented by the second region poses, the first intersection point and the second intersection point form a space line segment, the space line segment is taken as the diameter of a three-dimensional space sphere, a three-dimensional space sphere with the midpoint of the space line segment as its center is generated, and the three-dimensional space sphere is taken as the three-dimensional target region.
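This construction reduces to the classic closest-points problem for two skew lines (the feet of the common perpendicular). A geometric sketch, assuming the two rays are not parallel and omitting the claimed outward extension of the segment:

```python
import numpy as np

def sphere_from_two_rays(p1, d1, p2, d2):
    """Feet of the common perpendicular of rays p1 + t*d1 and p2 + s*d2,
    and the sphere whose diameter is the segment between them."""
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b            # zero iff the rays are parallel
    t = (b * e - c * d) / denom      # parameter of the foot on ray 1
    s = (a * e - b * d) / denom      # parameter of the foot on ray 2
    q1 = p1 + t * d1                 # first intersection point
    q2 = p2 + s * d2                 # second intersection point
    center = (q1 + q2) / 2.0         # midpoint of the space line segment
    radius = np.linalg.norm(q1 - q2) / 2.0
    return center, radius
```

For two exactly intersecting rays the segment degenerates to a point, which is why the claim extends the segment outwards to obtain a usable target region.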
7. The method of two-dimensional region-of-interest driven three-dimensional image navigation control of claim 4, wherein S300 comprises:
Mapping the three-dimensional target area under the patient space coordinate system to the current three-dimensional imaging space coordinate system to obtain a first three-dimensional target area, namely, obtaining a target imaging center (10) by mapping the center of the three-dimensional target area from the patient space coordinate system to the current three-dimensional imaging space coordinate system;
Under a three-dimensional imaging space coordinate system, generating a control instruction according to the deviation between the target imaging center (10) and the imaging center (5) of the current three-dimensional imaging device, and controlling the three-dimensional imaging device to move so as to enable the imaging center (5) of the three-dimensional imaging device to coincide with the target imaging center (10).
8. The two-dimensional region-of-interest driven three-dimensional image navigation control method according to claim 7, wherein the method of mapping the center point coordinate of the three-dimensional target region from the patient space coordinate system to the target imaging center (10) in the current three-dimensional imaging space coordinate system comprises:
utilizing a fourth pose transformation relation between a pre-calibrated image equipment tracker coordinate system and a three-dimensional imaging space coordinate system;
Calculating a fifth transformation relation between the current three-dimensional imaging space coordinate system and the patient space coordinate system according to the first pose transformation relation, the second pose transformation relation and the fourth pose transformation relation;
and converting the central point coordinate of the three-dimensional target area from the patient space coordinate system to the current three-dimensional imaging space coordinate system by utilizing a fifth transformation relation to obtain a target imaging center (10).
9. The two-dimensional region-of-interest driven three-dimensional image navigation control method according to claim 7, characterized in that the method of making the imaging center (5) of the three-dimensional imaging device coincide with the target imaging center (10) comprises:
And calculating pose deviation of the target imaging center (10) and the current imaging center (5) under a three-dimensional imaging space coordinate system, decomposing the pose deviation into one or more degrees of freedom of motion of the three-dimensional imaging equipment, and generating a control instruction to enable the imaging center (5) of the three-dimensional imaging equipment to coincide with the target imaging center (10).
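A minimal sketch of this decomposition, assuming purely translational degrees of freedom whose axis directions are known in the three-dimensional imaging space coordinate system (the function name and axis representation are illustrative, not from the claim):

```python
import numpy as np

def centering_moves(target_center, current_center, axes) -> np.ndarray:
    """Decompose the deviation between the target imaging center (10) and
    the current imaging center (5) into signed moves along each device axis.

    `axes` holds the device's translational motion axes as rows (unit
    vectors); for an orthogonal axis set the projection below recovers the
    exact move on each degree of freedom.
    """
    deviation = np.asarray(target_center, float) - np.asarray(current_center, float)
    return np.asarray(axes, float) @ deviation  # one signed move per axis
```

Each returned component would feed one motion-control instruction; rotational degrees of freedom would require decomposing the full pose deviation, not just the translation.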
10. A two-dimensional region-of-interest driven three-dimensional image navigation control system for implementing the two-dimensional region-of-interest driven three-dimensional image navigation control method according to any one of claims 1 to 9, comprising:
The image acquisition module is used for acquiring at least two two-dimensional images acquired under different poses and selecting a region of interest on the two-dimensional images;
The space mapping module is used for mapping the region of interest on the two-dimensional image into a three-dimensional imaging space (4), obtaining a first region pose of the region of interest in the three-dimensional imaging space (4), and calculating a three-dimensional target region containing the region of interest according to the first region pose;
and the motion control module is used for controlling the motion of the three-dimensional image equipment so as to enable an imaging center (5) of the three-dimensional image equipment to coincide with the center of the three-dimensional target area.
11. The two-dimensional region-of-interest driven three-dimensional image navigation control system of claim 10, wherein the three-dimensional image equipment is a robotic X-ray imaging device having multiple degrees of freedom of motion.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202511802802.9A CN121221269B (en) | 2025-12-03 | 2025-12-03 | Three-dimensional image navigation control method and system driven by two-dimensional region of interest |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN121221269A CN121221269A (en) | 2025-12-30 |
| CN121221269B true CN121221269B (en) | 2026-01-27 |
Family
ID=98151333
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202511802802.9A Active CN121221269B (en) | 2025-12-03 | 2025-12-03 | Three-dimensional image navigation control method and system driven by two-dimensional region of interest |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN121221269B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103961130A (en) * | 2006-09-25 | 2014-08-06 | 马佐尔机器人有限公司 | Method for adapting C-arm system to provide three-dimensional imaging information |
| CN110120095A (en) * | 2015-08-06 | 2019-08-13 | 柯惠有限合伙公司 | System and method for using the partial 3 d volumetric reconstruction of standard fluorescence mirror |
Family Cites Families (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR100800554B1 (en) * | 2007-07-03 | 2008-02-04 | (주)지에스엠솔루션 | 3D modeling method using laser scanner and camera image information in mobile photogrammetry system |
| CN109069217B (en) * | 2016-02-12 | 2022-05-06 | 直观外科手术操作公司 | System and method for pose estimation and calibration of fluoroscopic imaging systems in image-guided surgery |
| KR101988531B1 (en) * | 2017-07-04 | 2019-09-30 | 경희대학교 산학협력단 | Navigation system for liver disease using augmented reality technology and method for organ image display |
| WO2021012142A1 (en) * | 2019-07-22 | 2021-01-28 | 京东方科技集团股份有限公司 | Surgical robot system and control method therefor |
| CN110706336A (en) * | 2019-09-29 | 2020-01-17 | 上海昊骇信息科技有限公司 | Three-dimensional reconstruction method and system based on medical image data |
| CN212913213U (en) * | 2020-07-13 | 2021-04-09 | 上海卓昕医疗科技有限公司 | Three-dimensional imaging device |
| CN116549108A (en) * | 2022-01-28 | 2023-08-08 | 北京天智航医疗科技股份有限公司 | Method, device, equipment and storage medium for determining target point position based on image |
| CN119854640B (en) * | 2024-12-30 | 2025-09-26 | 北京航空航天大学 | A spatial pose pointing positioning method for surgical field video acquisition |
Also Published As
| Publication number | Publication date |
|---|---|
| CN121221269A (en) | 2025-12-30 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20220313190A1 (en) | System and method for pose estimation of an imaging device and for determining the location of a medical device with respect to a target | |
| US20220022979A1 (en) | System And Method For Registration Between Coordinate Systems And Navigation | |
| JP4469423B2 (en) | Stereotaxic treatment apparatus and method | |
| JP5934230B2 (en) | Method and apparatus for treating a partial range of movement of a target | |
| US8000445B2 (en) | Rotational X-ray scan planning system | |
| AU2023226004A1 (en) | Three-dimensional reconstruction of an instrument and procedure site | |
| CN113081265B (en) | Surgical navigation space registration method and device and surgical navigation system | |
| CN116077155B (en) | Surgical navigation method based on optical tracking equipment and mechanical arm and related device | |
| CN1925793A (en) | System for guiding a medical device inside a patient | |
| JP2009022754A (en) | Method for correcting registration of radiography images | |
| CN115222801A (en) | Method and device for positioning through X-ray image, X-ray machine and readable storage medium | |
| CN112869856B (en) | Two-dimensional image guided intramedullary needle distal locking robot system and locking method thereof | |
| US20200222122A1 (en) | System and Method for Registration Between Coordinate Systems and Navigation | |
| JP2024524800A (en) | A robot equipped with an ultrasound probe for time-real-time guidance in percutaneous interventions | |
| CN110267594A (en) | Isocenter in C-arm Computed Tomography | |
| Li et al. | Robotic CBCT meets robotic ultrasound | |
| CN121221269B (en) | Three-dimensional image navigation control method and system driven by two-dimensional region of interest | |
| CN116509543A (en) | Composite surgical navigation device, method and system | |
| CN115462816A (en) | Control method of imaging device, control device and imaging device | |
| JP7184139B2 (en) | Positioning device and positioning method | |
| US20240277415A1 (en) | System and method for moving a guide system | |
| US20240298979A1 (en) | Collision avoidance when positioning a medical imaging device and a patient positioning apparatus | |
| CN121147460A (en) | A control method, device, equipment and medium for medical imaging equipment | |
| JPWO2018225234A1 (en) | Positioning device and positioning method | |
| WO2025131290A1 (en) | Computer-implemented method for medical imaging |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant |