
WO2014166415A1 - Image guidance method employing two-dimensional imaging - Google Patents

Image guidance method employing two-dimensional imaging

Info

Publication number
WO2014166415A1
WO2014166415A1 (PCT/CN2014/075126)
Authority
WO
WIPO (PCT)
Prior art keywords
image
dimensional
feature
real
feature area
Prior art date
Application number
PCT/CN2014/075126
Other languages
French (fr)
Chinese (zh)
Inventor
母治平
Original Assignee
重庆伟渡医疗设备股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201310127363.7A external-priority patent/CN103876763A/en
Application filed by 重庆伟渡医疗设备股份有限公司 filed Critical 重庆伟渡医疗设备股份有限公司
Publication of WO2014166415A1 publication Critical patent/WO2014166415A1/en

Links

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B 6/5223 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data generating planar views from image data, e.g. extracting a coronal view from a 3D image
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B 6/03 Computed tomography [CT]
    • A61B 6/032 Transmission computed tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/08 Volume rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection

Definitions

  • the present invention relates to the field of image guidance, and more particularly to an image guidance method using two-dimensional images.
  • the basic purpose of image guidance is to reproduce a pre-set patient body position, referred to as the preset body position, during actual treatment or other procedures.
  • the doctor collects the patient's 3D CT image, develops a radiotherapy plan based on the image, and selects a point in the image as a treatment center, called the isocenter.
  • during treatment, the anatomical location represented by this center must be placed at the center of the treatment device, usually the isocenter of the medical electron linear accelerator (also called the device isocenter).
  • the preset body position refers to the patient position in which the anatomical location represented by the isocenter set in the radiotherapy plan lies at the device isocenter.
  • image-guided technology determines, before or during treatment, the deviation between the patient's current position and the preset position (or the required position adjustment parameters) by acquiring intraoperative images in real time (hereinafter referred to as real-time images).
  • the doctor or radiotherapy technician can adjust the patient's position based on this deviation to improve the placement accuracy and achieve precise radiotherapy.
  • the basic steps of image guidance with this method may include: A1) the three-dimensional positions of the markers (relative to the preset body-position center) are obtained in advance in a three-dimensional image (such as CT), denoted R1, R2, ..., Rn and called the preset positions; A2) the positions of these markers are found in the acquired real-time images; search methods are described in references [A] and [B]; if two-dimensional images are used for guidance, their positions must be found in images from at least two angles; A3) a back-projection operation yields their positions in three-dimensional space at this time, S1, S2, ..., Sn, called the intraoperative positions; A4) finally, from the preset and intraoperative positions, the deviation between the intraoperative body position and the preset body position (or the required position adjustment) can be calculated.
  • This positional deviation includes displacement in three-dimensional space and may also include rotation about three degrees of freedom.
  • This is a general method with an extensive literature that has been used in commercial products for many years, so it is not elaborated further here. The operator adjusts the treatment couch position based on the deviation between the intraoperative and preset body positions, or based on the required position adjustment.
  • Implanted markers are usually made of heavy metals such as gold or stainless steel; in X-ray images they exhibit higher contrast against the surrounding tissue and are therefore easier to distinguish in a two-dimensional image (Figure 5).
  • implanting markers is an invasive clinical tool that increases certain patient risks and costs, and its scope of application is limited.
  • clinically, three-dimensional image acquisition and other radiotherapy preparations are usually performed about a week after the markers are implanted, once the implants have settled, so the start of treatment is delayed by a certain amount of time.
  • in addition, implanted markers are prone to shifting or falling out within the tissue, i.e., deviating from their implantation positions.
  • in that case, using such markers introduces large positioning errors that are not easily detected. This can be avoided by marking feature areas that reflect the patient's own anatomical characteristics on the three-dimensional image taken of the patient or on multiple two-dimensional DRRs generated from that image.
  • the present invention provides an image guiding method using a two-dimensional image without implanting a marker.
  • a DRR simulates the projection imaging process of a two-dimensional radiographic imaging system; it is generated by placing the object or patient anatomy represented by the three-dimensional image at a specific position or posture relative to that imaging system.
  • the DRR in the above steps may be representative of a plurality of image systems and acquisition angles, and may be a plurality of sets of DRRs generated by placing a patient's three-dimensional image in a plurality of known body positions (relative to the imaging system).
  • a feature area with obvious features is a small region of the image centered on a feature point (0.5-6 cm in size; the shape can be square, circular, spherical, or another shape determined by a template); it can include feature areas formed by implanted markers and feature areas formed near anatomical structures, or only feature areas formed near anatomical structures; reference feature areas are then selected, at least one of which is a feature area formed near an anatomical structure.
  • the feature area is defined on the three-dimensional image. It can be selected on the 3D image or selected on the 2D DRR, and the selected area has obvious features on both the 3D and 2D images.
  • step 2) real-time feature area search: search the real-time image for real-time feature areas at positions corresponding to the reference feature areas from step 1); the real-time feature areas found may correspond to all of the reference feature areas, or to only a subset of them.
  • step 3) the position deviation used for image guidance is determined by comparing the positions of the real-time feature areas in the real-time image with the positions of the reference feature areas in the two-dimensional DRRs.
  • This positional deviation includes displacement in three-dimensional space and may also include rotation about three degrees of freedom.
  • the rotation error is estimated mainly by calculating the possible error of the isocenter position.
  • the possible error of the isocenter position is L·Δθ, where L is the distance between the center of the reference feature area and the isocenter, and Δθ is the rotation error.
  • the center of the feature area is the center point of the feature area.
  • the deformation error is estimated as follows: based on each reference feature area's position in the three-dimensional image, integrate the tissue deformation degree along the line connecting that reference feature area and the isocenter to obtain the possible deformation of that area relative to the isocenter; then average (or weight-average) these deformation degrees to obtain an estimate of the deformation degree of the reference feature areas; from this estimate, derive an estimate of the isocenter position error caused by deformation; finally, compute the error of the reference feature areas relative to the isocenter position via a look-up table or an empirical formula.
  • the selection and screening of the feature regions in the above step 1) can be performed on the three-dimensional image.
  • the user can select a feature point in the three-dimensional image for marking, the feature point is located near the anatomical structure with obvious features, and the area centered on the feature point of the mark is used as an optional feature area with obvious features;
  • the projection position of the feature point is then calculated from the projection relationship and marked on the DRR; the image near that position is judged for sufficient feature strength, and the point is kept if it is sufficient or discarded and re-selected if it is not.
  • the "significant feature” in this application refers to a more significant change in the gray level of the image near the feature point, rather than a flat, untextured area. These features are usually formed by bony tissue.
  • This selection principle is very easy for operators familiar with X-ray images to understand and master, and the subsequent real-time feature area search algorithm has high fault tolerance; therefore, with reference feature areas selected in this way, the position deviation obtained by this image guidance method fully meets the requirements of image guidance.
  • the feature degree can be calculated in various ways, such as local grayscale variation (expressed, for example, by variance or standard deviation), information entropy, or contrast.
  • the specific choice may also need to match the search algorithm of the feature area.
  • the selection and screening of the reference feature regions can be performed on the two-dimensional image.
  • the steps of selecting several feature areas with obvious features on the two-dimensional DRRs, as described in step 1), are: a) choose one of the multiple two-dimensional DRRs representing a single patient body position, generated from the patient's three-dimensional image, as the first two-dimensional DRR, and select a point A with strong features on it as the center point of a feature area with obvious features; b) mark the projection line corresponding to point A on the second two-dimensional DRR (this step can be implemented by computer software); c) select a point B with strong features on that projection line as the center point of a feature area with obvious features on the second two-dimensional DRR; d) from the two center points A and B in steps a) and c), use the back-projection relationship to determine the position of the reference feature area in the three-dimensional image (this step can be implemented by computer software).
  • the process of selecting and screening out the reference feature area in step 1) may be determined by the operator based on the visual evaluation or automatically by the device responsible for image processing.
  • an automatic selection method can be: compute the feature degree of each point within a certain range around the isocenter in the three-dimensional image, using the feature degree calculation described above, and select several points of local maximum feature degree as candidate feature areas.
  • a reference feature area is kept if the proportion of test DRRs whose similarity exceeds the preset threshold reaches a preset ratio, such as 80% or 100%.
  • the above similarity can be calculated in many ways, including the correlation coefficient, mutual information, and so on.
  • the value of the above-mentioned threshold is also related to the method of calculating the similarity.
  • for the correlation coefficient, for example, a threshold of 0.7 can be chosen.
  • Another approach is to select multiple reference feature areas and, after acquiring the real-time image, keep the reference feature areas whose corresponding regions in the real-time image show high similarity, discarding those with low similarity that are difficult to find in the real-time image.
  • the purpose of the real-time feature area search in step 2) is to find, in the real-time image, the positions of the reference feature areas selected in step 1).
  • from the center point of a reference feature area, that is, the position of the feature point in the three-dimensional image, and the projection geometry of the imaging system,
  • the projection position of the reference feature area in the DRR can be calculated.
  • based on this projection position, a region of 0.5-6 cm centered on the feature point can be defined in the DRR; the size and shape of the region (usually square or rectangular) can be selected and adjusted, and this small region serves as the template in that DRR.
  • the search for the real-time feature area is to find the location of the small area most similar to the template in the corresponding real-time map. There are various methods.
  • one method includes multi-threshold processing to screen "suspected" regions (blobs), pattern screening using shape, size, brightness and other parameters, superior-inferior axis determination, configuration determination, and other steps. Finally, the blobs in the configuration giving the best position match are determined to be the corresponding feature areas in the real-time image.
  • Another method of searching for real-time feature areas, described in [B], includes preprocessing, correlation processing, and extraction of local maximum points to form a candidate region list; a CVA algorithm then selects the best-matching candidate regions as the corresponding feature areas in the real-time image.
  • This assumption is basically valid for feature areas located in the central region, but produces large errors for feature areas located in the edge region.
  • to solve this problem, the association can be computed according to the principle of the Epipolar line: suppose the center coordinate of a candidate region φ in real-time image A is (x_A, y_A) and that of a candidate region θ in real-time image B is (x_B, y_B); to compute the association between the two, first find the Epipolar line in image B corresponding to φ, obtain the x coordinate x_P of that line at y = y_B, and then compute the association from x_B and x_P.
  • there are several possible methods for the positioning calculation based on the reference feature areas and the real-time feature areas in step 3) above.
  • suppose the image guidance system uses real-time images taken at N imaging angles to achieve the guidance.
  • the second method computes, from the three-dimensional coordinates S1, ..., Sn of the real-time feature areas obtained from the real-time images in step A3 above, the position differences di = Si - Ri from the corresponding reference feature area preset positions R1, ..., Rn.
  • Image guidance using this method needs no implanted markers, making the procedure non-invasive, reducing the patient's suffering, and lowering the patient's treatment risk and cost. Because there is no need to wait for an implant to settle in the body, the method can also shorten the waiting period before treatment, and the patient can start treatment sooner.
  • Figure 1 is a schematic diagram of the selected reference feature area.
  • at the "+" marks in the figure, the left one is reference feature area No. 1 and the right one is reference feature area No. 0.
  • FIG. 2 is a schematic diagram showing that the information of the reference feature area is weak.
  • at the "+" marks in the figure, the left one is reference feature area No. 0 and the right one is reference feature area No. 1.
  • FIG. 3 is a schematic diagram of the process of selecting a reference feature area from the DRR images; S1 and S2 represent the positions of the radiation sources; D1 and D2 represent the imaging detectors corresponding to the two sources; FIG. 4 is an example of an area that is unsuitable as a reference feature area in the three-dimensional image.
  • the "+" mark was selected through the two-dimensional images and shows strong features in both DRRs, but lies in a region of gently varying grayscale in the three-dimensional image with no obvious features; the figure shows the three CT views, with the cross section on the left, the coronal plane at the upper right, and the sagittal plane at the lower right;
  • Figure 5 is a schematic diagram of the reference feature areas and real-time feature areas for implanted markers. The two DRRs are on the left; the corresponding real-time images are on the right, with the implanted markers inside the boxes in the right images.
  • in this method, the position deviation is determined by searching for some real-time feature areas in the real-time image and comparing the positions of these areas in the real-time image with the positions of the reference feature areas in the DRRs.
  • These reference feature regions are small regions in a three-dimensional image whose projection in a two-dimensional image (including a real-time map and a two-dimensional DRR) is a small region having features different from the surrounding region.
  • the specific feature description depends on the image mode and image alignment (also called registration or fusion) algorithm.
  • for example, in X-ray-type two-dimensional images, a reference feature area can be a small region with large, distinctive grayscale variation, such as the two regions shown in Figure 1.
  • the reference feature region selected here is the reference feature region of the non-artificial implant marker.
  • the reference feature region in the present invention refers to a feature region formed by the body's own anatomical structure in addition to these implantable markers, usually formed near the bone structure. At least one such reference feature area is selected at the time of selection, and the size and shape of the reference feature area are adjustable.
  • the selection of these reference feature zones can be done manually. It can be selected by the operator in a three-dimensional image (such as CT) used to generate the DRR, or directly on the DRR.
  • a three-dimensional image such as CT
  • the operator selects several feature points in the three-dimensional space as the center of several reference feature areas (which can be implemented by software assistance).
  • the positions of these points in the two-dimensional DRRs can be calculated, and the selected reference feature areas marked on the DRRs; the operator can then visually judge whether the feature information of these regions is strong enough and choose, one by one, to keep or delete each. The following situation may occur: a reference feature area has strong features in one of the projected DRRs but weak features in the other, as shown in FIG. 2.
  • This process can also be done directly on the 2D DRR, as shown in Figure 3.
  • the specific steps are as follows: select a point with strong features on one DRR as the reference feature area center point P1; then, on the other DRR, mark the projection line corresponding to point P1 (as an aid, this can be implemented by software), i.e., the Epipolar line L2; the operator selects a point with strong features on the projection line L2 (as the center point, in this DRR, corresponding to the previously selected reference feature area), P2; through the back-projection relationship, the positions of these two points determine the center point position P of the reference feature area in the three-dimensional image.
  • a point in one of the above DRRs corresponds to a line in the other DRR.
  • the information of a point in the DRR (e.g., P1) represents the integral effect of the medium on the line between the source S1 and the detector D1 pixel that collects the point information.
  • This connection is called the projection line of P1.
  • Projection line S1_P1 is projected on D2 in another imaging system consisting of S2 and D2. It is a line formed by the projection of each point on S1_P1 on D2, called the Epipolar line corresponding to P1.
  • the back-projection relationship: determining the back-projection lines from the projection points P1 and P2 on the two DRRs and then computing their spatial intersection point P is a back-projection process and a basic technique in computer vision.
  • the reference feature areas selected according to the above steps may be further screened to ensure they meet the requirements for feature areas.
  • a feature area selected through the two-dimensional images may have obvious features in the two two-dimensional images but no significant features in the three-dimensional image, as shown by the "+" mark in Figure 4; some feature areas may change greatly in the real-time image because of the difference between the intraoperative position and the preset position (especially when there is a large angular difference), making them difficult to find in the real-time image. Such feature areas do not meet the requirements.
  • the screening methods for the reference feature area are:
  • one is to measure the feature degree of the feature area in the three-dimensional image, for example by the variation of gray values in its three-dimensional neighborhood or by a measure of information content, and to delete the feature area if its feature degree does not reach a certain threshold.
  • the other is to generate test DRRs representing large differences between the body position and the preset body position (that is, larger body-position differences, including larger angles and larger displacements); from the projection relationship, compute the positions of the feature points in these test DRRs, i.e., the positions of the reference feature areas in the test DRRs; and, for each reference feature area, compute its similarity between the test DRRs and the DRR representing the preset body position. The operator can decide whether to keep or delete the reference feature area based on the similarity; if the similarity is high in all test DRRs, it can be kept.
  • the above screening process can also be done automatically by the image processing device.
  • the test DRR is generated, the position of the feature point in the test DRR is calculated, and the similarity of the reference feature area is calculated automatically.
  • the selected reference feature area and the real-time feature area are non-implanted feature areas
  • because the projection of a three-dimensional structure onto the two-dimensional imaging plane changes with the position and orientation of the imaged body, relative to the DRR representing the preset body position,
  • a reference feature area may show reduced similarity with the corresponding real-time feature area in the real-time image. This change is gradual and is mainly affected by deviations in orientation.
  • a series of DRRs can be generated representing the expected X-ray images when the imaged body deviates from the preset body position by certain amounts, and the feature areas whose projections change little across these DRRs are then selected as reference feature areas.
  • the magnitude of this error is related to the distance between the center of the reference feature area and the isocenter.
  • the positioning error calculated based on the position of the feature area, especially the rotation error.
  • the rotation error here is the error in the displacement part of the result caused by possible error in the rotation part of the position deviation, amplified by the above distance.
  • the estimation of the rotation error can be based on experimental data.
  • the rotation error Δθ can be estimated experimentally, usually as the root mean square of the errors between the test results and the known values at multiple known positions.
  • the soft tissue, lung tissue, and bone have different gray values in CT and X-ray images, and can be divided accordingly, and the corresponding degree of deformation can be set (the degree of deformation can be set according to the elastic coefficient of the tissue).
  • the deformation degree of the tissue along the line between that point and the isocenter can be integrated to obtain a measure of the possible deformation of the point relative to the isocenter.
  • the deformation degree of all the feature points is averaged or weighted averaged to obtain an estimate of the deformation degree of the currently selected feature region.
  • This deformation estimate can be provided directly to the operator as a reference to the isocenter position error, or an estimate of the isocenter position error caused by the deformation can be given based on this estimate.
  • the final step of estimating the isocenter position error from the deformation degree estimate may be a table look-up or an empirical formula calculation; for example, there is a large body of literature on measuring the elastic coefficients of tissues. Both the table and the empirical formula can be determined experimentally.
  • This kind of error is also related to the specific anatomical location; it is more likely to be large in parts affected by breathing and peristalsis.
  • the error caused by motion here is mainly related to the tissue between the isocenter and the selected feature areas. This can be handled by assigning different motion coefficients to different anatomical parts, for example giving a large motion coefficient to tissues and organs near the diaphragm, such as the liver, and a small coefficient to regions with little motion, such as intracranial regions. This coefficient can directly give the user an estimate of the motion error, or it can be converted to an error value using an empirical formula.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Quality & Reliability (AREA)
  • Computer Graphics (AREA)
  • Pulmonology (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

An image guidance method employing two-dimensional imaging, comprising the following steps: selection and screening of reference feature areas, searching for real-time feature areas, positioning calculation on the basis of the reference feature areas and the real-time feature areas, and deformation and error estimation. When two-dimensional imaging is employed for image guidance, a two-dimensional DRR image is generated on the basis of a three-dimensional image of the lesion, and the offset between the current position and a predetermined position is then determined by comparing feature areas in a real-time image collected in real time with feature areas in the two-dimensional DRR. Use of the present method obviates the need to implant a marker into the patient during detection, reduces pain for the patient, and reduces treatment risks and costs for the patient.

Description

Image guidance method employing two-dimensional imaging

Technical Field

The present invention relates to the field of image guidance, and in particular to an image guidance method using two-dimensional images.
Background Art
The basic purpose of image guidance is to reproduce a pre-set patient body position, referred to as the preset body position, during actual treatment or other procedures. Taking radiotherapy as an example, the doctor acquires a three-dimensional CT image of the patient, develops a radiotherapy plan based on that image, and selects a point in the image as the treatment center, called the isocenter. During treatment, the anatomical location represented by this center must be placed at the center of the treatment device, usually the isocenter of the medical electron linear accelerator (also called the device isocenter). The preset body position is the patient position in which the anatomical location represented by the isocenter set in the radiotherapy plan lies at the device isocenter. The purpose of image-guided technology is to determine, before or during treatment, the deviation between the patient's current position and the preset position (or the required position adjustment parameters) by acquiring intraoperative images in real time (hereinafter referred to as real-time images). The doctor or radiotherapy technician can then adjust the patient's position based on this deviation, improving setup accuracy and enabling precise radiotherapy.
One class of image guidance methods uses implanted markers as position references. The markers are implanted into the measured body in advance by a doctor through surgery or puncture. The basic steps of image guidance with this method may include: A1) the three-dimensional positions of the markers (relative to the preset body-position center) are obtained in advance in a three-dimensional image (such as CT), denoted R1, R2, ..., Rn and called the preset positions; A2) the positions of these markers are found in the acquired real-time images; search methods are described in references [A] and [B]; if two-dimensional images are used for guidance, their positions must be found in images from at least two angles; A3) a back-projection operation yields their positions in three-dimensional space at this time, S1, S2, ..., Sn, called the intraoperative positions; A4) finally, from the preset positions and the intraoperative positions, the deviation between the intraoperative body position and the preset body position (or the required position adjustment) can be calculated. This position deviation includes displacement in three-dimensional space and may also include rotation about three degrees of freedom. This is a general method with an extensive literature that has been used in commercial products for many years, so it is not elaborated further here. The operator adjusts the treatment couch position based on this deviation between the intraoperative and preset body positions, or based on the required position adjustment.
The key to this method is finding the corresponding position of each marker in the real-time two-dimensional images. Implanted markers are usually made of heavy metals such as gold or stainless steel; in X-ray images they exhibit higher contrast against the surrounding tissue and are therefore relatively easy to distinguish in a two-dimensional image (Figure 5). However, implanting markers is an invasive clinical procedure that adds patient risk and cost, and its scope of application is limited. Clinically, three-dimensional image acquisition and other radiotherapy preparations are usually performed about a week after the markers are implanted, once the implants have settled, so the start of treatment is delayed. In addition, implanted markers are prone to shifting or falling out within the tissue, i.e., deviating from their implantation positions. In that case, using these markers introduces large positioning errors that are not easily detected. This can be avoided by marking feature areas that reflect the patient's own anatomical characteristics on the three-dimensional image taken of the patient or on multiple two-dimensional DRRs generated from that image.
Summary of the Invention
To address the shortcomings of the above methods, the present invention provides an image guidance method using two-dimensional images that requires no implanted markers.
When two-dimensional images are used for image guidance, a two-dimensional reference image, the DRR (digitally reconstructed radiograph), is generated from the three-dimensional image of the lesion (usually CT), and the deviation between the current position and the preset position is then determined by comparing the two-dimensional images acquired in real time (the real-time images) with the DRRs. Image guidance with two-dimensional images usually requires two-dimensional images projected from multiple angles and positions to achieve an accurate positioning calculation (if only a single projection angle is used, the position deviation along the projection direction is hard to determine accurately). Accurate positioning can usually be achieved with two two-dimensional images acquired at nearly orthogonal angles. A DRR simulates the projection imaging process of a two-dimensional radiographic imaging system; it is generated by placing the object or patient anatomy represented by the three-dimensional image at a specific position or posture relative to that imaging system. The DRRs in the above steps can represent multiple imaging systems and acquisition angles, and can be multiple sets of DRRs generated by placing the patient's three-dimensional image in multiple known body positions relative to the imaging system.
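As an illustration of the DRR generation described above, here is a minimal ray-casting sketch in Python (not the patent's implementation); the geometry parameters, nearest-neighbor sampling, and the assumption that the CT volume's origin sits at physical coordinate (0, 0, 0) are all simplifications introduced for this example.

```python
import numpy as np

def generate_drr(volume, spacing, source, det_origin, det_u, det_v, det_shape, step=1.0):
    """Minimal DRR sketch: for each detector pixel, cast a ray from the X-ray
    source to that pixel and sum the CT values sampled along the ray
    (nearest-neighbor sampling; volume origin assumed at physical (0, 0, 0))."""
    spacing = np.asarray(spacing, float)            # voxel size (x, y, z) in mm
    source = np.asarray(source, float)
    det_origin, det_u, det_v = (np.asarray(v, float) for v in (det_origin, det_u, det_v))
    drr = np.zeros(det_shape)
    for r in range(det_shape[0]):
        for c in range(det_shape[1]):
            pixel = det_origin + r * det_v + c * det_u
            direction = pixel - source
            length = np.linalg.norm(direction)
            direction = direction / length
            ts = np.arange(0.0, length, step)
            pts = source + ts[:, None] * direction                    # sample points in mm (x, y, z)
            idx = np.round(pts[:, ::-1] / spacing[::-1]).astype(int)  # to (z, y, x) voxel indices
            ok = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=1)
            drr[r, c] = volume[tuple(idx[ok].T)].sum() * step         # line-integral approximation
    return drr
```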
The technical solution adopted by the present invention is an image guidance method using two-dimensional images whose steps are:
1) Selection and screening of reference feature areas: on the three-dimensional image taken of the patient for whom the isocenter has already been determined (determining the isocenter is a step that must be completed first in any image-guided or normal, non-image-guided, radiotherapy workflow), or on multiple two-dimensional DRRs generated from that three-dimensional image, select and mark several obvious feature areas with distinct grayscale, texture, shape, or color features (for feature areas or points marked on the three-dimensional image, their positions on the two-dimensional DRRs are computed through the projection relationship and the nearby region is used as the feature area). A feature area with obvious features is a small region of the image centered on a feature point (0.5-6 cm in size; the shape can be square, circular, spherical, or another shape determined by a template). It can include feature areas formed by implanted markers and feature areas formed near anatomical structures, or only feature areas formed near anatomical structures. Reference feature areas are then selected, at least one of which is a feature area formed near an anatomical structure. Feature areas are defined on the three-dimensional image; they can be selected on the three-dimensional image or on the two-dimensional DRRs, and the selected areas have obvious features in both the three-dimensional and two-dimensional images.
2) Real-time feature area search: search the real-time images for the real-time feature areas at positions corresponding to the reference feature areas from step 1); the real-time feature areas found may correspond to all of the reference feature areas, or to only a subset of them.
3) Positioning calculation based on the reference feature areas and the real-time feature areas: the position deviation used for image guidance is determined by comparing the positions of the real-time feature areas in the real-time images with the positions of the reference feature areas in the two-dimensional DRRs. This position deviation includes displacement in three-dimensional space and may also include rotation about three degrees of freedom.
The method further includes an error estimation step for the position deviation result obtained in step 3); the error estimate includes estimates of the rotation error and the deformation error. The rotation error is estimated mainly by computing the possible error of the isocenter position, which is L·Δθ, where L is the distance between the center of a reference feature area and the isocenter and Δθ is the rotation error. The rotation error estimate can be determined from the device's test results; if testing shows that the angular error of the image guidance system is ±a degrees, then Δθ = a. The center of a feature area is its center point.
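A small numeric illustration of the L·Δθ estimate above; the helper name and the 1-degree test figure are hypothetical, and the angular error would come from the system's own test results.

```python
import numpy as np

def isocenter_rotation_error(feature_center, isocenter, delta_theta_deg):
    """Possible isocenter position error caused by a rotation error:
    L * delta_theta, with L the distance from the reference feature area
    center to the isocenter and delta_theta the rotation error (radians)."""
    L = np.linalg.norm(np.asarray(feature_center, float) - np.asarray(isocenter, float))
    return L * np.deg2rad(delta_theta_deg)

# e.g. a feature area 80 mm from the isocenter and a tested angular error of 1 degree
# gives an error estimate of about 1.4 mm.
print(isocenter_rotation_error([80.0, 0.0, 0.0], [0.0, 0.0, 0.0], 1.0))
```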
The deformation error is estimated as follows: based on each reference feature area's position in the three-dimensional image, integrate the tissue deformation degree along the line connecting that reference feature area and the isocenter to obtain the possible deformation of that area relative to the isocenter; then average (or weight-average) these deformation degrees to obtain an estimate of the deformation degree of the reference feature areas; from this estimate, derive an estimate of the isocenter position error caused by deformation; finally, compute the error of the reference feature areas relative to the isocenter position via a look-up table or an empirical formula.
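A hedged sketch of the deformation-degree integration just described; the mapping `deform_of_hu` from CT value to deformation degree is a user-supplied assumption (for example derived from tissue elastic coefficients), and the sampling scheme is a simplification.

```python
import numpy as np

def deformation_estimate(volume, spacing, feature_centers, isocenter,
                         deform_of_hu, n_samples=100, weights=None):
    """For each reference feature area, integrate a tissue deformation degree
    (looked up from the CT value via deform_of_hu) along the line from the
    feature area center to the isocenter, then average (or weight-average)
    the per-area values."""
    spacing = np.asarray(spacing, float)                    # voxel size (x, y, z) in mm
    iso = np.asarray(isocenter, float)
    per_area = []
    for center in feature_centers:
        center = np.asarray(center, float)
        ts = np.linspace(0.0, 1.0, n_samples)
        pts = iso + ts[:, None] * (center - iso)            # sample points in mm (x, y, z)
        idx = np.round(pts[:, ::-1] / spacing[::-1]).astype(int)   # (z, y, x) voxel indices
        idx = np.clip(idx, 0, np.array(volume.shape) - 1)
        hu = volume[tuple(idx.T)]
        seg = np.linalg.norm(center - iso) / (n_samples - 1)       # segment length, mm
        per_area.append(np.sum(deform_of_hu(hu)) * seg)            # integral along the line
    return np.average(np.asarray(per_area), weights=weights)      # deformation-degree estimate
```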
The selection and screening of feature areas in step 1) above can be performed on the three-dimensional image. The user can select and mark a feature point in the three-dimensional image, located near an anatomical structure with obvious features, and take the region centered on that marked point as a candidate feature area with obvious features. The projection position of the feature point in the two-dimensional DRR is then computed from the projection relationship and marked; the image near that position is judged for sufficient feature strength, and the point is kept if it is sufficient or discarded and re-selected if it is not. In this application, "having obvious features" means that the image grayscale changes noticeably near the feature point, rather than being a flat, texture-free region. These features are usually formed by bony tissue. This selection principle is very easy for operators familiar with X-ray images to understand and master, and the subsequent real-time feature area search algorithm has high fault tolerance, so selecting reference feature areas in this way yields position deviations that fully meet the requirements of image guidance.
The feature degree can be calculated in various ways, such as local grayscale variation (expressed, for example, by variance or standard deviation), information entropy, or contrast. The specific choice may also need to match the feature area search algorithm.
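For illustration, a minimal sketch of two of the feature-degree measures mentioned (local standard deviation and histogram entropy); the window size and bin count are arbitrary choices, not values prescribed by the patent.

```python
import numpy as np

def feature_degree(image, center, half_size=8, mode="std"):
    """Feature degree of a small 2D neighborhood: local grayscale variation
    (standard deviation) or information entropy of the local histogram."""
    r, c = center
    patch = image[max(r - half_size, 0):r + half_size + 1,
                  max(c - half_size, 0):c + half_size + 1].astype(float)
    if mode == "std":
        return patch.std()
    # entropy of the grayscale histogram
    hist, _ = np.histogram(patch, bins=32)
    p = hist[hist > 0].astype(float)
    p = p / p.sum()
    return -np.sum(p * np.log2(p))
```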
The selection and screening of reference feature areas can also be performed on the two-dimensional images. The steps of selecting several feature areas with obvious features on the two-dimensional DRRs, as described in step 1), are: a) choose one of the multiple two-dimensional DRRs representing a single patient body position, generated from the patient's three-dimensional image, as the first two-dimensional DRR, and select a point A with strong features on it as the center point of a feature area with obvious features; b) mark the projection line corresponding to point A on the second two-dimensional DRR (this step can be implemented by computer software); c) select a point B with strong features on that projection line as the center point of a feature area with obvious features on the second two-dimensional DRR; d) from the two center points A and B in steps a) and c), use the back-projection relationship to determine the position of the reference feature area in the three-dimensional image (this step can be implemented by computer software).
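Step d) relies on back-projecting the two selected points. A common way to compute the 3D point from the two source-to-detector-point rays is the closest-point-between-rays solution sketched below; this is a generic computer-vision formulation, not code from the patent.

```python
import numpy as np

def closest_point_between_rays(s1, p1, s2, p2):
    """Back-projection sketch: each selected DRR point defines a ray from its
    source (S1->P1, S2->P2); the reference feature center P is taken as the
    point closest to both rays (their intersection in the ideal case)."""
    s1, p1, s2, p2 = (np.asarray(v, float) for v in (s1, p1, s2, p2))
    d1 = (p1 - s1) / np.linalg.norm(p1 - s1)
    d2 = (p2 - s2) / np.linalg.norm(p2 - s2)
    # solve for ray parameters t1, t2 minimizing |(s1 + t1*d1) - (s2 + t2*d2)|
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(s2 - s1) @ d1, (s2 - s1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((s1 + t1 * d1) + (s2 + t2 * d2))   # midpoint of closest approach
```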
The process of selecting and screening reference feature areas in step 1) can be decided by the operator based on visual evaluation, or completed automatically by the image processing device. An automatic selection method can be: compute the feature degree of each point within a certain range around the isocenter in the three-dimensional image, using the feature degree calculation described above, and select several points of local maximum feature degree as candidate feature areas. Automatic screening can be done in two ways. One is to compute the feature degree of each reference feature area in the three-dimensional image and set a threshold: if the feature degree is below the threshold the reference feature area is deleted, otherwise it is kept. The other is to generate several test DRRs representing large body-position differences (for example, axial deviations of ±15 mm and rotations of ±5° about each axis) to simulate how the reference feature areas may change in the real-time images when the patient position deviates significantly from the preset position; compute the positions of the reference feature areas in these test DRRs from the projection relationship; and, for each reference feature area, compute its similarity between all test DRRs and the two-dimensional DRRs of step 1). If the proportion of test DRRs in which the similarity exceeds a preset threshold reaches a preset ratio, such as 80% or 100%, the reference feature area is kept. The similarity can be computed in many ways, including the correlation coefficient, mutual information, and so on. The threshold also depends on the chosen similarity measure; for the correlation coefficient, for example, 0.7 can be used.
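The automatic selection and screening just described could look roughly like the following sketch; the local-maximum picking uses SciPy's maximum filter, the 0.7 correlation threshold and 80% ratio follow the examples in the text, and everything else is an assumption.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def candidate_feature_points(feature_map, n_points=10, min_distance=5):
    """Automatic selection sketch: pick points whose feature degree is a local
    maximum within min_distance, keeping the strongest n_points."""
    local_max = (feature_map == maximum_filter(feature_map, size=2 * min_distance + 1))
    coords = np.argwhere(local_max)
    values = feature_map[local_max]
    order = np.argsort(values)[::-1]
    return coords[order[:n_points]]

def keep_reference_area(similarities, threshold=0.7, required_ratio=0.8):
    """Screening sketch: keep a reference feature area if the fraction of test
    DRRs whose similarity exceeds the threshold reaches the preset ratio
    (e.g. 0.7 correlation, 80% of the test DRRs)."""
    similarities = np.asarray(similarities, float)
    return np.mean(similarities > threshold) >= required_ratio
```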
Another approach is to select multiple reference feature areas and, after acquiring the real-time images, keep the reference feature areas whose corresponding regions in the real-time images show high similarity, discarding those with low similarity that are difficult to find in the real-time images.
The purpose of the real-time feature area search in step 2) above is to find, in the real-time images, the positions of the reference feature areas selected in step 1). From the center point of a reference feature area, that is, the feature point's position in the three-dimensional image, and the projection geometry of the imaging system, the projection position of the reference feature area in the DRR can be calculated. Based on this projection position, a region of 0.5-6 cm centered on the feature point can be defined in the DRR; the size and shape of the region (usually square or rectangular) can be selected and adjusted. This small region serves as the template in that DRR. The real-time feature area search then looks for the location of the small region in the corresponding real-time image that is most similar to the template; various specific methods are possible. For example, one method, described in reference [A], includes multi-threshold processing to screen "suspected" regions (blobs), pattern screening using shape, size, brightness and other parameters, superior-inferior axis determination, configuration determination, and other steps. Finally, the blobs in the configuration giving the best position match are determined to be the corresponding feature areas in the real-time image.
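As one concrete, deliberately simple stand-in for the template search, a brute-force normalized-correlation scan over a window around the expected position is sketched below; the methods actually cited ([A], [B]) are more elaborate, and all parameter names here are illustrative.

```python
import numpy as np

def search_realtime_feature(live, template, expected_topleft, search_radius=30):
    """Template-matching sketch: slide the DRR template over a window of the
    real-time image around the expected top-left position and return the
    location with the highest normalized correlation coefficient."""
    th, tw = template.shape
    t = template.astype(float) - template.mean()
    r0, c0 = expected_topleft
    best_score, best_center = -1.0, None
    for r in range(max(r0 - search_radius, 0), min(r0 + search_radius, live.shape[0] - th)):
        for c in range(max(c0 - search_radius, 0), min(c0 + search_radius, live.shape[1] - tw)):
            patch = live[r:r + th, c:c + tw].astype(float)
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * np.linalg.norm(t)
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_center = score, (r + th // 2, c + tw // 2)
    return best_center, best_score
```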
Another method of searching for real-time feature areas, described in reference [B], includes preprocessing, correlation processing, and extraction of local maximum points to form a candidate region list; a CVA algorithm then selects the best-matching candidate regions as the corresponding feature areas in the real-time images.
The method described in [B] was developed for implanted markers. When markers are implanted clinically, the implantation sites are required to be around the lesion tracked by image guidance, so in the reference DRRs at the preset treatment position these markers usually lie in the central region of the image (like the implanted markers in Figure 5). When non-implanted anatomical feature areas are used for image guidance, however, these feature areas may lie in the edge regions of the DRR (such as feature area No. 0 in Figures 1 and 2). This difference affects the association calculation step (Equation 25) of the CVA algorithm, which assumes that the same feature area (implanted marker) has almost the same coordinate on the common axis (X axis) in the two real-time images. That assumption is basically valid for feature areas in the central region but produces large errors for feature areas in the edge region. To solve this problem, the association can be computed according to the principle of the Epipolar line. Suppose the center coordinate of a candidate region φ in real-time image A is (x_A, y_A) and the center coordinate of a candidate region θ in real-time image B is (x_B, y_B). To compute the association between the two, first find the Epipolar line in real-time image B corresponding to φ, obtain the x coordinate x_P of that line at y = y_B, and then compute the association from x_B and x_P:

[the formula appears as an image in the original; it evaluates a function of β and |x_B - x_P|, where f(x) and β are defined in [B]]
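A hedged sketch of the Epipolar-based association above. Since f(x) and β are only defined in reference [B], an exponential fall-off is used here purely as a stand-in, and the Epipolar line is assumed to be supplied as sampled (x, y) points with increasing y.

```python
import numpy as np

def epipolar_association(theta_center_B, epipolar_xy_B, beta=1.0):
    """epipolar_xy_B: (N, 2) array of (x, y) samples of the Epipolar line in
    image B corresponding to candidate phi in image A (y must be increasing).
    x_P is the x coordinate of that line at y = y_B; the association decreases
    with |x_B - x_P|.  The exponential is a stand-in for f(beta * |x_B - x_P|)."""
    x_B, y_B = theta_center_B
    xs, ys = np.asarray(epipolar_xy_B, float).T
    x_P = np.interp(y_B, ys, xs)            # x coordinate of the Epipolar line at y = y_B
    return np.exp(-beta * abs(x_B - x_P))   # stand-in weighting; f and beta per [B]
```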
The positioning calculation based on the reference feature areas and the real-time feature areas in step 3) above can be done in several ways. Suppose the image guidance system uses real-time images taken at N imaging angles to achieve the guidance. One method: for the i-th (i = 1, ..., N) real-time image, first independently compute the center positions of the real-time feature areas found in that image, where P_ij denotes the position of the j-th real-time feature area (j = 1, ..., n) in the i-th real-time image; then compute the position deviations d_ij between these and the centers of the corresponding feature areas in the corresponding DRR; take the average over the real-time feature areas, d_i; finally, compute the deviation between the current patient position and the preset position from the d_i obtained from the N real-time images. For example, [C] gives a method (Equation 2) for computing the three-dimensional position deviation from d_1 = (x1, y1) and d_2 = (x2, y2) in a stereo imaging system (N = 2).
The second method: from the three-dimensional coordinates S1, ..., Sn of the real-time feature areas obtained from the real-time images in step A3 above, compute the position differences from the corresponding reference feature area preset positions R1, ..., Rn, di = Si - Ri; then compute the average position difference d = Σdi/n; d is the position difference between the current patient position and the preset position.
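The second method reduces to a mean of per-feature differences; a minimal sketch, with the function name chosen here for illustration:

```python
import numpy as np

def mean_position_difference(S, R):
    """S and R are (n, 3) arrays of the intraoperative and preset feature
    positions; d = mean(Si - Ri) is the position difference between the
    current patient position and the preset position."""
    S, R = np.asarray(S, float), np.asarray(R, float)
    return (S - R).mean(axis=0)
```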
The third method uses an optimization algorithm to find a spatial transformation T (including translation and rotation) such that the distance between the reference feature areas' new positions after transformation, R' = TR, and S is minimized. Reference [D] describes this method in detail.
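Reference [D] details the optimization; one standard way to solve this least-squares rigid alignment is the SVD-based (Kabsch-style) solution sketched below, shown only as a possible realization rather than the patent's own algorithm.

```python
import numpy as np

def fit_rigid_transform(R_pts, S_pts):
    """Find rotation `rot` and translation `t` minimizing
    sum |rot @ Ri + t - Si|^2 for corresponding point sets R_pts, S_pts."""
    R_pts, S_pts = np.asarray(R_pts, float), np.asarray(S_pts, float)
    cR, cS = R_pts.mean(axis=0), S_pts.mean(axis=0)
    H = (R_pts - cR).T @ (S_pts - cS)            # cross-covariance of centered points
    U, D, Vt = np.linalg.svd(H)
    rot = Vt.T @ U.T
    if np.linalg.det(rot) < 0:                   # avoid a reflection
        Vt[-1] *= -1
        rot = Vt.T @ U.T
    t = cS - rot @ cR
    return rot, t
```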
Image guidance using this method needs no implanted markers, making the procedure non-invasive, reducing the patient's suffering, and lowering the patient's treatment risk and cost. At the same time, because there is no need to wait for an implant to settle in the body, the method can also shorten the waiting period before treatment, and the patient can start treatment sooner.
Brief Description of the Drawings
Figure 1 is a schematic diagram of selected reference feature areas; at the "+" marks in the figure, the left one is reference feature area No. 1 and the right one is reference feature area No. 0.
Figure 2 is a schematic diagram of a selected reference feature area with weak information; at the "+" marks in the figure, the left one is reference feature area No. 0 and the right one is reference feature area No. 1.
Figure 3 is a schematic diagram of the process of selecting a reference feature area from the DRR images; S1 and S2 represent the positions of the radiation sources, and D1 and D2 represent the imaging detectors corresponding to the two sources. Figure 4 is an example of an area that is unsuitable as a reference feature area in the three-dimensional image; the "+" mark was selected through the two-dimensional images and shows strong features in both DRRs, but lies in a region of gently varying grayscale in the three-dimensional image with no obvious features; the figure shows the three CT views, with the cross section on the left, the coronal plane at the upper right, and the sagittal plane at the lower right.
Figure 5 is a schematic diagram of the reference feature areas and real-time feature areas for implanted markers. The two DRRs are on the left; the corresponding real-time images are on the right, with the implanted markers inside the boxes in the right images.

Detailed Description
In this method, the position deviation is determined by searching for some real-time feature areas in the real-time images and comparing the positions of these areas in the real-time images with the positions of the reference feature areas in the DRRs. The reference feature areas are small regions in the three-dimensional image whose projections in the two-dimensional images (both the real-time images and the two-dimensional DRRs) are small regions with features that differ from the surrounding area. The specific description of these features depends on the image modality and the image comparison algorithm (also called registration or fusion); for example, in X-ray-type two-dimensional images, a reference feature area can be a small region with large, distinctive grayscale variation, such as the two regions shown in Figure 1. The reference feature areas selected here are not artificially implanted markers. Image guidance using implanted metal or other markers in radiotherapy has long been reported and is widely used clinically, and these implants show very strong features in both two-dimensional and three-dimensional images. The reference feature areas in the present invention refer, apart from such implanted markers, to feature areas formed by the body's own anatomical structure, usually near bony structures. At least one such reference feature area is selected, and the size and shape of the reference feature areas are adjustable.
The reference feature areas may be selected manually. The operator may select them in the three-dimensional image (e.g., CT) used to generate the DRRs, or directly on the DRRs. When selecting reference feature areas in the three-dimensional image, the operator picks several feature points in three-dimensional space as the centers of the reference feature areas (software assistance may be used). The positions of these points in the two-dimensional DRRs can be computed from the projection relationship, the selected reference feature areas are marked on the two-dimensional DRRs, and the operator can then judge visually whether the feature information of these regions is strong enough and decide, one by one, whether to keep or delete each area. It can happen that a reference feature area has a strong feature in one of the projected DRRs but a weak one in the other. FIG. 2 shows the DRR from the other angle corresponding to FIG. 1: reference feature area No. 1 has strong feature information in FIG. 1, but in FIG. 2 its gray-level variation is not obvious and the feature is weak. In principle, a feature area like reference feature area No. 1 should then be deleted.
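As a rough illustration of the projection relationship mentioned above, the sketch below casts a ray from the source through a 3D feature-area center and intersects it with a flat-panel detector plane. The geometric parameterization (a detector origin plus two orthonormal in-plane axes) is an assumption made for the example, not a requirement of the method.

    import numpy as np

    def project_to_detector(p, source, det_origin, det_u, det_v):
        """Project a 3D feature-area center p onto a flat-panel detector by casting
        the ray source -> p until it hits the detector plane; returns (u, v)
        detector coordinates. det_origin is a point on the plane, det_u/det_v are
        orthonormal in-plane axes (assumed geometry)."""
        p, source, det_origin, det_u, det_v = (
            np.asarray(a, float) for a in (p, source, det_origin, det_u, det_v))
        n = np.cross(det_u, det_v)                       # detector plane normal
        d = p - source                                   # ray direction source -> p
        t = np.dot(det_origin - source, n) / np.dot(d, n)
        hit = source + t * d                             # intersection with the plane
        return np.dot(hit - det_origin, det_u), np.dot(hit - det_origin, det_v)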
This process can also be carried out directly on the two-dimensional DRRs, as shown in FIG. 3. The specific steps are as follows: on one DRR, select a point with a strong feature as the center point P1 of a reference feature area; then, on the other DRR, mark the projection line corresponding to point P1 (as an aid, this can be done by software), i.e., the epipolar line L2; the operator selects a point with a strong feature on this projection line L2 as the center point P2 of the corresponding reference feature area in that DRR; through the back-projection relationship, the position P of the center point of the reference feature area in the three-dimensional image can then be determined from the positions of these two points.
A point in one DRR corresponds to a line in the other DRR. The information at a point in a DRR (e.g., P1) represents the integral effect of the medium along the line connecting the radiation source S1 and the detector D1 pixel that collects that point's information; this line is called the projection line of P1. In the other imaging system, formed by S2 and D2, the projection of the line S1-P1 onto D2 is a line, formed by connecting the projections on D2 of all points on S1-P1; it is called the epipolar line corresponding to P1. Using basic concepts of computer vision, it can easily be obtained by computation.
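A hedged sketch of that computation: sample the 3D projection line from S1 through the chosen pixel P1 and project every sample into the second imaging system; the resulting 2D points trace the epipolar line on D2. The parameter names and the flat-detector assumption are illustrative only.

    import numpy as np

    def epipolar_line_on_d2(s1, p1_pixel_3d, s2, det2_origin, det2_u, det2_v, n_samples=50):
        """Trace on detector D2 the epipolar line corresponding to pixel P1 of D1.
        p1_pixel_3d is the 3D position of pixel P1 on detector D1 (known from the
        imaging geometry); det2_* describe the D2 plane (assumed parameterization)."""
        s1, p1_pixel_3d, s2, det2_origin, det2_u, det2_v = (
            np.asarray(a, float) for a in (s1, p1_pixel_3d, s2, det2_origin, det2_u, det2_v))
        n = np.cross(det2_u, det2_v)                  # D2 plane normal
        pts = []
        for t in np.linspace(0.0, 2.0, n_samples):
            q = s1 + t * (p1_pixel_3d - s1)           # a point on the projection line S1-P1
            d = q - s2                                # ray from the second source through q
            lam = np.dot(det2_origin - s2, n) / np.dot(d, n)
            hit = s2 + lam * d                        # intersection with the D2 plane
            pts.append((np.dot(hit - det2_origin, det2_u),
                        np.dot(hit - det2_origin, det2_v)))
        return np.array(pts)                          # (n_samples, 2) points on the epipolar line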
The back-projection relationship: determining the back-projection lines from the projection points P1 and P2 on the two DRRs and then computing their spatial intersection point P is a back-projection process and is likewise a basic technique in computer vision.
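The back-projection itself can be sketched as the standard closest-point construction between the two rays S1-P1 and S2-P2 (because of pixelation they rarely intersect exactly); the midpoint of their closest approach is taken as the 3D center P. This is a generic computer-vision routine, shown only under the assumption that the source and pixel positions are known in a common coordinate system.

    import numpy as np

    def triangulate(s1, p1_pix, s2, p2_pix):
        """Intersect, in the least-squares sense, the ray S1->P1 with the ray S2->P2
        and return the midpoint of their closest approach as the 3D point P.
        p1_pix / p2_pix are the 3D positions of the chosen detector pixels."""
        s1, p1_pix, s2, p2_pix = (np.asarray(a, float) for a in (s1, p1_pix, s2, p2_pix))
        d1, d2 = p1_pix - s1, p2_pix - s2            # ray directions
        w0 = s1 - s2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b                        # ~0 only for parallel rays
        t1 = (b * e - c * d) / denom
        t2 = (a * e - b * d) / denom
        q1, q2 = s1 + t1 * d1, s2 + t2 * d2          # closest points on each ray
        return (q1 + q2) / 2.0                       # estimated 3D center point P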
Screening of the reference feature areas: the reference feature areas selected by the above steps can be further screened so that they meet the requirements. A feature area selected through the two-dimensional images may have obvious features in both two-dimensional images yet no significant feature in the three-dimensional image, as at the "+" mark in FIG. 4; some features may change considerably in the real-time images because of the difference between the intra-treatment body position and the preset body position (especially when there is a large angular difference), making such feature areas hard to find in the real-time images. Such feature areas do not meet the requirements. The screening methods for reference feature areas are:
1. Compute the feature degree of the feature area in the three-dimensional image; the feature degree may be a measure of the variation in its three-dimensional neighborhood (gray-value variance) or of its information content. If the feature degree does not reach a certain threshold, the feature area is deleted (a sketch of this rule is given after the list).
2. Generate test DRRs that represent differences between the intra-treatment body position and the preset body position (i.e., larger position differences, including larger angles and larger displacements); from the projection relationship, compute the positions of the feature points in these DRRs, i.e., the positions of the reference feature areas in the test DRRs; then, for each reference feature area, compute its similarity between the test DRRs and the DRR representing the preset body position. The operator can decide whether to keep or delete a reference feature area according to how high the similarity is. If the similarity is high in all test DRRs, the area can be kept.
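A minimal sketch of screening rule 1 above, using the gray-value variance of a cubic neighborhood as the feature degree; the neighborhood radius and the threshold are assumed values, and an information-content measure could be substituted.

    import numpy as np

    def feature_degree(ct_volume, center_ijk, radius=5):
        """Feature degree of a candidate reference feature area: variance of the CT
        gray values in a cubic neighborhood around its center voxel."""
        i, j, k = center_ijk
        nb = ct_volume[max(i - radius, 0):i + radius + 1,
                       max(j - radius, 0):j + radius + 1,
                       max(k - radius, 0):k + radius + 1]
        return float(np.var(nb))

    def screen_by_feature_degree(ct_volume, centers_ijk, threshold):
        """Keep only the candidate centers whose 3D feature degree reaches the threshold."""
        return [c for c in centers_ijk if feature_degree(ct_volume, c) >= threshold]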
The above screening process can also be completed automatically by the image processing device. In the above steps, generating the test DRRs, computing the positions of the feature points in the test DRRs, and computing the similarity of the reference feature areas can all be performed automatically. When deciding whether to keep an area, a threshold can be set: if the proportion of test DRRs in which a feature area's similarity exceeds the preset threshold reaches a preset proportion, the area is kept.
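One possible automated form of this rule, assuming normalized cross-correlation as the similarity measure and using illustrative values for the similarity threshold and the required proportion of test DRRs (neither value comes from the patent):

    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation between two equally sized patches."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
        return float((a * b).sum() / denom) if denom else 0.0

    def keep_reference_area(ref_patch, test_patches, sim_threshold=0.8, min_fraction=0.9):
        """Keep a reference feature area if its similarity to the corresponding patches
        in the test DRRs (representing larger body-position differences) exceeds
        sim_threshold in at least min_fraction of the test DRRs."""
        scores = [ncc(ref_patch, tp) for tp in test_patches]
        ok = sum(s > sim_threshold for s in scores)
        return ok / len(scores) >= min_fraction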
When the selected reference feature areas and real-time feature areas are non-implanted feature areas, the projection of a three-dimensional structure onto the two-dimensional imaging plane changes with the position and orientation of the imaged body, so the similarity between a reference feature area in the DRR representing the preset body position and the corresponding real-time feature area in the real-time image may decrease. This change is gradual and is mainly affected by orientation deviations. To address this, a series of DRRs can be generated to represent the expected X-ray images when the imaged body deviates from the preset body position by certain amounts, and the feature areas whose projections change little across these DRRs are then chosen as reference feature areas.
With this method, an estimate can be given of the error that may be present in the obtained positional deviation result. The magnitude of the error is related to the distance between the center of a reference feature area and the isocenter. The positioning error computed from the feature area positions, in particular the rotation error (that is, the error in the translational part of the result caused by a possible error in the rotational part of the positional deviation), is amplified by this distance:
The farther the feature area is from the isocenter, the larger the possible error of the isocenter position (for a distance L and a rotation error Δθ, the possible isocenter position error is L*Δθ). The rotation error can be estimated from experimental data: Δθ can be estimated experimentally, typically as the root mean square of the errors between the test results at multiple known positions and those known positions. In addition to the absolute error, the relative error can be determined experimentally in a similar way. If experiments at multiple positions give a relative error of the rotation result of p%, and the rotation estimated in the current positional deviation is θ, then Δθ = θ × p%.
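A small worked sketch of this error propagation; the distance, rotation estimate, and relative error below are made-up numbers used only to show the arithmetic.

    import numpy as np

    def isocenter_error_from_rotation(L_mm, dtheta_rad=None, theta_rad=None, rel_err=None):
        """Possible isocenter position error caused by the rotational part of the
        result, L * dtheta. dtheta may be given directly (e.g., an RMS value from
        phantom experiments) or derived from a relative error as theta * rel_err."""
        if dtheta_rad is None:
            dtheta_rad = theta_rad * rel_err
        return L_mm * dtheta_rad

    # e.g. a feature area 80 mm from the isocenter, rotation estimate 2 deg, 5 % relative error
    err = isocenter_error_from_rotation(80.0, theta_rad=np.deg2rad(2.0), rel_err=0.05)
    # err is about 0.14 mm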
On the other hand, the larger the distance, the more soft tissue lies between the center of the reference feature area and the isocenter, the larger the possible deformation, and the larger the uncertainty (i.e., error) of the isocenter position determined from the reference feature area positions. Soft tissue, lung tissue, and bone have different gray values in CT and X-ray images; they can be segmented accordingly and assigned corresponding deformability values (the deformability can be set according to the elastic modulus of the tissue). For each feature point (i.e., the center point of a feature area), the deformability of the tissue along the line between that point and the isocenter can be integrated, based on the point's position in the three-dimensional CT, to obtain a measure of the possible deformation of that point relative to the isocenter. The deformation measures of all feature points are then combined by averaging, weighted averaging, or a similar method to obtain an estimate of the deformation of the currently selected set of feature areas. This deformation estimate can be provided to the operator directly as a reference for the isocenter position error, or it can be converted into an estimate of the isocenter position error caused by deformation. The last step, obtaining the isocenter position error estimate from the deformation estimate, can be done by table lookup or by an empirical formula, among other methods. For example, there is extensive literature on measuring the elastic modulus of tissue, e.g., [E]. Both the table and the empirical formula can be determined experimentally.
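A hedged sketch of the integration step; the HU ranges and deformability weights are assumptions standing in for values that would in practice be derived from tissue elasticity data such as [E].

    import numpy as np

    def deformability_from_hu(hu):
        """Crude tissue classification by CT number with assumed deformability weights
        (bone rigid, soft tissue intermediate, lung most deformable)."""
        if hu > 300:
            return 0.0    # bone
        if hu < -300:
            return 1.0    # lung
        return 0.5        # soft tissue

    def deformation_measure(ct_volume, voxel_size_mm, feature_ijk, iso_ijk, n_samples=200):
        """Integrate the deformability along the straight line between a feature-area
        center and the isocenter (both given as voxel indices)."""
        p0, p1 = np.asarray(feature_ijk, float), np.asarray(iso_ijk, float)
        length_mm = np.linalg.norm((p1 - p0) * voxel_size_mm)
        total = 0.0
        for t in np.linspace(0.0, 1.0, n_samples):
            i, j, k = np.round(p0 + t * (p1 - p0)).astype(int)
            total += deformability_from_hu(ct_volume[i, j, k])
        return total / n_samples * length_mm          # weighted path length in mm

    def set_deformation_estimate(ct_volume, voxel_size_mm, centers_ijk, iso_ijk, weights=None):
        """Average (or weighted-average) the per-feature measures into one estimate."""
        m = [deformation_measure(ct_volume, voxel_size_mm, c, iso_ijk) for c in centers_ijk]
        return float(np.average(m, weights=weights))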
This kind of error is also related to the specific anatomical site: the more a site is affected by respiration, peristalsis, and the like, the more likely such an error is. Here the error caused by motion mainly depends on the tissue between the isocenter and the selected feature areas. One implementation is to assign different motion coefficients to the various anatomical sites, for example a large motion coefficient to organs near the diaphragm, such as the liver, and a very small coefficient to sites with little motion, such as the intracranial region. This coefficient can be presented to the user directly so that the motion error can be gauged, or it can be converted into an error value by an empirical formula.
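One possible encoding of such motion coefficients; the site list, the coefficient values, and the linear conversion to millimeters are all assumptions for illustration, not values given in the patent.

    # Illustrative motion coefficients per anatomical site (assumed values; the
    # description only requires larger coefficients for sites that move more).
    MOTION_COEFFICIENT = {
        "intracranial": 0.1,
        "head_and_neck": 0.2,
        "spine": 0.3,
        "prostate": 0.5,
        "liver": 0.9,   # near the diaphragm, strongly affected by respiration
        "lung": 1.0,
    }

    def motion_error_mm(site, scale_mm=3.0):
        """Convert the coefficient into an error value with a simple, assumed
        empirical formula: error = coefficient * scale."""
        return MOTION_COEFFICIENT[site] * scale_mm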
To verify the feasibility of the method of the present invention, the inventor conducted comparison experiments with an anthropomorphic phantom. This head-and-neck phantom is made of materials that simulate human head and neck tissue. FIG. 4 shows three cross-sections of its CT scan, and FIG. 1 and FIG. 2 are DRRs reconstructed from that CT; soft tissue and bone can be clearly distinguished in these images. Several gold markers were placed in the phantom. To test the accuracy of the image-guidance algorithm of this patent application, the following experiment was performed: 1) fix the phantom on a precise motion platform (accuracy better than 0.02 mm); 2) take a certain position as the reference and record the result given by the image-guidance algorithm at that position (d0); 3) move the platform to a known position ci (i.e., the displacement relative to the reference position is known) and record the result given by the image guidance at that position (di); 4) compute the offset between di and d0 and compare it with the known position to obtain the algorithm's error at that position, ei = (di - d0) - ci; 5) move the platform to different positions, repeat steps 3 and 4 N times, and compute the root mean square of the errors, E = sqrt(Σei²/N).
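The evaluation in step 5) reduces to the following computation; the array shapes are an assumption (per-axis translations), and the sketch ignores the rotational part of the results.

    import numpy as np

    def rms_error(d0, results, known_offsets):
        """Accuracy test from the phantom experiment: d0 is the algorithm's result at
        the reference position, results[i] its result after moving the platform by
        the known offset known_offsets[i]; returns the per-axis RMS of
        e_i = (d_i - d0) - c_i, i.e., E = sqrt(sum(e_i^2)/N)."""
        d0 = np.asarray(d0, float)                # shape (3,)
        di = np.asarray(results, float)           # shape (N, 3)
        ci = np.asarray(known_offsets, float)     # shape (N, 3)
        e = (di - d0) - ci                        # per-position error vectors
        return np.sqrt((e ** 2).sum(axis=0) / len(e))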
The above experiment was carried out twice, once with the reference feature areas chosen as the implanted gold markers and once as the anatomical feature areas proposed by the present invention. The results were very similar, with errors better than 0.5 mm in every direction. This experiment was performed by an authoritative domestic certification and testing institution (the Shenyang Medical Device Quality Supervision and Inspection Center of the State Food and Drug Administration), which issued a test report; the requirements for clinical use were met.
References

[A] C. B. Saw, et al., "Implementation of fiducial-based image registration in the CyberKnife robotic system", Med. Dosimetry, Vol. 33, No. 2, pp. 156-160, 2008.

[B] Z. Mu, et al., IEEE TMI, Vol. 27, No. 9, pp. 1288-1300, 2008.

[C] D. Fu, et al., "A fast, accurate, and automatic 2D-3D image registration for image-guided cranial radiosurgery", Med. Phys., Vol. 35, No. 5, pp. 2180-2194, 2008.

[D] M. J. Murphy, "Fiducial-based targeting accuracy for external-beam radiotherapy", Med. Phys., Vol. 29, No. 3, pp. 334-344, 2002.

[E] E. J. Chen, et al., IEEE UFFC, Vol. 43, No. 1, pp. 191-194, 1996.

Claims

1. An image guidance method using two-dimensional images, comprising the following steps:

1) selection and screening of reference feature areas: selecting and marking, on a three-dimensional image taken of a patient for whom the isocenter has been determined or on a plurality of two-dimensional DRRs generated from the three-dimensional image, several feature areas with distinct features, the several feature areas with distinct features comprising feature areas formed by implanted markers and feature areas formed near anatomical structures, or comprising only feature areas formed near anatomical structures, and then screening out the reference feature areas, at least one of which is a feature area formed near an anatomical structure;

2) searching for real-time feature areas: searching the real-time images for real-time feature areas at positions corresponding to the reference feature areas of step 1), the real-time feature areas found corresponding to all of the reference feature areas or to a subset of the reference feature areas;

3) positioning calculation based on the reference feature areas and the real-time feature areas: determining the positional deviation used for image guidance by comparing the positions of the real-time feature areas in the real-time images with the positions of the reference feature areas in the two-dimensional DRRs.
2. The image guidance method using two-dimensional images according to claim 1, characterized in that it further comprises a step of estimating the error of the positional deviation result obtained in step 3), the error estimation comprising estimation of a rotation error and of a deformation error.

3. The image guidance method using two-dimensional images according to claim 2, characterized in that the rotation error is estimated by computing the possible error of the isocenter position, the possible error of the isocenter position being L*Δθ, where L is the distance between the center of the reference feature area and the isocenter and Δθ is the rotation error.

4. The image guidance method using two-dimensional images according to claim 2, characterized in that the deformation error is estimated by: integrating, according to the position of each reference feature area in the three-dimensional image, the deformability of the tissue along the line between the center of that reference feature area and the isocenter to obtain the possible deformation of that reference feature area relative to the isocenter; averaging or weight-averaging the deformations to obtain an estimate of the deformation of the reference feature areas; giving, from this deformation estimate, an estimate of the isocenter position error caused by the deformation; and finally computing the error of the reference feature areas relative to the isocenter position by table lookup or by an empirical formula.
5. The image guidance method using two-dimensional images according to claim 1, characterized in that the step in step 1) of selecting several feature areas with distinct features on the two-dimensional DRRs comprises: a) taking any one of the plurality of two-dimensional DRRs generated from the three-dimensional image taken of the patient as a first two-dimensional DRR, and selecting on the first two-dimensional DRR a point A with a strong feature as the center point of a feature area with distinct features; b) marking, on a second two-dimensional DRR, the projection line corresponding to point A; c) selecting on that projection line a point B with a strong feature as the center point of the feature area with distinct features on the second two-dimensional DRR; and d) determining the position of the reference feature area in the three-dimensional image from the two center points A and B of steps a) and c) by means of the back-projection relationship.

6. The image guidance method using two-dimensional images according to claim 1, characterized in that the process of selecting and screening out the reference feature areas in step 1) is completed automatically by the device responsible for image processing.

7. The image guidance method using two-dimensional images according to claim 6, characterized in that the device responsible for image processing performs the automatic screening in either of two ways: in the first, the feature degree of each reference feature area in the three-dimensional image is computed and a threshold is set, the reference feature area being deleted if its feature degree is below the threshold and kept otherwise; in the second, several test DRRs representing larger body-position differences are generated, the positions of the reference feature areas in these test DRRs are computed, and, for each reference feature area, its similarity between all the test DRRs and the two-dimensional DRRs of step 1) is computed, the reference feature area being kept if the proportion of test DRRs in which its similarity exceeds a preset threshold reaches or exceeds a preset proportion.

8. The image guidance method using two-dimensional images according to claim 1, characterized in that the size and shape of the reference feature areas and the real-time feature areas are adjustable.

9. The image guidance method using two-dimensional images according to claim 1, characterized in that the plurality of two-dimensional DRRs are generated by projecting the three-dimensional image from a plurality of angles and positions.
PCT/CN2014/075126 2013-04-12 2014-04-10 Image guidance method employing two-dimensional imaging WO2014166415A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310127363.7 2013-04-12
CN201310127363.7A CN103876763A (en) 2012-12-21 2013-04-12 Image guide method implemented by aid of two-dimensional images

Publications (1)

Publication Number Publication Date
WO2014166415A1 true WO2014166415A1 (en) 2014-10-16

Family

ID=51690142

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/075126 WO2014166415A1 (en) 2013-04-12 2014-04-10 Image guidance method employing two-dimensional imaging

Country Status (1)

Country Link
WO (1) WO2014166415A1 (en)
