
CN116152071A - A Fusion and Reconstruction Method for Multi-View Anisotropic 3D Images - Google Patents

Info

Publication number
CN116152071A
CN116152071A (Application CN202310215727.0A)
Authority
CN
China
Prior art keywords
images
image
floating
reference image
dimensional
Prior art date
Legal status
Pending
Application number
CN202310215727.0A
Other languages
Chinese (zh)
Inventor
屈磊
李紫翔
吴军
李园园
黄志祥
陈宇飞
朱铃菲
Current Assignee
Anhui University
Original Assignee
Anhui University
Priority date
Filing date
Publication date
Application filed by Anhui University
Priority to CN202310215727.0A
Publication of CN116152071A
Legal status: Pending

Classifications

    • G06T 3/4053 - Scaling of whole images or parts thereof based on super-resolution (output resolution higher than sensor resolution)
    • G06T 3/02 - Affine transformations
    • G06T 3/4007 - Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G06T 3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/70 - Denoising; smoothing
    • G06V 10/443 - Local feature extraction by matching or filtering
    • G06T 2207/10088 - Magnetic resonance imaging [MRI]
    • G06T 2207/20221 - Image fusion; image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a fusion and reconstruction method for multi-view anisotropic three-dimensional images, comprising: acquiring three orthogonal MRI images of a small mammal's organ; making the spatial proportions of the three orthogonal MRI images consistent to obtain images I1, I2 and I3; extracting feature points on a reference image R1; computing the affine transformation matrices between the reference image and the floating images from the correspondence between the two resulting sets of feature-point pairs; obtaining two generated images R2 and R3; obtaining three brightness-adjusted images R1', R2' and R3'; and obtaining a three-dimensional isotropic image, the final output image Io. The invention accurately stitches and aligns three images taken from different viewing angles; by fusing and reconstructing multiple orthogonal images of low inter-slice resolution, the clarity of the originally low-resolution images is greatly improved, providing high-definition image data for research on the organ function of small mammals.

Description

A Fusion and Reconstruction Method for Multi-View Anisotropic 3D Images

Technical Field

The invention relates to the technical field of 3D biomedical image processing, in particular to a fusion and reconstruction method for multi-view anisotropic three-dimensional images.

Background Art

Magnetic resonance imaging (MRI) is a non-invasive imaging technique that produces detailed three-dimensional anatomical images and is commonly used for disease detection, diagnosis and treatment monitoring. Limited by imaging technology, current MRI is mainly used for imaging human organs; the inter-slice resolution achievable by typical medical equipment is at the millimetre level, and improving it requires a very strong imaging magnetic field.

MRI imaging of small mammals remains difficult. The brain of a tree shrew, for example, is under two centimetres across, so imaging equipment of very high resolution is needed to obtain images usable for analysis; the imaging conditions are demanding and image samples are scarce. Current methods for improving the inter-slice resolution of images in post-processing usually require preset prior conditions and large amounts of high-quality data for learning, which conflicts with the reality that high-quality image data are hard to obtain in practice.

Summary of the Invention

To overcome the defect that existing imaging equipment can hardly obtain images of high inter-slice resolution when performing MRI on the tiny organs of small animals, the object of the present invention is to provide a fusion and reconstruction method for multi-view anisotropic three-dimensional images that acquires approximately orthogonal images of low inter-slice resolution through three scans from different viewing angles and then, after a series of processing steps, fuses and reconstructs them into an isotropic three-dimensional image of high inter-slice resolution; the method provides high-definition image data for research on the organ function of small mammals, is highly practicable, and achieves a good fusion and reconstruction result.

To achieve the above object, the present invention adopts the following technical solution: a fusion and reconstruction method for multi-view anisotropic three-dimensional images, comprising the following sequential steps:

(1) Acquire three orthogonal MRI images of a small mammal's organ;

(2) Convert the three orthogonal MRI images from the nii format to the v3draw format; compute the ratio of the in-slice resolution to the inter-slice resolution and interpolate the inter-slice pixels by linear interpolation, so that the spatial proportions of the three orthogonal MRI images become consistent, yielding images I1, I2 and I3;

(3) After linear interpolation, select the image of best quality among I1, I2 and I3, namely I1, as the reference image R1, and take the other two images I2 and I3 as floating images; manually pick a small number of feature points characterizing image orientation on each of R1, I2 and I3, and extract feature points on the reference image R1 with the 2.5D Harris corner detection algorithm;

(4) Use the CLM coherent landmark mapping matching algorithm to search the two floating images I2 and I3 for feature points matching those on the reference image R1, and compute the affine transformation matrices between the reference image and the floating images from the correspondence between the two resulting sets of feature-point pairs;

(5) Apply the affine transformation matrices to the two floating images so that I2 and I3 are fully aligned with the reference image R1, yielding two generated images R2 and R3;

(6) Taking the brightness distribution of the reference image R1 as the reference, adjust the brightness of the two generated images R2 and R3 so that the brightness distributions of all three are consistent, yielding the brightness-adjusted images R1', R2' and R3';

(7) Fuse and reconstruct the three images R1', R2' and R3' with the three-dimensional gradient-enhanced fusion method to obtain a three-dimensional isotropic image, the final output image Io.

Step (2) specifically comprises the following steps:

(2a) Convert the anisotropic three-dimensional nii-format images to the v3draw format;

(2b) Compute the inter-slice upsampling factor from the ratio of the in-slice resolution to the inter-slice resolution: the acquired images have an in-slice resolution of n*n microns and an inter-slice resolution of m microns, with m > n, so the inter-slice upsampling factor is α = m/n;

(2c) Derive the upsampled image size from the upsampling factor: if the initial anisotropic image size is X*Y*Z, the interpolated output image size is X*Y*(Z*α);

(2d) Upsample the three-dimensional anisotropic images between slices to the output size, using bilinear interpolation; interpolating the three images in the same way yields three images I1, I2 and I3 with consistent spatial proportions.
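The upsampling of steps (2b) to (2d) can be sketched as follows. This is an illustrative NumPy implementation, not the patent's code: the function name and signature are assumptions, and for brevity it interpolates linearly along the slice axis only.

```python
import numpy as np

def upsample_interslice(vol, in_plane_um, inter_slice_um):
    """Resample the slice axis so voxels become isotropic.

    vol has shape (X, Y, Z); in-plane resolution is n x n microns,
    inter-slice spacing is m microns (m > n), per step (2b)."""
    alpha = inter_slice_um / in_plane_um    # upsampling factor alpha = m / n
    X, Y, Z = vol.shape
    new_z = int(round(Z * alpha))           # output size X*Y*(Z*alpha), step (2c)
    # output slice positions expressed in input-slice coordinates
    zs = np.linspace(0.0, Z - 1, new_z)
    z0 = np.floor(zs).astype(int)
    z1 = np.minimum(z0 + 1, Z - 1)
    w = (zs - z0)[None, None, :]            # linear interpolation weights
    return vol[:, :, z0] * (1.0 - w) + vol[:, :, z1] * w
```

With an in-plane resolution of 1 micron and an inter-slice spacing of 2 microns, a (2, 2, 3) volume is resampled to (2, 2, 6), and the first and last output slices coincide with the first and last input slices.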

Step (3) specifically comprises the following steps:

(3a) Compare the three linearly interpolated images and select the image of best quality, I1, as the reference image, named R1; the two floating images are I2 and I3. Best quality here means the largest field of view and a uniform brightness distribution;

(3b) On each of the three images R1, I2 and I3, manually pick 8 feature points that represent the approximate orientation distribution of the three images;

(3c) Using the 2.5D Harris corner detection algorithm, set the non-maximum-suppression windows, with the radius of the three-dimensional window set to 25 and that of the two-dimensional window set to 20, and extract 700 feature points on the reference image R1;

(3d) Filter the 700 generated feature points and keep the final 500.
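Steps (3c) and (3d) rely on non-maximum suppression to spread the extracted corners across the volume. The 2.5D Harris response itself is not shown in the source; the sketch below assumes a precomputed response array and illustrates only a greedy suppression step (the function name is hypothetical).

```python
import numpy as np

def select_corners(response, radius, n_keep):
    """Greedy non-maximum suppression: repeatedly take the strongest
    remaining response and zero out a window of the given radius
    around it, keeping at most n_keep points."""
    r = response.astype(float).copy()
    points = []
    for _ in range(n_keep):
        idx = np.unravel_index(np.argmax(r), r.shape)
        if r[idx] <= 0:                      # no positive responses left
            break
        points.append(idx)
        window = tuple(slice(max(0, i - radius), i + radius + 1) for i in idx)
        r[window] = 0.0                      # suppress the neighbourhood
    return points
```

A weaker response falling inside the suppression window of a stronger one is discarded, which is what keeps the retained corners spatially spread out.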

Step (4) specifically comprises the following steps:

(4a) Using the manually picked feature points, compute coarse three-dimensional affine transformation matrices from floating image I2 to reference image R1 and from floating image I3 to R1; apply these coarse matrices to I2 and I3 as a preliminary affine transformation toward R1, yielding floating images I2' and I3' whose basic orientation is consistent with that of R1;

(4b) Take as input the reference image R1, the feature points extracted on R1 with the 2.5D Harris corner detection algorithm, and the orientation-adjusted floating images I2' and I3'; using the CLM coherent landmark mapping matching algorithm with a search radius of 10, find by iterative search the feature points on I2' and I3' that match the feature points on R1;

(4c) Using the inverses of the coarse three-dimensional affine matrices, transform the feature points on I2' and I3' back onto I2 and I3 to obtain feature points on I2 and I3 that accurately match the reference image R1; from these two sets of accurately matched feature-point pairs, compute the affine transformation matrices from floating image I2 to reference image R1 and from floating image I3 to R1.
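The affine matrices of steps (4a) and (4c) can be estimated from matched point pairs by linear least squares. A minimal sketch, assuming (N, 3) arrays of corresponding points (N >= 4) as produced by the matching step; the helper name is hypothetical.

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares 3D affine [A | t] such that dst ~= src @ A.T + t,
    estimated from N >= 4 matched (x, y, z) point pairs."""
    n = src.shape[0]
    H = np.hstack([src, np.ones((n, 1))])   # homogeneous coordinates [x y z 1]
    # solve H @ X ~= dst; X stacks A.T on top of the translation row
    X, _, _, _ = np.linalg.lstsq(H, dst, rcond=None)
    return X.T                              # 3x4 matrix [A | t]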

Step (5) specifically comprises the following steps:

(5a) Generate two all-zero images of the same size as the reference image R1; to fill in the pixel values of these two empty images, use the affine transformation matrices to find, for each of their pixel positions, the corresponding location in the original floating images I2 and I3;

(5b) Linearly interpolate each generated pixel value from the pixel values of the eight neighbouring positions around that location in the original image; if a target pixel's location in the original image falls outside the boundary, take instead the pixel value of the reference image R1 at that position. In this way the generated image R2 is obtained from floating image I2, and R3 from floating image I3.
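Steps (5a) and (5b) describe backward warping with a boundary fallback. The sketch below is an assumption-laden illustration: nearest-neighbour sampling replaces the patent's eight-neighbourhood linear interpolation to keep it short, and the function name is hypothetical.

```python
import numpy as np

def warp_to_reference(floating, reference, M):
    """Backward-map every voxel of an all-zero image the size of the
    reference through the 3x4 affine M into the floating image.
    Out-of-bounds positions fall back to the reference image's own
    value, as in step (5b)."""
    out = np.zeros_like(reference, dtype=float)
    fx, fy, fz = floating.shape
    for x in range(reference.shape[0]):
        for y in range(reference.shape[1]):
            for z in range(reference.shape[2]):
                sx, sy, sz = M @ np.array([x, y, z, 1.0])
                i, j, k = int(round(sx)), int(round(sy)), int(round(sz))
                if 0 <= i < fx and 0 <= j < fy and 0 <= k < fz:
                    out[x, y, z] = floating[i, j, k]   # sample floating image
                else:
                    out[x, y, z] = reference[x, y, z]  # boundary fallback
    return out
```

With the identity transform the floating image is copied through unchanged; a transform that maps every voxel outside the floating volume reproduces the reference image, which is the fallback behaviour the text specifies.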

Step (6) specifically comprises the following steps:

(6a) With a sliding one-dimensional window of radius 5, apply mean smoothing filtering along the X, Y and Z directions of the three images R1, R2 and R3 respectively, obtaining three smoothed images S1, S2 and S3;

(6b) Traverse the three images R1, R2 and R3 pixel by pixel and adjust the brightness values of the generated images R2 and R3: after brightness matching, the pixel value at each position is R2(x,y,z) = R2(x,y,z) * (S1(x,y,z)/S2(x,y,z)) and R3(x,y,z) = R3(x,y,z) * (S1(x,y,z)/S3(x,y,z)). This yields three images R1', R2' and R3' whose brightness has been adjusted and made consistent.
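The brightness matching of steps (6a) and (6b) can be sketched as below. This is an illustrative implementation, not the patent's code: np.convolve's "same" mode zero-pads at the borders (a simplification of the sliding window, which cancels in the ratio because both images are smoothed identically), and the small eps term guarding against division by zero in dark regions is an added assumption.

```python
import numpy as np

def smooth_xyz(vol, radius):
    """Sliding 1-D mean filter applied in turn along X, Y and Z."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    s = vol.astype(float)
    for axis in range(3):
        s = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, s)
    return s

def match_brightness(ref, gen, radius=5, eps=1e-6):
    """Step (6b): R'(x,y,z) = R(x,y,z) * S_ref(x,y,z) / S_gen(x,y,z)."""
    s_ref = smooth_xyz(ref, radius)
    s_gen = smooth_xyz(gen, radius)
    return gen * (s_ref / (s_gen + eps))
```

Scaling by the ratio of local means pulls the generated image's low-frequency brightness profile onto the reference's while leaving fine detail in place.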

Step (7) specifically comprises the following steps:

(7a) Generate an all-zero image of the same size as the three images R1', R2' and R3';

(7b) Traverse every pixel position of this empty image; for each position (x, y, z) of the final output image Io, read out the pixel values of R1', R2' and R3' at the corresponding position and its neighbourhood: R'(x,y,z), R'(x+1,y,z), R'(x-1,y,z), R'(x,y+1,z), R'(x,y-1,z), R'(x,y,z+1), R'(x,y,z-1);

(7c) Fuse the three images according to the three-dimensional gradient-enhanced fusion method: the pixel value at each position of the final output image Io is assigned by the fusion formulas, in which a = 4b and b ranges from 0 to 1, tuned according to the fusion result; if a formula gives Io(x,y,z) < 0 or Io(x,y,z) > 255, the fallback formula is used instead. Traversing every position in turn yields the final output image Io.

[The fusion formulas appear in the source only as equation images (Figures BDA0004114808380000041 to BDA0004114808380000044) and are not reproduced here.]
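The exact fusion formulas of step (7c) appear only as equation images in the source, so the sketch below is an assumed reading, not the patent's formula: average the three aligned images, add an enhancement term a*(centre) - b*(sum of the four in-plane neighbours) with a = 4b (i.e. b times a Laplacian of the average), and fall back to the plain average wherever the enhanced value leaves [0, 255], as the surrounding text specifies.

```python
import numpy as np

def fuse_gradient_enhanced(r1, r2, r3, b=0.2):
    """Assumed gradient-enhanced fusion of three aligned volumes.

    b in (0, 1] controls the enhancement strength; a = 4*b as stated
    in the text.  Out-of-range voxels fall back to the average."""
    a = 4.0 * b
    avg = (r1.astype(float) + r2 + r3) / 3.0
    enh = np.zeros_like(avg)
    # a*centre - b*(four in-plane neighbours), interior voxels only
    enh[1:-1, 1:-1, :] = (
        a * avg[1:-1, 1:-1, :]
        - b * (avg[2:, 1:-1, :] + avg[:-2, 1:-1, :]
               + avg[1:-1, 2:, :] + avg[1:-1, :-2, :])
    )
    out = avg + enh
    bad = (out < 0) | (out > 255)
    out[bad] = avg[bad]                      # fallback to plain average
    return out
```

On a constant volume the enhancement term vanishes (a = 4b makes the centre weight cancel the four neighbour weights), so the fused output equals the input, which is the expected behaviour of a Laplacian-style sharpening term.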

It can be seen from the above technical solution that the beneficial effects of the present invention are as follows. First, the three images taken from different viewing angles are accurately stitched and aligned, so that images of different fields of view acquired from multiple viewpoints complement one another and the effective information contained in the three images is integrated. Second, by fusing and reconstructing multiple orthogonal images of low inter-slice resolution, the clarity of the originally low-resolution images is greatly improved, providing high-definition image data for research on the organ function of small mammals. Third, the invention proposes a new image acquisition approach: fusion and reconstruction can be performed from only a small number of images acquired from different viewpoints, so the method is highly practicable and achieves a good fusion and reconstruction result.

Brief Description of the Drawings

Fig. 1 is the flowchart of the method of the present invention;

Fig. 2 shows the three acquired anisotropic images of low inter-slice resolution;

Fig. 3 shows the result of inter-slice interpolation of the low-resolution anisotropic images;

Fig. 4 shows the effect of accurately registering the two floating images to the reference image;

Fig. 5 shows the effect of brightness matching of the three images;

Fig. 6 shows the effect of the three images after fusion and reconstruction.

Detailed Description of the Embodiments

As shown in Fig. 1, a fusion and reconstruction method for multi-view anisotropic three-dimensional images comprises the following sequential steps:

(1) Acquire three orthogonal MRI images of a small mammal's organ. Owing to the limits of imaging technology, the inter-slice resolution of the acquired images is lower than the in-slice resolution, and strictly orthogonal scans are difficult to achieve during acquisition;

(2) Convert the three orthogonal MRI images from the nii format to the v3draw format; compute the ratio of the in-slice resolution to the inter-slice resolution and interpolate the inter-slice pixels by linear interpolation, so that the spatial proportions of the three orthogonal MRI images become consistent, yielding images I1, I2 and I3;

(3) After linear interpolation, select the image of best quality among I1, I2 and I3, namely I1, as the reference image R1, and take the other two images I2 and I3 as floating images; manually pick a small number of feature points characterizing image orientation on each of R1, I2 and I3, and extract feature points on the reference image R1 with the 2.5D Harris corner detection algorithm;

(4) Use the CLM coherent landmark mapping matching algorithm to search the two floating images I2 and I3 for feature points matching those on the reference image R1, and compute the affine transformation matrices between the reference image and the floating images from the correspondence between the two resulting sets of feature-point pairs. The CLM coherent landmark mapping matching algorithm is from the paper "Cross-Modality Coherent Registration of Whole Mouse Brains";

(5) Apply the affine transformation matrices to the two floating images so that I2 and I3 are fully aligned with the reference image R1, yielding two generated images R2 and R3;

(6) Taking the brightness distribution of the reference image R1 as the reference, adjust the brightness of the two generated images R2 and R3 so that the brightness distributions of all three are consistent, yielding the brightness-adjusted images R1', R2' and R3';

(7) Fuse and reconstruct the three images R1', R2' and R3' with the three-dimensional gradient-enhanced fusion method to obtain a three-dimensional isotropic image, the final output image Io.

Step (2) specifically comprises the following steps:

(2a) Convert the anisotropic three-dimensional nii-format images to the v3draw format;

(2b) Compute the inter-slice upsampling factor from the ratio of the in-slice resolution to the inter-slice resolution: the acquired images have an in-slice resolution of n*n microns and an inter-slice resolution of m microns, with m > n, so the inter-slice upsampling factor is α = m/n;

(2c) Derive the upsampled image size from the upsampling factor: if the initial anisotropic image size is X*Y*Z, the interpolated output image size is X*Y*(Z*α);

(2d) Upsample the three-dimensional anisotropic images between slices to the output size, using bilinear interpolation; interpolating the three images in the same way yields three images I1, I2 and I3 with consistent spatial proportions.

Step (3) specifically comprises the following steps:

(3a) Compare the three linearly interpolated images and select the image of best quality, I1, as the reference image, named R1; the two floating images are I2 and I3. Best quality here means the largest field of view and a uniform brightness distribution;

(3b) On each of the three images R1, I2 and I3, manually pick 8 feature points that represent the approximate orientation distribution of the three images;

(3c) Using the 2.5D Harris corner detection algorithm, set the non-maximum-suppression windows, with the radius of the three-dimensional window set to 25 and that of the two-dimensional window set to 20, and extract 700 feature points on the reference image R1;

(3d) Filter the 700 generated feature points and keep the final 500.

Step (4) specifically comprises the following steps:

(4a) Using the manually picked feature points, compute coarse three-dimensional affine transformation matrices from floating image I2 to reference image R1 and from floating image I3 to R1; apply these coarse matrices to I2 and I3 as a preliminary affine transformation toward R1, yielding floating images I2' and I3' whose basic orientation is consistent with that of R1;

(4b) Take as input the reference image R1, the feature points extracted on R1 with the 2.5D Harris corner detection algorithm, and the orientation-adjusted floating images I2' and I3'; using the CLM coherent landmark mapping matching algorithm with a search radius of 10, find by iterative search the feature points on I2' and I3' that match the feature points on R1;

(4c) Using the inverses of the coarse three-dimensional affine matrices, transform the feature points on I2' and I3' back onto I2 and I3 to obtain feature points on I2 and I3 that accurately match the reference image R1; from these two sets of accurately matched feature-point pairs, compute the affine transformation matrices from floating image I2 to reference image R1 and from floating image I3 to R1.

Step (5) specifically comprises the following steps:

(5a) Generate two all-zero images of the same size as the reference image R1; to fill in the pixel values of these two empty images, use the affine transformation matrices to find, for each of their pixel positions, the corresponding location in the original floating images I2 and I3;

(5b) Linearly interpolate each generated pixel value from the pixel values of the eight neighbouring positions around that location in the original image; if a target pixel's location in the original image falls outside the boundary, take instead the pixel value of the reference image R1 at that position. In this way the generated image R2 is obtained from floating image I2, and R3 from floating image I3.

Step (6) specifically comprises the following steps:

(6a) With a sliding one-dimensional window of radius 5, apply mean smoothing filtering along the X, Y and Z directions of the three images R1, R2 and R3 respectively, obtaining three smoothed images S1, S2 and S3;

(6b) Traverse the three images R1, R2 and R3 pixel by pixel and adjust the brightness values of the generated images R2 and R3: after brightness matching, the pixel value at each position is R2(x,y,z) = R2(x,y,z) * (S1(x,y,z)/S2(x,y,z)) and R3(x,y,z) = R3(x,y,z) * (S1(x,y,z)/S3(x,y,z)). This yields three images R1', R2' and R3' whose brightness has been adjusted and made consistent.

Step (7) specifically comprises the following steps:

(7a) Generate an all-zero image of the same size as the three images R1', R2' and R3';

(7b) Traverse every voxel position of the empty image from step (7a). For each position (x,y,z) in the final output image Io, take from each of the three images R1', R2' and R3' the values at that position and its neighbors: R'(x,y,z), R'(x+1,y,z), R'(x-1,y,z), R'(x,y+1,z), R'(x,y-1,z), R'(x,y,z+1), R'(x,y,z-1);

(7c) Fuse the three images with the three-dimensional gradient-enhanced fusion method, as follows: the value assigned to each voxel position of the final output image Io is

Figure BDA0004114808380000081
Figure BDA0004114808380000082
Figure BDA0004114808380000083

where a = 4b and b ranges from 0 to 1, tuned according to the fusion result. If the above expression gives Io(x,y,z) < 0 or Io(x,y,z) > 255, take instead

Figure BDA0004114808380000084

Traversing all positions in this way yields the final output image Io.
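The three fusion formulas themselves appear only as equation images (the "Figure BDA…" references) and cannot be recovered from the text. The sketch below is therefore only a guess at the spirit of step (7c), not the patent's formula: each aligned volume is sharpened with a discrete 6-neighborhood Laplacian, the sharpened volumes are averaged, and voxels falling outside [0, 255] fall back to the plain average, mirroring the clamping rule. The coefficient relation a = 4b from the text is not reproduced; the coefficients here are chosen so that flat regions are preserved.

```python
import numpy as np

def fuse_gradient_enhanced(vols, b=0.2):
    """Hypothetical gradient-enhanced fusion of aligned volumes: sharpen
    each volume with a 6-neighborhood Laplacian, average the results,
    and fall back to the plain average wherever the enhanced value
    leaves [0, 255] (mirroring the clamping rule of step (7c))."""
    vols = [v.astype(np.float64) for v in vols]
    plain = sum(vols) / len(vols)              # fallback: plain average
    out = np.zeros_like(plain)
    for v in vols:
        nb = np.zeros_like(v)
        for axis in range(3):                  # sum of the 6 face neighbors
            nb += np.roll(v, 1, axis) + np.roll(v, -1, axis)
        # (1 + 6b)*center - b*neighbors preserves flat regions;
        # np.roll gives periodic borders, kept here for brevity
        out += (1.0 + 6.0 * b) * v - b * nb
    out /= len(vols)
    bad = (out < 0.0) | (out > 255.0)
    out[bad] = plain[bad]                      # clamping fallback
    return out
```

On a constant volume the Laplacian term vanishes, so the fused result equals the input; real gains appear only at edges, where each view contributes its sharpest gradients.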

As shown in Fig. 2, the images are three approximately orthogonal MRI images of a tree shrew brain acquired under a high-strength magnetic field, with resolution reaching the micron level. As the images show, the in-plane resolution of each image is far higher than its inter-slice resolution, and the three-dimensional view shows that the images are clearly inconsistent in spatial scale.

As shown in Fig. 3, after the upsampling ratio is computed and the inter-slice voxels are filled by bilinear interpolation, each of the three resulting images has optimal quality only in its original high-resolution view, while the two interpolated views are of lower quality. The middle image has the largest imaging coverage and the best imaging quality and serves as the reference image; the other two serve as floating images.
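The slice-direction upsampling described here (and detailed in claim 2) follows from α = m/n: a volume of size X*Y*Z becomes X*Y*(Z*α). A minimal sketch, assuming NumPy and treating the slice axis as the first array axis; interpolation along that single axis is linear, the one-dimensional case of the bilinear filling mentioned above, and the function name is illustrative.

```python
import numpy as np

def upsample_slices(vol, n, m):
    """Resample an anisotropic volume (in-plane voxel n*n um, slice
    spacing m um, m > n) along the slice axis (axis 0) so the voxel
    spacing becomes isotropic: Z -> round(Z * alpha), alpha = m / n."""
    alpha = m / n                              # inter-slice upsampling factor
    z = vol.shape[0]
    new_z = int(round(z * alpha))
    src = np.linspace(0.0, z - 1.0, new_z)     # source coordinate per output slice
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, z - 1)
    frac = (src - lo)[:, None, None]           # linear blend weights
    return (1.0 - frac) * vol[lo] + frac * vol[hi]
```

For example, with n = 25 and m = 100 the factor α is 4, so a 512×512×64 stack is resampled to 512×512×256.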

As shown in Fig. 4, in preparation for the subsequent image fusion, the two floating images are precisely registered to the reference image; after the transformation the feature positions of the three images are fully aligned, but at this point their brightness distributions are still inconsistent.

As shown in Fig. 5, to keep the brightness distributions of the three images consistent, the other two images are brightness-matched against the brightness distribution of the reference image; after adjustment the three images have consistent brightness distributions.

As shown in Fig. 6, for the three fully aligned, brightness-adjusted images, each image is still of high quality in only one view, the other two views remaining degraded by interpolation. After fusion with the three-dimensional gradient-enhanced method, the result shown in Fig. 6 is obtained; as the result demonstrates, the fused image achieves optimal quality in all three views.

In summary, the present invention precisely stitches and aligns three images acquired from different viewing angles, so that images covering different fields of view complement one another and the effective information contained in the three images is integrated. Through the fused reconstruction of multiple orthogonal images with low inter-slice resolution, the clarity of the originally low-resolution images is greatly improved, providing high-definition image data support for studies of organ function in small mammals.

Claims (7)

1. A method for fusion reconstruction of multi-view anisotropic three-dimensional images, characterized in that the method comprises the following steps in order:
(1) acquiring three orthogonal MRI images of a small mammal organ;
(2) converting the three orthogonal MRI images from nii format to v3draw format, computing the ratio of intra-slice to inter-slice resolution, and interpolating the inter-slice voxels by linear interpolation so that the spatial scales of the three orthogonal MRI images become consistent, obtaining images I1, I2 and I3;
(3) after linear interpolation, selecting the image of best quality among I1, I2 and I3, namely I1, as the reference image R1, with the other two images I2 and I3 as floating images; manually selecting on the three images R1, I2 and I3 a small number of feature points that characterize image orientation, and extracting feature points on the reference image R1 with the 2.5D Harris corner detection algorithm;
(4) searching the two floating images I2 and I3 with the CLM coherent-landmark-mapping matching algorithm for feature points that match those on the reference image R1, and computing the affine transformation matrices between the reference image and the floating images from the correspondence of the two resulting sets of matched feature-point pairs;
(5) applying the affine transformation matrices to the two floating images so that I2 and I3 are fully aligned with the reference image R1, obtaining two generated images R2 and R3;
(6) taking the brightness distribution of the reference image R1 as a reference, adjusting the brightness of the two generated images R2 and R3 so that the three brightness distributions are consistent, obtaining the brightness-adjusted images R1', R2' and R3';
(7) fusing and reconstructing the three images R1', R2' and R3' with the three-dimensional gradient-enhanced fusion method to obtain a three-dimensional isotropic image, namely the final output image Io.
2. The method for fusion reconstruction of multi-view anisotropic three-dimensional images according to claim 1, characterized in that step (2) specifically comprises the following steps:
(2a) converting the anisotropic three-dimensional nii-format image to v3draw format;
(2b) computing the inter-slice upsampling factor from the ratio of intra-slice to inter-slice resolution: with an intra-slice resolution of n*n microns and an inter-slice resolution of m microns, m > n, the inter-slice upsampling factor is α = m/n;
(2c) computing the upsampled image size from the upsampling factor: for an initial anisotropic image of size X*Y*Z, the interpolated output image size is X*Y*(Z*α);
(2d) upsampling the three-dimensional anisotropic image between slices to the output size using bilinear interpolation; interpolating the three images in the same way yields three images I1, I2 and I3 with consistent spatial scale.
3. The method for fusion reconstruction of multi-view anisotropic three-dimensional images according to claim 1, characterized in that step (3) specifically comprises the following steps:
(3a) comparing the three linearly interpolated images and selecting the image of best quality, I1, as the reference image, named R1, the two floating images being I2 and I3; best quality means the largest field of view and a uniform brightness distribution;
(3b) manually selecting on each of the three images R1, I2 and I3 eight feature points that represent the approximate orientation of the image;
(3c) using the 2.5D Harris corner detection algorithm with non-maximum-suppression windows, the radii of the three-dimensional and two-dimensional non-maximum-suppression windows being set to 25 and 20 respectively, to extract 700 feature points on the reference image R1;
(3d) screening the 700 generated feature points and finally retaining 500 feature points.
4. The method for fusion reconstruction of multi-view anisotropic three-dimensional images according to claim 1, characterized in that step (4) specifically comprises the following steps:
(4a) using the manually selected feature points to compute coarse three-dimensional affine transformation matrices from floating image I2 to reference image R1 and from floating image I3 to reference image R1, and applying these coarse matrices to perform a preliminary affine transformation of I2 and I3 towards R1, obtaining floating images I2' and I3' whose overall orientation is consistent with R1;
(4b) taking as input the reference image R1, the feature points extracted on R1 by the 2.5D Harris corner detection algorithm, and the orientation-adjusted floating images I2' and I3', and using the CLM coherent-landmark-mapping matching algorithm with a search radius of 10 to find, by iterative search, the feature points on I2' and I3' that match those on R1;
(4c) using the inverses of the coarse affine matrices to transform the feature points on I2' and I3' back onto I2 and I3, obtaining feature points on I2 and I3 that accurately match the reference image R1, and computing from these two sets of accurately matched point pairs the affine transformation matrices from I2 to R1 and from I3 to R1.
5. The method for fusion reconstruction of multi-view anisotropic three-dimensional images according to claim 1, characterized in that step (5) specifically comprises the following steps:
(5a) generating two all-zero images of the same size as the reference image R1; to obtain the value of each voxel of the two empty images, computing from the affine transformation matrices the position of each voxel in the original floating images I2 and I3, respectively;
(5b) linearly interpolating the value of each generated voxel from the eight neighboring voxels surrounding that position in the original image; if the mapped position of an output voxel falls outside the boundary of the original image, the voxel takes the value of the reference image R1 at that position; the generated image R2 is obtained from the floating image I2, and the generated image R3 from the floating image I3.
6. The method for fusion reconstruction of multi-view anisotropic three-dimensional images according to claim 1, characterized in that step (6) specifically comprises the following steps:
(6a) using a sliding one-dimensional window of radius 5, applying mean smoothing filtering to the three images R1, R2 and R3 along the X, Y and Z views, obtaining three smoothed images S1, S2 and S3;
(6b) traversing the three images R1, R2 and R3 voxel by voxel and adjusting the brightness values of the generated images R2 and R3; after brightness matching, the value at each position is R2(x,y,z) = R2(x,y,z) · (S1(x,y,z)/S2(x,y,z)) and R3(x,y,z) = R3(x,y,z) · (S1(x,y,z)/S3(x,y,z)), yielding three brightness-consistent images R1', R2' and R3'.
7. The method for fusion reconstruction of multi-view anisotropic three-dimensional images according to claim 1, characterized in that step (7) specifically comprises the following steps:
(7a) generating an all-zero image of the same size as the three images R1', R2' and R3';
(7b) traversing every voxel position of the image from step (7a); for each position (x,y,z) in the final output image Io, taking from each of R1', R2' and R3' the values at that position and its neighbors: R'(x,y,z), R'(x+1,y,z), R'(x-1,y,z), R'(x,y+1,z), R'(x,y-1,z), R'(x,y,z+1), R'(x,y,z-1);
(7c) fusing the three images with the three-dimensional gradient-enhanced fusion method, as follows: the value assigned to each voxel position of the final output image Io is
Figure FDA0004114808360000041
Figure FDA0004114808360000042
Figure FDA0004114808360000043
where a = 4b and b ranges from 0 to 1, tuned according to the fusion result; if the above expression gives Io(x,y,z) < 0 or Io(x,y,z) > 255, take instead
Figure FDA0004114808360000044
Traversing all positions in this way yields the final output image Io.
CN202310215727.0A 2023-03-08 2023-03-08 A Fusion and Reconstruction Method for Multi-View Anisotropic 3D Images Pending CN116152071A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310215727.0A CN116152071A (en) 2023-03-08 2023-03-08 A Fusion and Reconstruction Method for Multi-View Anisotropic 3D Images


Publications (1)

Publication Number Publication Date
CN116152071A 2023-05-23

Family

ID=86350676

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310215727.0A Pending CN116152071A (en) 2023-03-08 2023-03-08 A Fusion and Reconstruction Method for Multi-View Anisotropic 3D Images

Country Status (1)

Country Link
CN (1) CN116152071A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036589A (en) * 2023-06-30 2023-11-10 成都飞机工业(集团)有限责任公司 Three-dimensional reconstruction method, device, equipment and medium based on multi-view geometry
CN117036589B (en) * 2023-06-30 2025-01-24 成都飞机工业(集团)有限责任公司 Three-dimensional reconstruction method, device, equipment and medium based on multi-view geometry
CN117974448A (en) * 2024-04-02 2024-05-03 中国科学院自动化研究所 Three-dimensional medical image isotropy super-resolution method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination