
CN107789056B - A Medical Image Matching Fusion Method - Google Patents


Info

Publication number
CN107789056B
CN107789056B (application CN201710976330.8A)
Authority
CN
China
Prior art keywords
image
texture features
matching
human tissue
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710976330.8A
Other languages
Chinese (zh)
Other versions
CN107789056A (en)
Inventor
孙品
刘广伟
卢云
张卓立
于綦悦
张宪祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Affiliated Hospital of University of Qingdao
Original Assignee
Affiliated Hospital of University of Qingdao
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Affiliated Hospital of University of Qingdao filed Critical Affiliated Hospital of University of Qingdao
Priority to CN201710976330.8A
Publication of CN107789056A
Application granted
Publication of CN107789056B

Classifications

    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • G06T 7/33: Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 7/38: Registration of image sequences
    • G06T 7/40: Analysis of texture
    • A61B 2034/101: Computer-aided simulation of surgical operations
    • A61B 2034/105: Modelling of the patient, e.g. for ligaments or bones
    • A61B 2034/108: Computer-aided selection or customisation of medical implants or cutting guides
    • A61B 2034/2046: Tracking techniques
    • A61B 2034/2063: Acoustic tracking systems, e.g. using ultrasound
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/10088: Magnetic resonance imaging [MRI]
    • G06T 2207/10104: Positron emission tomography [PET]
    • G06T 2207/10132: Ultrasound image
    • G06T 2207/30096: Tumor; lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Robotics (AREA)
  • Biomedical Technology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention provides a medical image matching and fusion method comprising a simulation process and a matching process. The simulation process is performed before surgery and covers ultrasound image acquisition and storage; during the formal operation, image-guided surgical navigation is performed by ultrasound scanning, real-time ultrasound images are obtained, and once the lesion area is reached, the matching process is started. The method realizes matching and fusion of ultrasound images with Dicom images during image-guided surgery: when the lesion position is reached, mutually corresponding ultrasound and Dicom images are presented on the display, and the imaging angle of the Dicom image is adjusted in real time according to the doctor's operation, making it easy for the doctor to obtain intuitive, mutually corresponding ultrasound and Dicom images and improving surgical accuracy.

Description

Medical image matching and fusion method
Technical Field
The invention relates to the field of medical imaging, in particular to a medical image matching fusion method.
Background
Ultrasonic imaging scans the human body with an ultrasound beam and forms images of internal organs by receiving and processing the reflected signals. In recent years, ultrasonic imaging techniques such as gray-scale and color display, real-time imaging, ultrasonic holography, transmission ultrasonic imaging, ultrasound parallel tomography, three-dimensional imaging, and intracavitary ultrasonic imaging have been developed. Ultrasound imaging is valued for being real-time, repeatable, and mobile; its real-time character in particular has let sonographers apply ultrasound in fields such as real-time guided puncture, real-time ultrasound contrast imaging, real-time color Doppler flow imaging, and real-time elastography.
CT (computed tomography) scans sections of a body part one by one using precisely collimated X-ray beams, gamma rays, or ultrasonic waves together with a highly sensitive detector. It features short scanning times and clear images and can be used to examine many diseases. By the radiation used, it can be classified into X-ray CT (X-CT), ultrasonic CT (UCT), gamma-ray CT (γ-CT), and so on.
MR (magnetic resonance imaging) applies a radio-frequency pulse of a specific frequency to the human body within a static magnetic field, exciting hydrogen protons to produce magnetic resonance. After the pulse stops, the protons emit MR signals during relaxation; images are produced by receiving these MR signals, spatially encoding them, and reconstructing the image. Magnetic resonance imaging is thus a form of tomography that uses the magnetic resonance phenomenon to acquire electromagnetic signals from the body and reconstruct information about it.
PET is a novel imaging technology which reflects molecular metabolism and can display biomolecular metabolism, receptor and nerve medium activity on a living body, and the PET is widely used in the aspects of diagnosis and differential diagnosis of various diseases, disease judgment, curative effect evaluation, organ function research, new drug development and the like.
CT/MR/PET imaging uses the Dicom image format, which is widely adopted in clinical applications and accepted for its good resolution and ease of communication among clinicians; however, the Dicom format offers neither real-time performance nor mobility.
At present, image-guided surgery requires two-dimensional ultrasound images to position surgical instruments in real time, but a Dicom image is still needed to identify the lesion, because Dicom images are three-dimensional, have higher imaging resolution and a wider field of view, and show the lesion more clearly. Consequently, current image-guided surgery still requires the doctor to judge and treat the lesion using the ultrasound image as a guide while referring to Dicom images obtained before surgery; a large part of the operating time is spent on image comparison, and the repeated comparison affects the accuracy of the doctor's operation.
How to provide a method for fusing an ultrasonic image and a Dicom image is a problem to be solved urgently in the field of medical imaging at present.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides a medical image matching and fusing method, which realizes matching and fusing of an ultrasonic image and a Dicom image in an image-guided surgery process, is convenient for doctors to obtain the visual and mutually corresponding ultrasonic image and Dicom image, and improves the accuracy of the surgery.
The technical scheme of the invention is realized as follows:
a medical image matching fusion method comprises a simulation process and a matching process;
wherein, the simulation process is performed before the operation, and comprises the following steps:
step (a1), carrying out ultrasonic scanning on a patient before an operation, carrying out image-guided operation navigation simulation through the ultrasonic scanning, and carrying out ultrasonic image acquisition and storage after reaching a focus area;
step (a2), decomposing the collected ultrasonic image, dividing the ultrasonic image into a plurality of ultrasonic image segments according to the operation process, wherein each ultrasonic image segment corresponds to a group of Dicom images;
step (a3), decomposing each ultrasonic image segment into a frame image sequence, compressing the frame image sequence, extracting a sub-frame image at each fixed interval, compressing the frame image sequence into a group of frame images, marking the focus point in each frame image, marking the front, back, upper and lower parts of the focus point by taking the focus point as the center, and marking the focus area by the marking point;
step (a4), taking each marking point as a center, and carrying out grid division on a focus area, wherein each marking point corresponds to a grid;
step (a5), extracting the texture features of human tissues and the texture features of blood vessels in each grid area to obtain the characteristic values of each marking point in the frame image;
step (a6), collecting the characteristic values of the same mark points in a group of frame images, and collecting the characteristic value sets of the mark points in an ultrasonic image section;
step (a7), storing the characteristic value set of each marking point in the ultrasonic image segment in a subarea manner;
in the formal operation, image-guided operation navigation is carried out through ultrasonic scanning, real-time ultrasonic images are obtained through the ultrasonic scanning, and after the real-time ultrasonic images reach a focus area, a matching process is started;
the matching process comprises the following steps:
step (b1), decomposing the real-time ultrasonic image, dividing the continuous ultrasonic image into frame image sequences, and extracting a frame image at each fixed interval;
step (b2), carrying out grid segmentation on the frame image in the step (b1), wherein the grid size is 1/50-1/20 of the minimum grid in the step (a4), and extracting the texture features of the human tissues and the texture features of blood vessels in each grid after segmentation;
a step (b3) of comparing the human tissue texture features and the blood vessel texture features extracted in the step (b2) with the set of feature values stored in the step (a 7);
a step (b4) of setting the same number and capacity of memory partitions as those in the step (a7), and when the human tissue texture features and the blood vessel texture features extracted in the step (b2) are matched with the feature value set stored in the step (a7), storing the human tissue texture features and the blood vessel texture features of the matched mesh into the corresponding memory partitions;
and (b5) after matching is finished, collecting the human tissue texture features and the blood vessel texture features of each storage partition, matching the real-time ultrasonic image with the ultrasonic image section in the step (a7) when the human tissue texture features and the blood vessel texture feature sets stored in the storage partitions reach 80% or more of the characteristic value set stored in the step (a7), and calling out a corresponding group of Dicom images, wherein the real-time ultrasonic image is matched with the group of Dicom images.
Optionally, in both the simulation process and the matching process, the continuous ultrasound images are decomposed and compressed by an external image processing computer: the faster the computer, the shorter the fixed interval; the slower the computer, the longer the fixed interval.
Optionally, in the step (a2), each ultrasound image segment corresponds to a group of Dicom images, the ultrasound image segment is further divided, and the angles of the Dicom images are adjusted according to the positions of the human tissues corresponding to the ultrasound images.
Optionally, in the step (b5), after the matching is finished, the process of collecting the human tissue texture features and the blood vessel texture features of each storage partition includes: and calculating the total capacity of the human tissue texture features and the blood vessel texture features stored in the storage partitions.
Optionally, in the step (b5), when the total capacity of the human tissue texture features and the blood vessel texture features stored in the respective storage partitions reaches 80% or more of the total capacity of the set of feature values stored in the step (a7), the real-time ultrasound image is matched with the ultrasound image segment in the step (a 7).
The invention has the beneficial effects that:
(1) the matching fusion of the ultrasonic image and the Dicom image in the image-guided surgery process is realized;
(2) in the operation process, when the navigation reaches the focus position, the ultrasonic image and the Dicom image which correspond to each other are presented, and the imaging angle of the Dicom image is adjusted in real time according to the operation of a doctor, so that the doctor can conveniently obtain the visual ultrasonic image and the Dicom image which correspond to each other, and the accuracy of the operation is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic view of a method according to the present invention for marking a focal zone;
FIG. 2 is a flow chart of a simulation process of a medical image matching fusion method according to the present invention;
fig. 3 is a flowchart of a matching process of a medical image matching fusion method according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Existing image-guided surgery requires a two-dimensional ultrasound image to position surgical instruments in real time, while lesion judgment and treatment rely on Dicom images obtained before surgery. Since the ultrasound image and the Dicom image do not correspond in real time, the doctor must compare and mentally unify them during the operation, which inevitably costs time and hinders the doctor's judgment.
The invention provides a medical image matching and fusing method, which can effectively fuse a real-time ultrasonic image and a Dicom image of an operation area in the operation process and display the image on a display, so that a doctor can intuitively obtain a real-time navigation image and a focus image, and the operation is convenient.
The medical image matching and fusing method comprises a simulation process and a matching process, wherein the simulation process is performed before an operation, and as shown in fig. 2, the medical image matching and fusing method comprises the following steps:
and (a1) performing ultrasonic scanning on the patient before operation, performing image-guided surgery navigation simulation through the ultrasonic scanning, and acquiring and storing ultrasonic images after reaching a focus area.
In the simulation process, the area needing to be operated in the actual operation process is collected, and images of the same area at different angles are collected according to the actual operation requirement.
The acquisition and storage of the ultrasound images are performed by an external image processing computer, which achieves higher speed; the processing of the stored images is also completed on this external computer, so it does not occupy the computing resources of the computer running the image-guided surgical navigation simulation.
And (a2) decomposing the acquired ultrasonic image, dividing the ultrasonic image into a plurality of ultrasonic image segments according to the operation process, wherein each ultrasonic image segment corresponds to a group of Dicom images, and the finer the ultrasonic image segment is, the higher the matching degree of the Dicom images and the ultrasonic image segments is.
Step (a3): each ultrasound image segment is decomposed into a frame image sequence, which is compressed by extracting one frame image at each fixed interval; the extracted frames form a group of frame images and the frames that were not extracted are deleted. In each frame image the lesion point is identified and, with the lesion point as the center, marking points are placed in front of, behind, above, and below it, so that the marking points delineate the lesion area. If there are multiple continuous lesion points, they are taken together as a main axis and the marking points are placed above, below, in front of, and behind that axis.
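The fixed-interval frame extraction in step (a3) amounts to simple decimation of the frame sequence. A minimal numpy sketch; the frame count, frame size, and interval below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def compress_segment(frames: np.ndarray, interval: int) -> np.ndarray:
    """Keep one frame every `interval` frames and drop the rest,
    as in step (a3): the segment is compressed into a group of frames."""
    return frames[::interval]

# Hypothetical 300-frame segment of 64x64 grayscale ultrasound frames.
segment = np.zeros((300, 64, 64), dtype=np.uint8)
key_frames = compress_segment(segment, interval=3)
```

The deleted frames never leave the external image processing computer; only the compressed group is carried into steps (a4) onward.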
Fig. 1 shows a schematic view of the identification of a lesion point, the lesion 10 being marked by a plurality of marking points 20.
And (a4) meshing the lesion area with each marker as the center, wherein each marker corresponds to a mesh. Because the distance between the mark points is not fixed, the size and the shape of the grid divided by taking each mark point as the center are not fixed, and the grid division principle is as follows: and dividing grids by taking the vertical central lines of two adjacent marking points as boundary lines, wherein the marking points are positioned at the edge positions, and the grid lines at the outer sides are arranged according to the grid lines at the inner sides.
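Dividing the cells along the perpendicular bisectors between adjacent marking points, as step (a4) describes, is equivalent to a Voronoi partition of the image around the markers. A sketch that labels every pixel with its nearest marking point (the image size and marker coordinates are made up for illustration):

```python
import numpy as np

def voronoi_labels(h: int, w: int, markers) -> np.ndarray:
    """Assign every pixel to its nearest marker point; the resulting
    cell boundaries are the perpendicular bisectors of step (a4)."""
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.array(markers, dtype=float)               # (k, 2) as (y, x)
    d2 = (ys[..., None] - pts[:, 0]) ** 2 + (xs[..., None] - pts[:, 1]) ** 2
    return d2.argmin(axis=-1)                          # (h, w) label map

labels = voronoi_labels(8, 8, [(1, 1), (6, 6)])
```

Because the marker spacing is irregular, the cells come out with different sizes and shapes, exactly as the text notes.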
And (a5) extracting the human tissue texture features and the blood vessel texture features in each grid area to obtain the feature values of each marking point in the frame image.
Because the human tissue texture and the blood vessel texture have higher identification degree, and the coincidence probability of the human tissue texture or the blood vessel texture in different areas is lower than one millionth, the human tissue texture feature and the blood vessel texture feature are simultaneously extracted, and the coincidence probability of the texture features is further reduced. Each mesh region can be uniquely identified by human tissue texture features and blood vessel texture features.
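The patent does not name a specific texture descriptor, so the sketch below uses a toy stand-in: a three-element feature vector of intensity mean, variance, and mean gradient magnitude per mesh cell. A real implementation would substitute a stronger descriptor for both the tissue and the vessel texture:

```python
import numpy as np

def texture_features(cell: np.ndarray) -> np.ndarray:
    """Toy texture descriptor for one grid cell: intensity mean,
    variance, and mean gradient magnitude. Illustrative stand-in only;
    the patent does not specify the descriptor."""
    cell = cell.astype(float)
    gy, gx = np.gradient(cell)                  # per-pixel gradients
    grad_energy = np.sqrt(gx ** 2 + gy ** 2).mean()
    return np.array([cell.mean(), cell.var(), grad_energy])

feat = texture_features(np.arange(16).reshape(4, 4))
```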
And (a6) collecting the characteristic values of the same mark points in a group of frame images and collecting the characteristic value sets of the mark points in the ultrasonic image section.
Since the ultrasound image segment is decomposed and compressed into a group of frame images, the feature value of a mark point in the group of frame images is collected to obtain the feature value set of the mark point in the ultrasound image segment.
And similarly, collecting the characteristic values of all the mark points in a group of frame images, and collecting the characteristic values of all the mark points in the ultrasonic image section.
And (a7) storing the characteristic value set of each marking point in the ultrasonic image segment in a subarea manner, wherein each storage subarea stores the characteristic value set of one characteristic point. For example, a group of compressed frame images has 100 frames of pictures, one marker has 100 feature values, the 100 feature values of the marker are stored in one storage partition, the 100 feature value sets stored in the storage partition correspond to the feature value of the marker in the ultrasound image segment, and each marker in the ultrasound image segment has a separate storage partition for storing the feature value sets.
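The per-marker storage partitions of step (a7) can be modeled as a mapping from marker index to that marker's list of per-frame feature vectors. The partition count, frame count, and feature length below are illustrative assumptions, not values from the patent:

```python
import numpy as np

# One storage partition per marking point: the list under each key is
# that marker's feature-value set over the compressed frame group.
partitions = {marker_id: [] for marker_id in range(5)}

for frame_idx in range(100):                 # 100 compressed frames
    for marker_id in partitions:
        # Placeholder feature vector standing in for the tissue and
        # vessel texture features extracted in step (a5).
        feat = np.random.default_rng(frame_idx * 10 + marker_id).random(3)
        partitions[marker_id].append(feat)
```

After the loop, each partition holds the 100-element feature-value set the text describes for its marker.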
Since the simulation process matches the acquired ultrasound image segment with the Dicom image, in the formal surgery, if the real-time ultrasound image at a certain moment matches the acquired ultrasound image segment, the real-time ultrasound image at the moment matches with the corresponding Dicom image group. Next, the matching process of the ultrasound image segment and the Dicom image will be described in detail.
In the formal operation, image-guided surgical navigation is performed through ultrasonic scanning, the ultrasonic scanning obtains a real-time ultrasonic image, and after the real-time ultrasonic image reaches a focus area, a matching process is started, as shown in fig. 3, the matching process comprises the following steps:
and (b1) decomposing the real-time ultrasonic image, dividing the continuous ultrasonic image into a frame image sequence, and extracting one frame image at each fixed interval.
The real-time ultrasound images are decomposed by the external image processing computer, which is faster: while the real-time images are being generated and displayed, it synchronously decomposes them, divides the continuous images into a frame image sequence, and extracts one frame image at each fixed interval.
And (b2) carrying out grid segmentation on the frame image in the step (b1), wherein the grid size is 1/50-1/20 of the minimum grid in the step (a4), and extracting the texture features of the human tissues and the texture features of the blood vessels in each grid after segmentation.
In the step (a4), the size and the shape of the grid are not fixed, so that the minimum grid is selected, the frame image obtained in the step (b1) is subjected to grid division by taking 1/50-1/20 of the grid as a standard, and the texture features of the human tissue and the texture features of the blood vessels in each divided grid are extracted.
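A sketch of the step (b2) meshing: derive the fine cell size from the smallest simulation-stage cell (the 1/30 ratio is an assumed value inside the patent's 1/50 to 1/20 range) and tile the frame into cells, cropping edge remainders for brevity:

```python
import numpy as np

def fine_grid_size(min_cell_px: int, ratio: float = 1 / 30) -> int:
    """Step (b2) sizes the real-time grid at 1/50 to 1/20 of the
    smallest simulation-stage cell; `ratio` is an assumed mid value."""
    return max(1, round(min_cell_px * ratio))

def split_into_cells(img: np.ndarray, cell: int):
    """Tile a frame into cell x cell patches (edge remainders cropped)."""
    h, w = img.shape
    return [img[y:y + cell, x:x + cell]
            for y in range(0, h - cell + 1, cell)
            for x in range(0, w - cell + 1, cell)]

cells = split_into_cells(np.zeros((60, 60)), fine_grid_size(90))
```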
Step (b3), comparing the human tissue texture features and the blood vessel texture features extracted in step (b2) with the feature value set stored in step (a7), wherein the comparison process specifically comprises the following steps:
comparing the texture features of the human tissues and the texture features of the blood vessels of the meshes to be compared with the characteristic values in the storage subareas in the step (a7), wherein the meshes to be compared are smaller, so that the meshes to be compared only generate certain matching errors at the edge positions of the lesion areas and are completely matched with the inner areas of the lesions.
A step (b4) of setting the same number and capacity of memory partitions as those in the step (a7), and when the human tissue texture features and the blood vessel texture features extracted in the step (b2) are matched with the feature value set stored in the step (a7), storing the human tissue texture features and the blood vessel texture features of the matched mesh into the corresponding memory partitions;
and (b5) after matching is finished, collecting the human tissue texture features and the blood vessel texture features of each storage partition, matching the real-time ultrasonic image with the ultrasonic image section in the step (a7) when the human tissue texture features and the blood vessel texture feature sets stored in the storage partitions reach 80% or more of the characteristic value set stored in the step (a7), and calling out a corresponding group of Dicom images, wherein the real-time ultrasonic image is matched with the group of Dicom images.
In the step (b5), after the matching is finished, the process of collecting the human tissue texture features and the blood vessel texture features of each storage partition is specifically as follows: and (3) calculating the total capacity of the storage space occupied by the human tissue texture features and the blood vessel texture features stored in each storage partition, wherein the stored human tissue texture features and the blood vessel texture features are both matched grids, so that the total capacity of the storage space occupied by the matched grids is calculated and compared with the total capacity of the storage space of the characteristic value set stored in the step (a 7). When the total capacity of the human tissue texture features and the blood vessel texture features stored in the storage partitions reaches 80% or more of the total capacity of the feature value set stored in the step (a7), the real-time ultrasound image is matched with the ultrasound image segment in the step (a 7).
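The step (b5) decision reduces to a capacity-ratio test: the real-time image matches a stored ultrasound segment once the matched features fill at least 80% of that segment's stored feature-set capacity. A minimal sketch (the byte counts are illustrative):

```python
def segment_matches(matched_bytes: int, stored_bytes: int,
                    threshold: float = 0.8) -> bool:
    """Step (b5): declare the real-time image matched to a stored
    segment when matched features reach `threshold` of the stored
    feature-set capacity."""
    return stored_bytes > 0 and matched_bytes >= threshold * stored_bytes

ok = segment_matches(850, 1000)    # 85% coverage: match
bad = segment_matches(700, 1000)   # 70% coverage: no match
```

On a match, the corresponding Dicom image group for that segment is called up and displayed alongside the real-time ultrasound image.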
By the matching fusion method, the ultrasonic images are displayed on the display at each stage in the operation, the Dicom images corresponding to the ultrasonic images are synchronously displayed, and doctors synchronously refer to the corresponding Dicom images while observing the ultrasonic images, so that the operation efficiency is improved.
In both the simulation process and the matching process, the continuous ultrasound images are decomposed and compressed by the external image processing computer: the faster the computer, the shorter the fixed interval; the slower the computer, the longer the fixed interval.
During the operation, a Dicom image at a fixed angle cannot meet the doctor's needs, because the real-time ultrasound image changes its angle with the surgical instrument as the procedure requires: although the surgical area is unchanged, the viewing angle changes, so a certain angular deviation arises between the real-time ultrasound image and the Dicom image.
The invention provides a medical image matching and fusing method, which realizes matching and fusing of an ultrasonic image and a Dicom image in an image-guided operation process, and in the operation process, when a focus position is reached, the ultrasonic image and the Dicom image which correspond to each other are displayed on a display, and the imaging angle of the Dicom image is adjusted in real time according to the operation of a doctor, so that the doctor can conveniently obtain the visual ultrasonic image and the Dicom image which correspond to each other, and the accuracy of the operation is improved.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (4)

1. A medical image matching and fusion system, characterized by comprising an external image processing computer and a display, configured to perform a simulation process and a matching process;

wherein the simulation process is carried out before surgery and comprises the following steps:

Step (a1): decomposing the collected ultrasound images into multiple ultrasound image segments according to the progress of the operation, each ultrasound image segment corresponding to a group of Dicom images;

Step (a2): decomposing each ultrasound image segment into a frame image sequence and compressing it by extracting one frame image at each fixed interval, so that the frame image sequence is compressed into a group of frame images; identifying the lesion point in each frame image and, taking the lesion point as the center, marking points in front of, behind, above, and below it, the marked points delineating the lesion region;

Step (a3): dividing the lesion region into grids centered on the marked points, each marked point corresponding to one grid;

Step (a4): extracting the human tissue texture features and blood vessel texture features in each grid region to obtain the feature value of each marked point in the frame image;

Step (a5): assembling the feature values of the same marked point across the group of frame images into a feature value set for that marked point in the ultrasound image segment;

Step (a6): storing the feature value sets of the marked points in the ultrasound image segment in separate partitions;

the matching process comprises the following steps:

Step (b1): decomposing the real-time ultrasound images into a frame image sequence and extracting one frame image at each fixed interval;

Step (b2): dividing each frame image from step (b1) into grids whose size is 1/50 to 1/20 of the smallest grid in step (a4), and extracting the human tissue texture features and blood vessel texture features in each grid after division;

Step (b3): comparing the human tissue texture features and blood vessel texture features extracted in step (b2) with the feature value sets stored in step (a6);

Step (b4): setting up storage partitions of the same number and capacity as in step (a6); when the human tissue texture features and blood vessel texture features extracted in step (b2) match a feature value set stored in step (a6), storing the matched grid's human tissue texture features and blood vessel texture features into the corresponding storage partition;

Step (b5): after matching, collecting the human tissue texture features and blood vessel texture features of the storage partitions; when the human tissue texture features and blood vessel texture features stored in the partitions reach 80% or more of the feature value sets stored in step (a6), the real-time ultrasound image matches the ultrasound image segment of step (a6), the corresponding group of Dicom images is called up, and the real-time ultrasound image is matched with that group of Dicom images.

2. The medical image matching and fusion system according to claim 1, wherein in step (a1) each ultrasound image segment corresponds to a group of Dicom images, the ultrasound image segment is further subdivided, and the angle of the Dicom images is adjusted according to the human tissue position corresponding to the ultrasound image.

3. The medical image matching and fusion system according to claim 1, wherein in step (b5) the collection of the human tissue texture features and blood vessel texture features of the storage partitions after matching specifically comprises: calculating the total capacity of the storage space occupied by the human tissue texture features and blood vessel texture features stored in each storage partition.

4. The medical image matching and fusion system according to claim 3, wherein in step (b5), when the total capacity of the storage space occupied by the human tissue texture features and blood vessel texture features stored in the storage partitions reaches 80% or more of the total capacity of the feature value sets stored in step (a6), the real-time ultrasound image matches the ultrasound image segment of step (a6).
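The matching decision of steps (b3)–(b5) above can be sketched as follows: grid features extracted from a real-time frame are compared against the stored feature value sets, matches are accumulated per storage partition, and the segment is accepted once 80% or more of the stored features are covered. The feature representation (opaque strings) and the equality-based similarity test are illustrative assumptions; the patent does not specify a feature encoding:

```python
from typing import Dict, Set

MATCH_THRESHOLD = 0.8  # the "80% or more" criterion from step (b5)

def match_segment(stored: Dict[str, Set[str]], extracted: Dict[str, Set[str]]) -> bool:
    """Decide whether a real-time ultrasound frame matches a pre-operative segment.

    stored:    partition id -> feature value set recorded in step (a6)
    extracted: partition id -> features found in the real-time frame (step (b2))
    """
    matched = 0
    total = 0
    for partition, feature_set in stored.items():
        total += len(feature_set)
        # Step (b4): keep only the extracted features that match this partition's set.
        matched += len(feature_set & extracted.get(partition, set()))
    # Step (b5): accept when the accumulated matches reach the threshold.
    return total > 0 and matched / total >= MATCH_THRESHOLD

stored = {"p1": {"t1", "t2", "v1"}, "p2": {"t3", "v2"}}
live = {"p1": {"t1", "t2", "v1"}, "p2": {"t3"}}
print(match_segment(stored, live))  # True: 4 of 5 stored features matched (80%)
```

Once a segment matches, the corresponding group of Dicom images would be looked up and displayed alongside the real-time ultrasound image.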
CN201710976330.8A 2017-10-19 2017-10-19 A Medical Image Matching Fusion Method Expired - Fee Related CN107789056B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710976330.8A CN107789056B (en) 2017-10-19 2017-10-19 A Medical Image Matching Fusion Method


Publications (2)

Publication Number Publication Date
CN107789056A CN107789056A (en) 2018-03-13
CN107789056B true CN107789056B (en) 2021-04-13

Family

ID=61534225

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710976330.8A Expired - Fee Related CN107789056B (en) 2017-10-19 2017-10-19 A Medical Image Matching Fusion Method

Country Status (1)

Country Link
CN (1) CN107789056B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110148127B (en) * 2019-05-23 2021-05-11 数坤(北京)网络科技有限公司 Intelligent film selection method, device and storage equipment for blood vessel CTA post-processing image
EP3973896A4 (en) * 2020-02-04 2023-07-12 Tianli Zhao SYSTEM AND PROCEDURE FOR POSITIONING A PUNCTURE NEEDLE
CN111938699B (en) * 2020-08-21 2022-04-01 电子科技大学 System and method for guiding use of ultrasonic equipment
CN112418322B (en) * 2020-11-24 2024-08-06 苏州爱医斯坦智能科技有限公司 Image data processing method and device, electronic equipment and storage medium
CN113610826B (en) * 2021-08-13 2024-12-24 武汉推想医疗科技有限公司 Puncture positioning method and device, electronic device and storage medium
CN116531089B (en) * 2023-07-06 2023-10-20 中国人民解放军中部战区总医院 Image-enhancement-based blocking anesthesia ultrasonic guidance data processing method
CN117974475B (en) * 2024-04-02 2024-06-18 华中科技大学同济医学院附属同济医院 Method and system for fusion of lesion images under four-dimensional endoscopic ultrasound

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SU1232213A1 (en) * 1983-07-11 1986-05-23 Днепропетровский медицинский институт Method of estimating myocardium contractility
US5279301A (en) * 1991-01-18 1994-01-18 Olympus Optical Co., Ltd. Ultrasonic image analyzing apparatus
CN1342291A (en) * 1999-02-19 2002-03-27 Pc多媒体公司 Matching engine
CN1869994A * 2006-03-15 2006-11-29 张小粤 Log-on method of medical image/image characteristic pick-up and dissecting position
CN102068281A (en) * 2011-01-20 2011-05-25 深圳大学 Processing method for space-occupying lesion ultrasonic images


Also Published As

Publication number Publication date
CN107789056A (en) 2018-03-13

Similar Documents

Publication Publication Date Title
CN107789056B (en) A Medical Image Matching Fusion Method
JP7555911B2 (en) Lung volume gated x-ray imaging system and method
KR102522539B1 (en) Medical image displaying apparatus and medical image processing method thereof
US10413253B2 (en) Method and apparatus for processing medical image
US7935055B2 (en) System and method of measuring disease severity of a patient before, during and after treatment
JP6220310B2 (en) Medical image information system, medical image information processing method, and program
JP7661327B2 (en) SYSTEM AND METHOD FOR DETERMINING RADIATION PARAMETERS - Patent application
JP2019018032A (en) Systems and methods for image-based object modeling using multiple image acquisitions or reconstructions
EP3559903A1 (en) Machine learning of anatomical model parameters
US10685451B2 (en) Method and apparatus for image registration
KR102510760B1 (en) Image processing apparatus, image processing method thereof and recording medium
CN110264559B (en) A method and system for reconstructing a bone tomographic image
CN111369675B (en) Method and device for three-dimensional visual model reconstruction based on visceral pleura projection of pulmonary nodules
Naseera et al. A review on image processing applications in medical field
CN114943688A (en) Method for extracting interest region in mammary gland image based on palpation and ultrasonic data
US20120078101A1 (en) Ultrasound system for displaying slice of object and method thereof
RU2694330C1 (en) Method for visualizing a patient's chest surface and determining the coordinates of electrodes in non-invasive electrophysiological cardiac mapping
KR102250086B1 (en) Method for registering medical images, apparatus and computer readable media including thereof
KR102185724B1 (en) The method and apparatus for indicating a point adjusted based on a type of a caliper in a medical image
CN108877922A (en) Lesion degree judging system and method
Kahla et al. Finite Element Method and Medical Imaging Techniques in Bone Biomechanics
Mara et al. Medical Imaging for Use Condition Measurement
Ma Accuracy Evaluation of Computer Intelligent Projection Technology in Medical Measurement
Roy et al. Enhancing Patient Care with Modality-Based Image Registration in Modern Healthcare
Monteiro Deep Learning Approach for the Segmentation of Spinal Structures in Ultrasound Images

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Sun Pin; Liu Guangwei; Lu Yun; Zhang Zhuoli; Yu Qiyue; Zhang Xianxiang
Inventor before: Liu Guangwei; Lu Yun; Zhang Zhuoli; Yu Qiyue; Zhang Xianxiang
GR01 Patent grant (granted publication date: 20210413)
CF01 Termination of patent right due to non-payment of annual fee