
CN108510493A - Boundary positioning method, storage medium and terminal for a target object in a medical image - Google Patents

Boundary positioning method, storage medium and terminal for a target object in a medical image

Info

Publication number
CN108510493A
CN108510493A
Authority
CN
China
Prior art keywords
target object
medical image
network model
segmentation
boundary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810310092.1A
Other languages
Chinese (zh)
Inventor
周永进
曾雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN201810310092.1A priority Critical patent/CN108510493A/en
Priority to PCT/CN2018/082986 priority patent/WO2019196099A1/en
Publication of CN108510493A publication Critical patent/CN108510493A/en
Pending legal-status Critical Current


Classifications

    (All under G — Physics › G06 — Computing; calculating or counting › G06T — Image data processing or generation, in general)
    • G06T7/0012 — Biomedical image inspection
    • G06T7/12 — Edge-based segmentation
    • G06T7/11 — Region-based segmentation
    • G06T7/174 — Segmentation; edge detection involving the use of two or more images
    • G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/10016 — Video; image sequence
    • G06T2207/20081 — Training; learning
    • G06T2207/20084 — Artificial neural networks [ANN]
    • G06T2207/30041 — Eye; retina; ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a boundary positioning method, storage medium and terminal for a target object in a medical image. The method is applied in the field of medical image processing and specifically comprises: a terminal acquires clinically collected video data of the target object to be processed, and inputs the video data into a segmentation network model established in advance through offline training; the segmentation network model automatically segments the region in which the target object lies in the video data, and outputs the segmentation result. By feeding the medical image video to be processed into a segmentation network model built beforehand through learning and training, the invention can accurately locate the boundary of the target object in real time, eliminate the influence of bright specks and artifacts in the imaging on locating the upper and lower interfaces of the target region, improve the accuracy of boundary determination for the target region, and meet the high precision required clinically.

Description

Boundary positioning method, storage medium and terminal for a target object in a medical image

Technical Field

The invention relates to the technical field of medical image processing, and in particular to a boundary positioning method, storage medium and terminal for a target object in a medical image.

Background

In the prior art, boundary localization of a specific target object in medical images has long been an important research topic. In ophthalmology, for example, corneal thickness is a factor that must be considered before surgery; it also influences the choice of surgical method, the sizing of the ablation zone, and the assessment of postoperative recovery. Accurate localization of the corneal boundary is therefore the first prerequisite for corneal pachymetry.

At present, corneal boundary localization mostly relies on methods directly tied to image gray levels. Specifically, the approximate location of the corneal region is determined first, the cornea is then segmented by Otsu thresholding, the corneal boundary is determined, and the thickness measurement follows. However, because instrument error is hard to eliminate, grayscale-based segmentation is highly susceptible to random bright specks and strip-shaped imaging artifacts near the cornea, which seriously compromises accurate corneal thickness measurement. The boundary localization accuracy achievable with prior-art methods for target objects in medical images therefore falls short of the high precision required clinically.

The prior art therefore still awaits improvement and development.

Summary of the Invention

The technical problem addressed by the invention is to provide, in view of the above defects of the prior art, a boundary positioning method, storage medium and terminal for a target object in a medical image. The aim is to overcome the problem that prior-art boundary positioning methods for target objects in medical images are easily disturbed by random artifacts arising during imaging, leading to large errors, in particular insufficient accuracy in the vertical direction, and failing to reach the high precision required clinically.

The technical solution adopted by the invention to solve the above problem is as follows:

A boundary positioning method for a target object in a medical image, wherein the method is applied in the field of medical image processing and specifically comprises:

Step A: a terminal acquires clinically collected video data of the target object to be processed, and inputs the video data into a segmentation network model established in advance through offline training;

Step B: the segmentation network model automatically segments the region in which the target object lies in the video data, and outputs the segmentation result.

In the boundary positioning method above, the following is further included before step A:

Step S: establishing in advance, through offline training, a segmentation network model for automatically segmenting the region in which the target object lies.

In the boundary positioning method above, the following is further included after step B:

Step C: performing polynomial fitting on the segmented target object region to obtain the upper and lower boundaries of the target object.

In the boundary positioning method above, step S comprises:

Step S1: acquiring the collected video of the target object, selecting the first frame and annotating the region in which the target object lies in the first frame;

Step S2: extracting the regional features of the target object, and performing deep learning and training;

Step S3: during training, introducing sine-wave-like displacements through piecewise affine transformations to simulate changes of the target object boundary, and establishing the segmentation network model for automatically segmenting the region in which the target object lies.

In the boundary positioning method above, step S further comprises:

adding conventional affine, projective or perspective transformations during training, and applying image erosion or spike removal to smooth sharp edges in the changes of the target object region between consecutive frames.

In the boundary positioning method above, step B comprises:

Step B1: the segmentation network model segments the target object region in each frame of the target object video data in turn;

Step B2: when segmenting the target object region of the next frame, the segmentation result of the previous frame is automatically fed into the segmentation network model;

Step B3: the segmentation network model takes the segmentation result of the previous frame as a reference when segmenting the target object region of the next frame.

In the boundary positioning method above, step B further comprises:

feeding the segmentation result back into the segmentation network model to update the model.

In the boundary positioning method above, the following is further included after step C:

Step D: calculating the thickness of the target object from its upper and lower boundaries.

A storage medium storing a plurality of instructions, wherein the instructions are adapted to be loaded and executed by a processor to implement the boundary positioning method for a target object in a medical image according to any of the above.

A terminal, comprising: a processor, and a storage medium communicatively connected with the processor, the storage medium being adapted to store a plurality of instructions; the processor being adapted to call the instructions in the storage medium to execute the boundary positioning method for a target object in a medical image according to any of the above.

Beneficial effects of the invention: by feeding the medical image video to be processed into a segmentation network model established in advance through learning and training, the invention can accurately locate the boundary of the target object in real time, eliminate the influence of bright specks and artifacts in the imaging on locating the upper and lower interfaces of the target region, improve the accuracy of boundary determination for the target region, and meet the high precision required clinically.

Brief Description of the Drawings

Fig. 1 is a schematic flowchart of a preferred embodiment of the boundary positioning method for a target object in a medical image according to the invention.

Fig. 2 shows the first frame image A acquired when establishing the cornea segmentation network model of the invention, together with the annotated image B.

Fig. 3 is a schematic diagram of corneal boundary changes simulated by piecewise affine transformation when establishing the cornea segmentation network model of the invention.

Fig. 4 shows the result of segmenting the cornea with the cornea segmentation network model of the invention, after polynomial fitting.

Fig. 5 is a functional schematic diagram of the terminal of the invention.

Detailed Description

To make the purposes, technical solutions and advantages of the invention clearer, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only serve to explain the invention and are not intended to limit it.

Prior-art boundary localization of target objects in medical images struggles to reach the high precision required clinically, and localization of the corneal region in particular has remained a hard problem. The invention therefore provides a boundary positioning method for a target object in a medical image, and especially for the corneal region. Fig. 1 is a schematic flowchart of a preferred embodiment of the method, which comprises:

Step S100: the terminal acquires clinically collected video data of the target object to be processed, and inputs the video data into a cornea segmentation network model established in advance through offline training.

Specifically, to better illustrate the technical solution, analysis of the corneal region is taken as the embodiment; the target object is thus the cornea, and the segmentation network model is a cornea segmentation network model. Corneal thickness is a factor that many vision correction procedures must take into account; it also influences the choice of surgical method, the sizing of the ablation zone, and the assessment of postoperative recovery. Accurate localization of the corneal boundary is the first prerequisite for corneal pachymetry.

Existing corneal boundary localization mostly relies on methods directly tied to image gray levels: the approximate location of the corneal region is determined first, the cornea is segmented by Otsu thresholding, the boundary is determined, and the thickness measurement follows. Because instrument error is hard to eliminate, grayscale-based segmentation is highly susceptible to random bright specks and strip-shaped artifacts near the cornea, and the threshold obtained with Otsu's method then carries a large error. Nor is the problem limited to Otsu thresholding: segmentation results from methods such as Li, Mean and Yen are equally unsatisfactory, seriously compromising accurate corneal thickness measurement and falling far short of the precision required clinically.

For corneal segmentation, since the human body is elastic tissue, the segmentation method must cope continuously with elastic deformation and brightness changes of the target region. In medical image processing, deep neural networks are commonly used to detect the presence or absence of specific diseases or to segment lesions or vital organs. Advances in deep learning have greatly improved the accuracy of computer-aided preliminary image diagnosis, raising physicians' efficiency and supporting the healthy development of the medical industry; deep learning will have ever broader prospects in medicine.

This embodiment therefore uses deep learning to establish a cornea segmentation network model that automatically segments the corneal region in video images. Specifically, a clinically collected corneal video is acquired, the first frame is selected, and the corneal region in it is annotated. Fig. 2 shows the first frame image A and the annotated image B. Preferably, the corneal region is annotated by manual tracing, with as much accuracy as possible, so that a more precise network model can be built. The annotated corneal region features are then extracted for deep learning and training; during training, sine-wave-like displacements are introduced through piecewise affine transformations to simulate corneal boundary changes, as illustrated in Fig. 3. The invention performs deep learning based on a convolutional neural network; the features learned by the deep network in this embodiment are insensitive to random bright specks and locate the upper and lower corneal boundaries accurately and robustly. Moreover, image features are obtained automatically by deep learning, replacing cumbersome manual feature extraction, excluding the interference of random noise, and improving the accuracy of feature acquisition.
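The sine-wave-like boundary displacement used for training augmentation can be sketched as follows. This is a simplified stand-in for the patent's piecewise affine transform: each image column is shifted vertically by a sine offset; the amplitude, period and per-column `np.roll` scheme are illustrative assumptions, not parameters from the disclosure.

```python
import numpy as np

def sine_displace(frame, amplitude=3.0, period=120.0, phase=0.0):
    """Shift each column of a 2D grayscale frame vertically by a
    sine-like offset, mimicking the boundary motion the patent
    simulates with piecewise affine transforms during offline
    training (simplified sketch; parameter values are assumptions)."""
    h, w = frame.shape
    out = np.zeros_like(frame)
    cols = np.arange(w)
    # integer per-column vertical shifts following a sine wave
    shifts = np.round(amplitude * np.sin(2 * np.pi * cols / period + phase)).astype(int)
    for x, dy in zip(cols, shifts):
        out[:, x] = np.roll(frame[:, x], dy)
    return out
```

Applied to an annotated mask and its frame together, such displacements yield extra training pairs whose boundary wobbles like an elastic tissue interface.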

Further, to simulate the segmentation result of the previous frame more precisely, conventional affine, projective or perspective transformations, such as vertical displacements, are also added during offline training. Notably, in this step, image erosion with a 1-pixel structuring element or spike removal must be applied to smooth sharp edges in the corneal region changes between consecutive frames, so as to better simulate corneal boundary changes. The affine, projective and perspective transformations and the smoothing techniques mentioned above merely illustrate the technical solution and do not limit the invention; other transformation or smoothing techniques also fall within its scope of protection.
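The spike-removal step above can be approximated with a morphological opening (erosion followed by dilation), which removes 1-pixel protrusions without shrinking the bulk of the mask. This is an assumption about the implementation: the patent only names "image erosion or spike removal", so the choice of `binary_opening` and the 3x3 (radius-1) structuring element are illustrative.

```python
import numpy as np
from scipy.ndimage import binary_opening

def smooth_mask_edges(mask):
    """Remove sharp 1-pixel spikes from a binary segmentation mask
    via morphological opening with a radius-1 neighbourhood — a
    minimal stand-in for the erosion / spike-removal step the patent
    uses to smooth edges between consecutive simulated frames."""
    selem = np.ones((3, 3), dtype=bool)  # 1-pixel-radius structuring element
    return binary_opening(mask, structure=selem)
```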

In this embodiment, the established cornea segmentation network model is embedded in the terminal, giving the terminal the ability to segment the cornea automatically. When clinically collected video data requires corneal segmentation, the terminal acquires the corneal video data to be processed and feeds it into the cornea segmentation network model established through offline training as described above.

Further, in step S200, the segmentation network model automatically segments the region in which the target object lies in the video data and outputs the segmentation result.

Preferably, step S200 specifically comprises:

Step S201: the segmentation network model segments the target object region in each frame of the target object video data in turn;

Step S202: when segmenting the target object region of the next frame, the segmentation result of the previous frame is automatically fed into the segmentation network model;

Step S203: the segmentation network model takes the segmentation result of the previous frame as a reference when segmenting the target object region of the next frame.

In this embodiment the target object is the cornea and the model is the cornea segmentation network model, so the model segments the corneal region in each frame of the corneal video in turn. Traditional single-image processing ignores temporal order; by contrast, when the cornea segmentation network model of the invention segments the corneal region of the next frame, it automatically receives the segmentation result of the previous frame. That result serves as a reference and yields a rough estimate of the corneal boundary before the next frame is segmented, further improving segmentation accuracy.
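The sequential scheme above can be sketched as a loop that stacks each frame with the previous mask before calling the network. The `model` callable and the two-channel input layout are assumptions for illustration; the patent does not specify how the previous result is encoded.

```python
import numpy as np

def segment_video(frames, model, first_mask):
    """Segment a video frame by frame, feeding the previous frame's
    mask back in as a reference channel — the temporal scheme the
    patent describes (model interface and channel layout are
    assumptions)."""
    masks = [first_mask]
    for frame in frames[1:]:
        # stack the image with the previous segmentation result
        x = np.stack([frame, masks[-1].astype(frame.dtype)], axis=0)
        masks.append(model(x))
    return masks
```

With a trained network in place of `model`, each prediction both uses and extends the temporal chain of masks.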

Preferably, this embodiment also feeds the segmentation result back into the cornea segmentation network model to update it, further improving the accuracy of corneal segmentation.

Specifically, polynomial fitting of the upper and lower boundaries can be applied to the segmentation result; the final outcome is shown in Fig. 4, where even in the presence of random highlight artifacts the effect on the fit is very small. Notably, because the polynomial degree is fairly high, overfitting can easily occur; the technique therefore disregards a region of roughly 60 pixels at the left and right image borders, which greatly improves the accuracy of the fit.
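The boundary fit with excluded lateral margins can be sketched as follows. The 60-pixel margin follows the description; the polynomial degree and the x-normalization (added for numerical conditioning) are assumptions, since the patent only says the order is fairly high.

```python
import numpy as np

def fit_boundary(mask, degree=4, margin=60):
    """Fit a polynomial to the upper boundary of a segmented region,
    ignoring `margin` pixels at each lateral border to curb
    overfitting (degree is an illustrative assumption)."""
    h, w = mask.shape
    xs, ys = [], []
    for x in range(margin, w - margin):
        col = np.flatnonzero(mask[:, x])
        if col.size:                       # topmost foreground pixel
            xs.append(x)
            ys.append(col[0])
    xs = np.asarray(xs, dtype=float)
    # normalise x to roughly [-1, 1] for a well-conditioned fit
    xn = (xs - w / 2) / (w / 2)
    coeffs = np.polyfit(xn, ys, degree)
    all_xn = (np.arange(w) - w / 2) / (w / 2)
    return np.polyval(coeffs, all_xn)      # boundary y for every column
```

The same routine applied to the flipped mask gives the lower boundary.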

Further, the corneal thickness can be computed from the determined upper and lower boundaries. For additional robustness, the thickness is averaged over a band of 10 pixels on each side of the image centre, 20 pixels in total. Because the corneal boundary measured automatically by the established cornea segmentation network model is more precise, the resulting corneal thickness is also more accurate.
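The central-band averaging above is straightforward to express given the two fitted boundary curves (the function name is ours; the 20-pixel band is from the description):

```python
import numpy as np

def mean_thickness(upper, lower, px=10):
    """Average vertical distance between the lower and upper boundary
    curves over the central 2*px columns (20 pixels by default, as in
    the description) for a robust thickness estimate in pixels."""
    upper = np.asarray(upper, dtype=float)
    lower = np.asarray(lower, dtype=float)
    c = upper.shape[0] // 2                # image centre column
    band = slice(c - px, c + px)
    return float(np.mean(lower[band] - upper[band]))
```

A pixel-to-micrometre scale factor from the imaging device would convert this to a clinical thickness value.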

Of course, although corneal boundary localization serves as the embodiment, the boundary positioning method proposed by the invention is not limited to medical image processing and can also be used for segmentation of natural image video.

Based on the above embodiments, the invention further discloses a terminal, as shown in Fig. 5, comprising a processor 10 and a storage medium (memory) 20 connected to the processor 10; the processor 10 is configured to call program instructions in the storage medium 20 to execute the method provided by the above embodiments, for example:

Step S100: the terminal acquires clinically collected video data of the target object to be processed and inputs the video data into a segmentation network model established in advance through offline training;

Step S200: the segmentation network model automatically segments the region in which the target object lies in the video data and outputs the segmentation result.

An embodiment of the invention further provides a storage medium storing computer instructions that cause a computer to execute the methods provided by the above embodiments.

In summary, the invention provides a boundary positioning method, storage medium and terminal for a target object in a medical image. The method is applied in the field of medical image processing and specifically comprises: a terminal acquires clinically collected video data of the target object to be processed and inputs the video data into a segmentation network model established in advance through offline training; the segmentation network model automatically segments the region in which the target object lies in the video data and outputs the segmentation result. By feeding the medical image video to be processed into a segmentation network model established beforehand through learning and training, the invention can accurately locate the boundary of the target object in real time, eliminate the influence of bright specks and artifacts on locating the upper and lower interfaces of the target region, improve the accuracy of boundary determination for the target region, and meet the high precision required clinically.

It should be understood that the application of the invention is not limited to the above examples; those of ordinary skill in the art can make improvements or transformations based on the above description, and all such improvements and transformations fall within the protection scope of the appended claims.

Claims (10)

1. A method for locating the boundary of a target object in a medical image, the method being applied in the technical field of medical image processing and comprising:

Step A: a terminal acquires clinically collected video data of the target object to be processed, and inputs the video data into a segmentation network model established in advance through offline training;

Step B: the segmentation network model automatically segments the region where the target object is located in the video data, and outputs the segmentation result.

2. The method for locating the boundary of a target object in a medical image according to claim 1, further comprising, before Step A:

Step S: establishing in advance, through offline training, a segmentation network model for automatically segmenting the region where the target object is located.

3. The method for locating the boundary of a target object in a medical image according to claim 1, further comprising, after Step B:

Step C: performing polynomial fitting on the segmented region of the target object to obtain the upper and lower boundaries of the target object.

4. The method for locating the boundary of a target object in a medical image according to claim 2, wherein Step S comprises:

Step S1: acquiring the collected video of the target object, selecting the first frame, and annotating the region where the target object is located in the first frame;

Step S2: obtaining the regional features of the target object, and performing deep learning and training;

Step S3: during training, introducing sinusoid-like displacements through piecewise affine transformation to simulate changes of the boundary of the target object, and establishing a segmentation network model for automatically segmenting the region where the target object is located.

5. The method for locating the boundary of a target object in a medical image according to claim 2, wherein Step S further comprises:

adding conventional affine, projective, or perspective transformations during training, and applying image erosion or spike-removal techniques to smooth the sharp edges arising from changes of the target object region between consecutive frames.

6. The method for locating the boundary of a target object in a medical image according to claim 1, wherein Step B comprises:

Step B1: the segmentation network model segments the target object region in each frame of the target object video data in turn;

Step B2: when the next frame is segmented, the segmentation result of the previous frame is automatically input into the segmentation network model;

Step B3: the segmentation network model segments the target object region in the next frame using the segmentation result of the previous frame as a reference.

7. The method for locating the boundary of a target object in a medical image according to claim 1, wherein Step B further comprises:

inputting the segmentation result into the segmentation network model so as to update the segmentation network model.

8. The method for locating the boundary of a target object in a medical image according to claim 3, further comprising, after Step C:

Step D: calculating the thickness of the target object according to its upper and lower boundaries.

9. A storage medium storing a plurality of instructions, wherein the instructions are adapted to be loaded and executed by a processor to implement the method for locating the boundary of a target object in a medical image according to any one of claims 1-8.

10. A terminal, comprising a processor and a storage medium communicatively connected to the processor, wherein the storage medium is adapted to store a plurality of instructions, and the processor is adapted to call the instructions in the storage medium to perform the method for locating the boundary of a target object in a medical image according to any one of claims 1-8.
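Claims 3 and 8 recover the upper and lower boundaries of the object by polynomial fitting of the segmented region and then derive its thickness. The sketch below illustrates the idea on a synthetic binary mask; the function name, the quadratic degree, and the pixel-wise thickness definition are illustrative assumptions, not the patent's exact formulation:

```python
import numpy as np

def fit_boundaries(mask, degree=2):
    """Fit polynomials to the upper and lower boundaries of a segmented
    region (claim 3) and return the mean thickness in pixels (claim 8)."""
    cols = np.where(mask.any(axis=0))[0]
    # First and last foreground row in every occupied column.
    upper = np.array([np.argmax(mask[:, x]) for x in cols])
    lower = np.array([mask.shape[0] - 1 - np.argmax(mask[::-1, x]) for x in cols])
    p_up = np.polyfit(cols, upper, degree)
    p_lo = np.polyfit(cols, lower, degree)
    thickness = float(np.mean(np.polyval(p_lo, cols) - np.polyval(p_up, cols)) + 1)
    return p_up, p_lo, thickness

# Synthetic segmentation result: a flat band occupying rows 20..29 (10 px thick).
mask = np.zeros((64, 128), dtype=np.uint8)
mask[20:30, :] = 1
p_up, p_lo, thickness = fit_boundaries(mask)
```

In a real OCT or Scheimpflug frame the fitted curves would be evaluated densely along the scan width, and the pixel thickness converted to micrometres using the scanner's calibration.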
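Claims 4 and 5 augment the training data by warping frames so that the object boundary oscillates from frame to frame. The snippet below is a minimal NumPy illustration of the "sinusoid-like displacement" idea, shifting each mask column vertically by a sinusoidal offset; the patented method applies a piecewise affine warp to the full image, and the amplitude and wavelength used here are arbitrary assumptions:

```python
import numpy as np

def sinusoid_displace(mask, amplitude=3.0, wavelength=64.0, phase=0.0):
    """Shift each column of a binary mask vertically by a sinusoid-like
    offset, simulating frame-to-frame deformation of the object boundary."""
    h, w = mask.shape
    out = np.zeros_like(mask)
    for x in range(w):
        dy = int(round(amplitude * np.sin(2 * np.pi * x / wavelength + phase)))
        out[:, x] = np.roll(mask[:, x], dy)
    return out

mask = np.zeros((64, 128), dtype=np.uint8)
mask[20:30, :] = 1                  # flat band, well away from the image edges
warped = sinusoid_displace(mask)
```

Because each column is only shifted, the warped mask keeps the same area; only the boundary shape changes, which is exactly the variation the training set is meant to cover.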
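Claim 6 segments the video frame by frame, feeding the previous frame's mask back into the network as guidance. A minimal sketch of that loop, where `model` is a stand-in callable for the trained segmentation network (the toy model below simply thresholds the frame and ignores the guidance mask, purely for illustration):

```python
import numpy as np

def segment_video(frames, model, first_mask):
    """Run the segmentation network over a video, passing the previous
    frame's mask as an extra input (steps B1-B3 of claim 6)."""
    masks = [first_mask]
    for frame in frames[1:]:
        masks.append(model(frame, masks[-1]))
    return masks

# Toy stand-in for the network: threshold the frame, ignore the guidance mask.
toy_model = lambda frame, prev_mask: (frame > 0.5).astype(np.uint8)

frames = [np.random.rand(8, 8) for _ in range(5)]
first_mask = toy_model(frames[0], None)
masks = segment_video(frames, toy_model, first_mask)
```

In the patented pipeline the previous mask would typically enter the network as an additional input channel, giving the model a strong prior on where the object was one frame earlier.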
CN201810310092.1A 2018-04-09 2018-04-09 Boundary alignment method, storage medium and the terminal of target object in medical image Pending CN108510493A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810310092.1A CN108510493A (en) 2018-04-09 2018-04-09 Boundary alignment method, storage medium and the terminal of target object in medical image
PCT/CN2018/082986 WO2019196099A1 (en) 2018-04-09 2018-04-13 Method for positioning boundaries of target object in medical image, storage medium, and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810310092.1A CN108510493A (en) 2018-04-09 2018-04-09 Boundary alignment method, storage medium and the terminal of target object in medical image

Publications (1)

Publication Number Publication Date
CN108510493A true CN108510493A (en) 2018-09-07

Family

ID=63381276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810310092.1A Pending CN108510493A (en) 2018-04-09 2018-04-09 Boundary alignment method, storage medium and the terminal of target object in medical image

Country Status (2)

Country Link
CN (1) CN108510493A (en)
WO (1) WO2019196099A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969640A (en) * 2018-09-29 2020-04-07 Tcl集团股份有限公司 Video image segmentation method, terminal device and computer-readable storage medium
CN111627017A (en) * 2020-05-29 2020-09-04 昆山戎影医疗科技有限公司 Blood vessel lumen automatic segmentation method based on deep learning
CN112396601A (en) * 2020-12-07 2021-02-23 中山大学 Real-time neurosurgical instrument segmentation method and device based on endoscope image and storage medium
CN112819831A (en) * 2021-01-29 2021-05-18 北京小白世纪网络科技有限公司 Segmentation model generation method and device based on convolution Lstm and multi-model fusion
WO2021171255A1 (en) * 2020-02-26 2021-09-02 Bright Clinical Research Limited A radar system for dynamically monitoring and guiding ongoing clinical trials
CN113358042A (en) * 2021-06-30 2021-09-07 长江存储科技有限责任公司 Method for measuring film thickness
CN113850778A (en) * 2021-09-24 2021-12-28 杭州脉流科技有限公司 Coronary OCT image automatic segmentation method, device, computing device and storage medium
US11734826B2 (en) 2018-11-27 2023-08-22 Tencent Technology (Shenzhen) Company Limited Image segmentation method and apparatus, computer device, and storage medium

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN113435459B (en) * 2021-02-08 2024-07-26 中国石油化工股份有限公司 Rock component identification method, device, equipment and medium based on machine learning

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102136135A (en) * 2011-03-16 2011-07-27 清华大学 Method for extracting inner outline of cornea from optical coherence tomography image of anterior segment of eye and method for extracting inner outline of anterior chamber from optical coherence tomography image of anterior segment of eye
US20160171688A1 (en) * 2010-01-20 2016-06-16 Duke University Segmentation and identification of layered structures in images
CN106846314A (en) * 2017-02-04 2017-06-13 苏州大学 A kind of image partition method based on post-operative cornea OCT image datas
CN107274406A (en) * 2017-08-07 2017-10-20 北京深睿博联科技有限责任公司 A kind of method and device of detection sensitizing range

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
WO2015199257A1 (en) * 2014-06-25 2015-12-30 Samsung Electronics Co., Ltd. Apparatus and method for supporting acquisition of area-of-interest in ultrasound image
US9607224B2 (en) * 2015-05-14 2017-03-28 Google Inc. Entity based temporal segmentation of video streams
CN107622257A (en) * 2017-10-13 2018-01-23 深圳市未来媒体技术研究院 A kind of neural network training method and three-dimension gesture Attitude estimation method

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US20160171688A1 (en) * 2010-01-20 2016-06-16 Duke University Segmentation and identification of layered structures in images
CN102136135A (en) * 2011-03-16 2011-07-27 清华大学 Method for extracting inner outline of cornea from optical coherence tomography image of anterior segment of eye and method for extracting inner outline of anterior chamber from optical coherence tomography image of anterior segment of eye
CN106846314A (en) * 2017-02-04 2017-06-13 苏州大学 A kind of image partition method based on post-operative cornea OCT image datas
CN107274406A (en) * 2017-08-07 2017-10-20 北京深睿博联科技有限责任公司 A kind of method and device of detection sensitizing range

Non-Patent Citations (6)

Title
ANNA K. et al.: "Learning Video Object Segmentation from Static Images", arXiv *
MARTA E.R. et al.: "Corneal deformation dynamics in normal and glaucoma patients utilizing scheimpflug imaging", 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) *
OLAF R. et al.: "U-Net: Convolutional Networks for Biomedical Image Segmentation", arXiv *
WANG Yiding et al.: Digital Image Processing, Xi'an: Xidian University Press, 31 August 2015 *
WANG Bingxi et al.: Digital Watermarking Technology, Xi'an: Xidian University Press, 30 November 2003 *
TIAN Xiaolin et al.: Optical Coherence Tomography Image Processing and Applications, Beijing: Beijing Institute of Technology Press, 31 January 2015 *

Cited By (12)

Publication number Priority date Publication date Assignee Title
CN110969640A (en) * 2018-09-29 2020-04-07 Tcl集团股份有限公司 Video image segmentation method, terminal device and computer-readable storage medium
US11734826B2 (en) 2018-11-27 2023-08-22 Tencent Technology (Shenzhen) Company Limited Image segmentation method and apparatus, computer device, and storage medium
WO2021171255A1 (en) * 2020-02-26 2021-09-02 Bright Clinical Research Limited A radar system for dynamically monitoring and guiding ongoing clinical trials
CN111627017A (en) * 2020-05-29 2020-09-04 昆山戎影医疗科技有限公司 Blood vessel lumen automatic segmentation method based on deep learning
CN111627017B (en) * 2020-05-29 2024-02-23 苏州博动戎影医疗科技有限公司 Automatic segmentation method for vascular lumen based on deep learning
CN112396601A (en) * 2020-12-07 2021-02-23 中山大学 Real-time neurosurgical instrument segmentation method and device based on endoscope image and storage medium
CN112396601B (en) * 2020-12-07 2022-07-29 中山大学 A real-time neurosurgical instrument segmentation method based on endoscopic images
CN112819831A (en) * 2021-01-29 2021-05-18 北京小白世纪网络科技有限公司 Segmentation model generation method and device based on convolution Lstm and multi-model fusion
CN112819831B (en) * 2021-01-29 2024-04-19 北京小白世纪网络科技有限公司 Segmentation model generation method and device based on convolution Lstm and multi-model fusion
CN113358042A (en) * 2021-06-30 2021-09-07 长江存储科技有限责任公司 Method for measuring film thickness
CN113358042B (en) * 2021-06-30 2023-02-14 长江存储科技有限责任公司 Method for measuring film thickness
CN113850778A (en) * 2021-09-24 2021-12-28 杭州脉流科技有限公司 Coronary OCT image automatic segmentation method, device, computing device and storage medium

Also Published As

Publication number Publication date
WO2019196099A1 (en) 2019-10-17

Similar Documents

Publication Publication Date Title
CN108510493A (en) Boundary alignment method, storage medium and the terminal of target object in medical image
CN109859203B (en) Defect tooth image identification method based on deep learning
CN109829942B (en) An automatic quantification method of retinal vessel diameter in fundus images
JP6564018B2 (en) Radiation image lung segmentation technology and bone attenuation technology
CN108257126B (en) Method, device and application for blood vessel detection and registration in 3D retinal OCT images
CN111161290A (en) Image segmentation model construction method, image segmentation method and image segmentation system
CN108470375B (en) Deep learning-based automatic nerve conduit detection method
CN107564048B (en) Feature registration method based on bifurcation point
CN109087310B (en) Method, system, storage medium and intelligent terminal for segmentation of meibomian gland texture area
CN108520512B (en) Method and device for measuring eye parameters
CN119579623B (en) A CT image segmentation method for spinal surgery
KR102174246B1 (en) Catheter tracking system and controlling method thereof
CN116485814A (en) Intracranial hematoma region segmentation method based on CT image
CN113570618A (en) Deep learning-based weighted bone age assessment method and system
WO2022127043A1 (en) Detection method and apparatus based on convolutional neural network, and computer device and storage medium
CN111179298A (en) CT image-based three-dimensional lung automatic segmentation and left-right lung separation method and system
CN108596897A (en) The full-automatic detection method of masopharyngeal mirror lower jaw pharynx closure based on image procossing
CN102831614A (en) Sequential medical image quick segmentation method based on interactive dictionary migration
CN117911264A (en) Hand acupoint detection method based on image fusion and attention mechanism
Yu et al. Automatic localization and segmentation of optic disc in fundus image using morphology and level set
CN117373070A (en) Method and device for labeling blood vessel segments, electronic equipment and storage medium
CN117618110A (en) A marker-free surgical navigation method and system based on 3D structured light
CN108074229A (en) A kind of tracheae tree extracting method and device
CN114820587A (en) Method and system for intelligently measuring vessel diameter in ultrasonic examination
CN105894489B (en) Cornea topography image processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180907
