
CN112132232A - Method, system and server for classification and labeling of medical images - Google Patents

Method, system and server for classification and labeling of medical images

Info

Publication number
CN112132232A
CN112132232A
Authority
CN
China
Prior art keywords
labeling
medical image
annotation
feature map
original medical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011114703.9A
Other languages
Chinese (zh)
Other versions
CN112132232B (en)
Inventor
李黎
张文浩
翟石磊
孙安玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Champath Image Technology Co ltd
Original Assignee
Wuhan Champath Image Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Champath Image Technology Co ltd
Priority claimed from CN202011114703.9A
Publication of CN112132232A
Application granted
Publication of CN112132232B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/40Filling a planar surface by adding surface attributes, e.g. colour or texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Multimedia (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Medical Treatment And Welfare Office Work (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention discloses a method, system and server for classifying and labeling medical images, belonging to the technical field of medical images. The method includes: obtaining an original medical image and sending it to a manual annotation terminal; obtaining the manual annotation map returned by the manual annotation terminal and obtaining the pixel boundary of the annotated region through an edge extraction algorithm; processing the original medical image through a pre-trained fully convolutional neural network model to obtain a feature map whose size equals that of the original medical image and whose depth equals the number of annotation types; comparing the feature map with the manual annotation map at the pixel boundary; if a pixel has the same result in the manual annotation map and the feature map, leaving it uncorrected; otherwise, selecting multiple pixels around that pixel on the feature map and assigning it the category held by the largest number of those pixels; and outputting the corrected feature map as the network annotation map.


Description

Method, System and Server for Classification and Labeling of Medical Images

Technical Field

The present invention belongs to the technical field of medical image processing, and in particular relates to a method, system and server for classifying and labeling medical images.

Background Art

Advanced medical imaging technologies such as ultrasound imaging and magnetic resonance imaging generate large numbers of 2D and 3D medical images for medical diagnosis. These images contain information on pathological tissues, organs and other structures, which medical staff must interpret using professional knowledge. However, compared with conventional images, medical images suffer from poor contrast and high noise, making diagnosis time-consuming and error-prone. Segmenting and annotating medical images in advance to quantitatively separate the target region from the background not only reduces the workload and cost borne by medical experts during disease diagnosis, but also reduces the error rate of the manual process, thereby improving diagnostic efficiency and accuracy.

At present, segmentation of medical images mostly relies on traditional methods based on thresholding, edge detection and the like. These methods are easily affected by noise and struggle to guarantee the continuity and closure of segmentation edges. Artificial intelligence algorithms, represented by neural networks, have reached mature application in image processing fields such as autonomous driving and have become an emerging technology for solving medical image processing problems.

Summary of the Invention

To solve the above problems, embodiments of the present invention provide a method, system and server for classifying and labeling medical images. The technical solution is as follows:

In one aspect, an embodiment of the present invention provides a method for classifying and labeling medical images, the method comprising:

S101: obtaining an original medical image and sending it to a manual annotation terminal;

S102: obtaining the manual annotation map returned by the manual annotation terminal and obtaining the pixel boundary of the annotated region through an edge extraction algorithm, where the manual annotation map is obtained at the manual annotation terminal by manually classifying and annotating the target region on the original medical image according to a predetermined strategy;

S103: processing the original medical image through a pre-trained fully convolutional neural network model to obtain a feature map whose size equals that of the original medical image and whose depth equals the number of annotation types, where the index of the layer holding the largest pixel value along the depth direction of the feature map is the annotated category number;

S104: at the pixel boundary, comparing the feature map with the manual annotation map; if a pixel has the same result in the manual annotation map and the feature map, leaving it uncorrected; otherwise, selecting multiple pixels around that pixel on the feature map and assigning it the category held by the largest number of those pixels;

S105: outputting the feature map processed in step S104 as the network annotation map.

At the manual annotation terminal, the edge contour of the target region is drawn manually, the region enclosed by the contour is filled using a connected-graph algorithm, and the filled region is manually classified and annotated; the categories in the manual annotation map correspond to those in the feature map.

Specifically, the annotation method of the manual annotation terminal in the embodiment of the present invention is:

S201: manually selecting a brush of a specific color and selecting the thick line type, where different colors represent different tissues and correspond to the respective annotation categories;

S202: manually outlining the approximate contour of the target region with the brush;

S203: using a connected-graph algorithm, marking the pixels inside the contour with the brush color to fill the region enclosed by the contour;

S204: manually selecting the thin line type and repairing the region edges;

S205: if there are more regions to annotate, repeating steps S201-S204; otherwise the manual annotation map is obtained.

Further, the classification and labeling method provided by the present invention also includes: each original medical image has a unique data identification code; in step S101, the original medical image and the corresponding data identification code are obtained and sent to the manual annotation terminal together; in step S102, the manual annotation map and the corresponding data identification code are obtained; and in step S105, the network annotation map and the corresponding data identification code are output. The data identification code is generated as follows:

S301: the original medical image X is multiplied by a linear projection matrix W to obtain a vector Y:

Y = WX

where the original medical image X has width w and height 3h, the linear projection matrix W has size 1×w, and the vector Y has size 1×3h; the linear projection matrix W is computed according to the following formula:

[formula image: definition of the linear projection matrix W]

S302: each element of the vector Y is rounded down to the nearest integer, so that every element of Y lies in [0, 255];

S303: the vector Y is mapped to a specific numeric identification code ID through a hash function; if the ID coincides with the ID of an existing image, a suffix identifier is appended to the ID to distinguish it from the existing image ID. The ID, with or without the suffix identifier, is the data identification code.
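Steps S301-S303 can be sketched as follows. This is an illustrative sketch only: the patent specifies W only in a figure, so an averaging projection is assumed here, and the SHA-1 hash, the 16-character truncation, the `-1` suffix format and the function name `make_data_id` are all assumptions made for the example.

```python
import hashlib
import numpy as np

def make_data_id(image: np.ndarray, existing_ids: set) -> str:
    """Sketch of S301-S303: project the image to a vector, floor to [0, 255],
    then hash the vector into an identification code."""
    w = image.shape[0]                      # image treated as a w x 3h matrix
    W = np.full((1, w), 1.0 / w)            # hypothetical projection: column averages
    Y = W @ image                           # S301: 1 x 3h vector
    Y = np.clip(np.floor(Y), 0, 255).astype(np.uint8)        # S302
    data_id = hashlib.sha1(Y.tobytes()).hexdigest()[:16]     # S303: hash to an ID
    suffix = 0
    candidate = data_id
    while candidate in existing_ids:        # collision: append a suffix identifier
        suffix += 1
        candidate = f"{data_id}-{suffix}"
    existing_ids.add(candidate)
    return candidate
```

Because the hash is deterministic, submitting the same image twice produces a collision and the second ID receives a suffix, matching the rule in S303.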

Specifically, step S103 includes:

S401: extracting feature information of the original medical image through multiple convolution kernels (kernel sizes are shown in the figures) and max pooling to obtain an initial feature map;

S402: downsampling the initial feature map to a series of sizes through average pooling and processing each with convolution kernels; then restoring the series of feature maps to the initial feature map size through bilinear interpolation followed by size-preserving convolution, obtaining the restored feature maps;

S403: concatenating the restored feature maps with the initial feature map to obtain a concatenated feature map;

S404: restoring the concatenated feature map to the original medical image size through bilinear interpolation and size-preserving convolution, then applying the softmax algorithm to obtain a feature map whose size equals that of the original medical image and whose depth equals the number of annotation types.
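Steps S401-S404 follow a pyramid-pooling pattern. A minimal NumPy sketch of the pooling-and-restoring branch is given below. It is illustrative only: the convolution steps are omitted because their kernel sizes and weights appear only in the patent figures, and the pooling factors `(2, 4)` are assumptions.

```python
import numpy as np

def avg_pool(fmap: np.ndarray, k: int) -> np.ndarray:
    """Average-pool a (H, W) map by factor k (H and W assumed divisible by k)."""
    h, w = fmap.shape
    return fmap.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def bilinear_resize(fmap: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Bilinear interpolation of a (H, W) map to (out_h, out_w)."""
    h, w = fmap.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = fmap[np.ix_(y0, x0)] * (1 - wx) + fmap[np.ix_(y0, x1)] * wx
    bot = fmap[np.ix_(y1, x0)] * (1 - wx) + fmap[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def multi_scale_branch(initial: np.ndarray, factors=(2, 4)) -> np.ndarray:
    """S402-S403 sketch: pool to a series of sizes, restore each to the initial
    size, then stack with the initial map along the depth axis. The convolution
    steps of S402 are omitted because their weights are not given."""
    h, w = initial.shape
    restored = [bilinear_resize(avg_pool(initial, k), h, w) for k in factors]
    return np.stack([initial] + restored)   # S403: concatenation along depth
```

Pooling to several scales and restoring them to a common size lets the concatenated map in S403 carry context at multiple receptive-field sizes, which is the point of the multi-scale branch.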

Preferably, the classification and labeling method provided by the present invention further includes: selecting original medical images whose feature maps show small error relative to their manual annotation maps as high-quality samples, using the high-quality samples as the input of the fully convolutional neural network model and the corresponding network annotation maps as its output, and performing optimization training of the fully convolutional neural network model.

Specifically, in step S104, if the proportion of boundary pixels whose results differ between the feature map and the manual annotation map is smaller than a predetermined threshold, the error between the feature map and the manual annotation map is considered small and the corresponding original medical image is a high-quality sample. The optimization training method includes: using the originally annotated medical images as a validation set and performing k-fold cross-validation on the high-quality samples.
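The high-quality-sample criterion can be sketched as below. The 5% default threshold is an assumption made for the example; the patent leaves it as a predetermined value.

```python
import numpy as np

def is_high_quality(net_labels, manual_labels, boundary_mask, threshold=0.05):
    """An image is a high-quality sample when the fraction of boundary pixels
    whose categories disagree between the network feature map and the manual
    annotation map is below a predetermined threshold."""
    boundary = boundary_mask.astype(bool)
    mismatch = (net_labels != manual_labels) & boundary
    return mismatch.sum() / boundary.sum() < threshold
```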

Specifically, in step S104, the number of the multiple surrounding pixels is 15 to 25.

In another aspect, an embodiment of the present invention further provides a system for classifying and labeling medical images, the system comprising:

a user terminal: used to upload original medical images to the server and to display network annotation maps;

a central database: used to acquire and store the original medical images uploaded by users, distribute the original medical images to the server, and store the network annotation maps and send them to the user terminal;

a server: used to distribute the original medical images to the manual annotation terminal; process the original medical images through a pre-trained fully convolutional neural network model to obtain feature maps whose size equals that of the original image and whose depth equals the number of annotation types; obtain the manual annotation maps and process them through an edge extraction algorithm to obtain the pixel boundaries of the annotated regions; at the pixel boundaries, correct the feature maps by comparison with the manual annotation maps to obtain network annotation maps; and upload the network annotation maps to the central database;

a manual annotation terminal: used to display the original medical images and the manual annotation maps, manually annotate the original medical images to obtain the manual annotation maps, and upload the manual annotation maps to the server.

In yet another aspect, an embodiment of the present invention further provides a server, comprising:

a data transceiver module: used to receive the original medical images sent by the central database, send the original medical images to the manual annotation terminal, and send the network annotation maps to the central database;

an image processing module: used to process the manual annotation maps through an edge extraction algorithm to obtain the pixel boundaries of the annotated regions;

a fully convolutional neural network module: used to process the original medical images through a pre-trained fully convolutional neural network model to obtain feature maps whose size equals that of the original medical image and whose depth equals the number of annotation types, where the index of the layer holding the largest pixel value along the depth direction of a feature map is the annotated category number;

a correction module: used to compare the feature map with the manual annotation map at the pixel boundary; if a pixel has the same result in both, leave it uncorrected; otherwise, select multiple pixels around that pixel on the feature map and assign it the category held by the largest number of those pixels; the corrected feature map serves as the network annotation map.

The fully convolutional neural network module provided by the embodiment of the present invention includes:

a feature map extraction unit: used to extract the feature information of the original medical image through multiple rounds of convolution and pooling to obtain an initial feature map;

a multi-scale convolution pooling unit: used to downsample the initial feature map to a series of sizes through average pooling and process each with convolution kernels, then restore the series of feature maps to the initial feature map size through bilinear interpolation followed by size-preserving convolution, obtaining the restored feature maps;

a concatenation unit: used to concatenate the restored feature maps with the initial feature map to obtain a concatenated feature map;

an upsampling unit: used to restore the concatenated feature map to the original medical image size through bilinear interpolation and size-preserving convolution, then apply the softmax algorithm to obtain a feature map whose size equals that of the original medical image and whose depth equals the number of annotation types.

In yet another aspect, an embodiment of the present invention further provides a manual annotation terminal, comprising: an original image display module: used to display the original medical image and support manual annotation; during manual annotation, the approximate contour of the target region is first outlined with the thick line type, and the region edges are then repaired with the thin line type;

a control module: used to process the annotated original medical image to obtain the manual annotation map;

a manual annotation map display module: used to display the manual annotation map;

a selection module: used to select the brush color and the line thickness, where different colors represent different tissues and correspond to the respective annotation categories;

a communication module: used to receive original medical images and upload manual annotation maps.

The beneficial effects of the technical solutions provided by the embodiments of the present invention are as follows. The present invention builds a medical image segmentation and annotation method and system on a fully convolutional neural network, realizing an end-to-end mapping from original medical image pixels to network annotation map pixels, whose tensorized operations greatly improve image processing speed. Compared with traditional image processing algorithms, the neural network annotation method used in the present invention requires no hand-designed image processing operators and, given a good training data set, can annotate medical images of different types and sizes. By combining the annotation results output by the convolutional neural network with the manual annotation results, the neural network weights are optimized, improving the segmentation and annotation results and establishing positive feedback in the image processing workflow, continuously raising the efficiency and accuracy of medical imaging diagnosis.

Brief Description of the Drawings

FIG. 1 is a flowchart of the method for classifying and labeling medical images provided by an embodiment of the present invention;

FIG. 2 is a flowchart of the annotation method of the manual annotation terminal provided by an embodiment of the present invention;

FIG. 3 is a diagram of the operation interface of the manual annotation terminal provided by an embodiment of the present invention;

FIG. 4 is an operation flowchart of the manual annotation terminal provided by an embodiment of the present invention;

FIG. 5 is a flowchart of the method for generating the data identification code provided by an embodiment of the present invention;

FIG. 6 is a detailed flowchart of step S103;

FIG. 7 shows the specific processing of step S103;

FIG. 8 is a schematic block diagram of the system for classifying and labeling medical images provided by an embodiment of the present invention;

FIG. 9 is a schematic block diagram of the server provided by an embodiment of the present invention;

FIG. 10 is a schematic block diagram of the fully convolutional neural network module provided by an embodiment of the present invention;

FIG. 11 is a schematic block diagram of the manual annotation terminal provided by an embodiment of the present invention.

Detailed Description of Embodiments

To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings.

Embodiment 1

Referring to FIG. 1, Embodiment 1 discloses a method for classifying and labeling medical images, the method comprising:

S101: obtaining an original medical image and sending it to a manual annotation terminal; the corresponding data identification code may also be obtained at this point.

S102: obtaining the manual annotation map (and the corresponding data identification code) returned by the manual annotation terminal, and obtaining the pixel boundary of the annotated region through an edge extraction algorithm. The manual annotation map is obtained at the manual annotation terminal by manually classifying and annotating the target region on the original medical image according to a predetermined strategy. In this embodiment, only the contour of the target region needs to be annotated (i.e., coarse annotation), so the work intensity is low and annotation is very fast. The pixel boundary and its vicinity constitute the region where the annotation result may be wrong.
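The edge-extraction step can be sketched as follows. The patent does not name a specific edge-extraction algorithm; the label-transition rule over 4-neighbors used here is one simple, assumed choice.

```python
import numpy as np

def annotation_boundary(labels: np.ndarray) -> np.ndarray:
    """Mark as boundary every pixel whose category differs from at least one
    of its 4-neighbors in the (H, W) annotation map."""
    boundary = np.zeros_like(labels, dtype=bool)
    boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]   # differs from pixel below
    boundary[1:, :]  |= labels[1:, :] != labels[:-1, :]   # differs from pixel above
    boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]   # differs from pixel right
    boundary[:, 1:]  |= labels[:, 1:] != labels[:, :-1]   # differs from pixel left
    return boundary
```

Pixels deep inside a uniformly labeled region are never marked, so the comparison in S104 is restricted to the narrow band where coarse manual annotation is most likely to be wrong.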

S103: processing the original medical image through a pre-trained fully convolutional neural network model to obtain a feature map whose size equals that of the original medical image and whose depth equals the number of annotation types. The index of the layer holding the largest pixel value along the depth direction of the feature map (corresponding to N in FIG. 7) is the annotated category number.
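Reading the category number off the network output can be sketched as below, assuming the feature map is arranged as (number of annotation types, height, width); the function name is chosen for the example.

```python
import numpy as np

def category_map(feature_map: np.ndarray) -> np.ndarray:
    """Given a (num_categories, H, W) feature map, return the (H, W) map of
    category numbers: for each pixel, the depth index holding the largest
    value (equivalently, the argmax of the softmax along depth)."""
    exps = np.exp(feature_map - feature_map.max(axis=0, keepdims=True))
    probs = exps / exps.sum(axis=0, keepdims=True)   # softmax over the depth axis
    return probs.argmax(axis=0)                      # layer index = category number
```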

S104: at the pixel boundary, comparing the feature map with the manual annotation map pixel by pixel. If a pixel has the same result in the manual annotation map and the feature map, it is not corrected. Otherwise, multiple (e.g., 20) pixels around that pixel on the feature map are selected, and the category held by the largest number of those pixels is assigned to the pixel to correct the corresponding point in the feature map. This step corrects the feature map, which not only improves the accuracy of the network annotation map but can also be used for feedback-based optimization training. As a concrete example, suppose that among the 20 pixels around a disputed pixel, 8 belong to category 1, 5 to category 2, 3 to category 3 and 4 to category 4; category 1 has the largest count, so the disputed pixel is assigned category 1. Specifically, in step S104 the number of surrounding pixels is 15 to 25, for example 20; the number is chosen according to the complexity of the image: the more complex the image, the larger the number.
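The majority-vote correction of S104 can be sketched as follows. This is an illustrative sketch: the patent only specifies roughly 15-25 surrounding pixels, so the square window shape, the `radius` parameter and the function names are assumptions (a 5×5 window minus the center gives 24 neighbors, clipped at image borders).

```python
import numpy as np

def correct_boundary(net_labels, manual_labels, boundary_mask, radius=2):
    """At boundary pixels where the network and manual categories disagree,
    reassign the pixel to the majority category among the surrounding pixels
    of the network map."""
    corrected = net_labels.copy()
    h, w = net_labels.shape
    ys, xs = np.nonzero(boundary_mask & (net_labels != manual_labels))
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        window = np.delete(net_labels[y0:y1, x0:x1].ravel(),
                           (y - y0) * (x1 - x0) + (x - x0))  # drop the center pixel
        values, counts = np.unique(window, return_counts=True)
        corrected[y, x] = values[counts.argmax()]            # majority category
    return corrected
```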

S105: outputting the corrected feature map (processed in step S104, together with the corresponding data identification code) as the network annotation map; the manual annotation map may of course be output alongside it.

At the manual annotation terminal, the edge contour of the target region is drawn manually, and the region enclosed by the contour is filled using a connected-graph algorithm (using, for instance, the same color as the contour). The filled region is then manually classified and annotated (the color itself can serve as the classification, or the class can be defined after filling). The categories in the manual annotation map correspond to those in the feature map, and manual annotation must follow a fixed strategy (for example, colors defined according to the actual situation for cancerous tissue, diseased tissue, normal tissue and so on) to guarantee this correspondence.

Specifically, referring to FIGS. 2-4, the annotation method of the manual annotation terminal in the embodiment of the present invention is:

S201: manually selecting a brush of a specific color and selecting the thick line type, where different colors represent different tissues and correspond to the respective annotation categories.

S202: manually outlining the approximate contour of the target region with the brush (for example with a thick dashed line) and clicking to confirm.

S203: using a connected-graph algorithm, marking the pixels inside the contour with the brush color to fill the region enclosed by the contour.
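The connected-graph fill of S203 can be sketched as a flood fill from the canvas border: everything reachable from the border without crossing the drawn contour is "outside", and the rest is painted with the brush color. This is an assumed realization; the patent does not fix the traversal order or connectivity.

```python
from collections import deque
import numpy as np

def fill_contour(canvas: np.ndarray, brush_color: int) -> np.ndarray:
    """Fill the region enclosed by a drawn contour. The canvas holds 0 for
    background and brush_color on the contour pixels."""
    h, w = canvas.shape
    outside = np.zeros((h, w), dtype=bool)
    # Seed the search with every background pixel on the canvas border.
    queue = deque((y, x) for y in range(h) for x in range(w)
                  if (y in (0, h - 1) or x in (0, w - 1)) and canvas[y, x] == 0)
    for y, x in queue:
        outside[y, x] = True
    while queue:                             # BFS over the 4-connected background
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and canvas[ny, nx] == 0 and not outside[ny, nx]:
                outside[ny, nx] = True
                queue.append((ny, nx))
    filled = canvas.copy()
    filled[~outside] = brush_color           # contour and interior get the brush color
    return filled
```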

S204:人工选择细线型,对区域边缘进行修补,使标注区域尽可能正确。S204: Manually select the thin line type, and repair the edge of the area to make the marked area as correct as possible.

S205:如果还有区域要标注,则重复步骤S201- S204(不同部位采用相应的颜色),如果没有则得到人工标注图,则点击下一张,进行下一张标注。S205: If there is still an area to be marked, repeat steps S201-S204 (different parts use corresponding colors), if not, get a manual marked map, click the next one to mark the next one.

S206: If an image needs to be modified, click Previous to review it.

S207: After all images have been annotated, package the results and send them to the server.
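As a rough sketch of the fill step in S203 (assuming a small 2D grid in which a closed contour is drawn in the brush color; the function name and grid layout are illustrative, not from the patent), the enclosed region can be found by flood-filling the background from the image border and treating every pixel not reached as interior:

```python
from collections import deque
import numpy as np

def fill_contour(canvas, contour_color):
    """Fill the region enclosed by a closed contour (cf. step S203).

    canvas: 2D int array; pixels equal to contour_color form the contour,
    0 is background. Background pixels reachable from the border without
    crossing the contour are 'outside'; everything else is painted with
    the brush color.
    """
    h, w = canvas.shape
    outside = np.zeros((h, w), dtype=bool)
    queue = deque()
    # Seed the flood fill with every non-contour pixel on the image border.
    for y in range(h):
        for x in range(w):
            if (y in (0, h - 1) or x in (0, w - 1)) and canvas[y, x] != contour_color:
                outside[y, x] = True
                queue.append((y, x))
    # 4-connected flood fill of the exterior background.
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not outside[ny, nx] \
                    and canvas[ny, nx] != contour_color:
                outside[ny, nx] = True
                queue.append((ny, nx))
    filled = canvas.copy()
    filled[~outside] = contour_color  # interior and contour receive the brush color
    return filled
```

The same idea extends to S205 by running the fill once per region with that region's brush color.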

The annotation process of the present invention can be implemented on a conventional PC, and is preferably implemented on a touch-screen device to facilitate drawing contours.

Specifically, the operator logs in with an account assigned by the server and obtains the original medical images. The manual labeling terminal provides functions such as displaying, switching, and storing medical images. After the annotation of one medical image is completed, it is temporarily stored on the terminal and the next image is annotated. After rough annotation of a batch of, for example, 20 images is completed, the terminal sends the annotation data back to the server for further processing. Alternatively, the terminal can work online: each time the user finishes the rough annotation of one medical image, the terminal returns the roughly annotated image to the server and downloads a new medical image for annotation.

Further, the classification and labeling method provided by the present invention also includes: each original medical image has a unique data identification code. In step S101, the original medical image and its data identification code are obtained and sent together to the manual labeling terminal; in step S102, the manual annotation map and the corresponding data identification code are obtained; and in step S105, the network annotation map and the corresponding data identification code are output.

Specifically, after the central database generates the data identification code, it sends the original medical image together with the code to the server, which randomly distributes the images to be annotated (the original medical images) to the manual labeling terminals. Upon receiving the data, the operator performs a rough manual annotation and the terminal returns the rough manual annotation map to the server. The server processes the rough annotation map to extract accurate pixel edges of the roughly annotated features and, combined with the fully convolutional neural network model, achieves pixel-level segmentation and multi-class classification of the target tissue or organ. Once annotation is complete, each server returns the annotated image (the network annotation map) to the central database, which in turn returns it to the user. The central database can generate a unique data identification code for every original medical image uploaded by a user, for storage and indexing; the code is transmitted to the server together with the corresponding image. After the server finishes processing a medical image, it sends the image back to the central database, which uses the code to establish and store a one-to-one correspondence between the annotated medical image and the original one. Users can retrieve the original medical images, manual annotation maps, and network annotation maps from the central database as needed.

Specifically, referring to FIG. 5, the data identification code in this embodiment of the present invention is generated as follows:

S301: A vector Y is obtained from the original medical image X by a matrix operation with the linear projection matrix W (the equation appears only as an image in the published filing; the stated dimensions imply a form such as Y = X·Wᵀ).

Here the original medical image X has width w and height 3h; the linear projection matrix W has size 1*w; the vector Y has size 1*3h. The linear projection matrix W is calculated according to a formula that likewise appears only as an equation image in the filing.

S302: Each element of the vector Y is rounded down to the nearest integer, so that every element of Y lies in [0, 255].

S303: The vector Y is mapped to a specific numeric identification code (ID) by a hash function. If this ID is the same as an existing image ID, a suffix identifier is appended to distinguish it from the existing ID. The resulting ID, with a suffix (when it collided with an existing image ID) or without one (when it did not), is the data identification code.
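A minimal sketch of S303, assuming SHA-256 as the hash, a 16-character ID, and a numeric suffix; the patent specifies none of these details, so they are purely illustrative:

```python
import hashlib
import numpy as np

def make_data_id(y, existing_ids):
    """Map the integer vector Y to a data identification code (step S303).

    Hashing the byte representation of Y yields a candidate ID; if the
    candidate collides with an existing image ID, a suffix identifier is
    appended until the ID is unique. existing_ids is updated in place.
    """
    y = np.asarray(y, dtype=np.uint8)           # elements already lie in [0, 255]
    candidate = hashlib.sha256(y.tobytes()).hexdigest()[:16]
    data_id, n = candidate, 0
    while data_id in existing_ids:              # collision: add suffix identifier
        n += 1
        data_id = f"{candidate}-{n}"
    existing_ids.add(data_id)
    return data_id
```

Registering the same vector twice yields the same base ID with a suffix the second time, matching the collision rule described above.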

Referring to FIGS. 6 and 7, step S103 specifically includes:

S401: Extract feature information from the original medical image (here 512*512*3) through multiple convolutions and pooling operations. While the image size is reduced, the depth of the feature map is increased, so that the deep feature information contained in the original image is extracted, yielding the initial feature map.

S402: Downsample the initial feature map to a series of sizes by average pooling (for example to the scales 8*8*256, 4*4*256, 2*2*256, and 1*1*256) and process each with a 1*1 convolution kernel (the results remain 8*8*256, 4*4*256, 2*2*256, and 1*1*256, respectively). Then restore the feature maps of this series of sizes to the size of the initial feature map by bilinear interpolation followed by same-size convolution, obtaining the restored feature maps.

S403: Splice the restored feature maps and the initial feature map to obtain a spliced feature map, consistent with conventional techniques.

S404: Restore the spliced feature map to the size of the original medical image by bilinear interpolation followed by same-size convolution, then apply the softmax algorithm to obtain a feature map whose size equals the original medical image and whose depth equals the number of annotation types (512*512*N, where N is the number of annotation types).

Specifically, step S401 includes: extracting the feature information of the original medical image through multiple 3*3 convolution kernels and 2*2 max pooling operations to obtain the initial feature map. One concrete sequence is: 3*3*64 convolution, 2*2 max pooling, 3*3*256 convolution, 2*2 max pooling, 3*3*512 convolution.

Preferably, the classification and labeling method provided by the present invention further includes: selecting the original medical images for which the error between the feature map and the manual annotation map is small as high-quality samples, using the high-quality samples as the input of the fully convolutional neural network model and the corresponding network annotation maps as its output, and performing optimization training on the model. Such optimization training is a conventional technique in the field, and a detailed description is omitted in this embodiment.

Specifically, in step S104, if the proportion (by pixel count) of pixels on the pixel boundary (the set of boundary pixels) whose results differ between the feature map and the manual annotation map is smaller than a predetermined threshold (designed according to actual needs, for example 1%), the error between the feature map and the manual annotation map is considered small, and the corresponding original medical image is a high-quality sample. The optimization training method includes: using the existing annotated medical images (stored in the central database for training) as the validation set, performing k-fold cross-validation on the high-quality samples, and improving the mean intersection-over-union between the output of the fully convolutional neural network model and the actual annotation map of the image.
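The high-quality-sample test in S104 reduces to a one-line boundary comparison. In this sketch the class maps are assumed to be 2D integer label arrays and the boundary a boolean mask; the 1% default mirrors the example threshold above:

```python
import numpy as np

def is_high_quality(pred_classes, manual_classes, boundary_mask, threshold=0.01):
    """Return True when the fraction of boundary pixels on which the
    network feature map and the manual annotation map disagree is below
    the threshold (e.g. 1%), marking the image as a high-quality sample.
    """
    boundary = boundary_mask.astype(bool)
    disagree = (pred_classes != manual_classes) & boundary
    return disagree.sum() / boundary.sum() < threshold
```

Only boundary pixels enter the ratio; disagreements in the interior are irrelevant to this test, consistent with the description of S104.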

Embodiment 2

Referring to FIG. 8, Embodiment 2 discloses a system for the classification and labeling of medical images. The system includes:

User: used to upload the original medical images to the server, and to display the network annotation maps.

Central database: used to acquire and store the original medical images uploaded by users, distribute them to the servers, store the network annotation maps, and send the network annotation maps to the users.

Server: used to distribute the original medical images to the manual labeling terminals; to process an original medical image with the pre-trained fully convolutional neural network model to obtain a feature map whose size equals the original image and whose depth equals the number of annotation types; to obtain the manual annotation map and process it with an edge extraction algorithm to obtain the pixel boundary of the annotated region; to correct the feature map at the pixel boundary by comparison with the manual annotation map (see step S104 for details) to obtain the network annotation map; and to upload the network annotation map to the central database.

Manual labeling terminal: used to display the original medical image and the manual annotation map, to produce the manual annotation map by manual annotation of the original medical image, and to upload the manual annotation map to the server.

After obtaining medical images, users such as hospitals, medical research institutes, and medical companies upload them to the central database.

The central database can be connected to one or more servers according to the actual number of medical images, and automatically distributes the original medical images to different servers for processing. When the number of medical images to be processed is small, the server and the central database can be merged into one. After receiving the original medical images, a server automatically distributes them to one or more manual labeling terminals for rough annotation. The central database can be deployed on the user organization's local area network, in cloud storage, and so on, and users can upload data over wireless networks, Ethernet, USB, and other communication channels. The central database assigns each user a dedicated account and password and grants read, write, and modify permissions to ensure data security. After acquiring the medical images uploaded by a user, the central database generates a unique digital identification code for each image, binds it to the corresponding user, and distributes the original medical image and its code to a server.

The server can be any of various computing platforms, such as a physical server, a cloud server, or a personal computer. The server assigns an account and password to each manual labeling terminal; after receiving an original medical image and its digital identification code, it distributes them to a manual labeling terminal, where the manual labeling unit performs the rough annotation.

The manual labeling terminal can run on various operating systems such as Windows, Android, and iOS, and supports cross-platform data synchronization. After the operator completes the rough annotation of an original image, the manual annotation map can be sent back to the server.

The manual labeling terminal sends the manual annotation maps back to the server, and the server transmits the resulting network annotation maps back to the central database for users to download. K, M, and N in FIG. 8 denote unspecified quantities. In practical applications, servers can be added or removed according to the actual number of images, with each server running the same instructions and the same fully convolutional neural network model. The central database and the server here refer to functional units; in practice they can also be integrated into one. The manual labeling terminals can be computers, laptops, tablets, mobile phones, and the like, can run on operating systems such as Windows, Android, and iOS, and their number can be increased or decreased according to the actual application.

Embodiment 3

Referring to FIG. 9, Embodiment 3 discloses a server, including:

Data transceiver module: used to receive and store the original medical images sent by the central database, to send original medical images to the manual labeling terminals, and to send the network annotation maps to the central database. This is a conventional structure.

Image processing module: used to process the manual annotation map with an edge extraction algorithm to obtain the pixel boundary of the annotated region.

Fully convolutional neural network module: used to process the original medical image with the pre-trained fully convolutional neural network model to obtain a feature map whose size equals the original medical image and whose depth equals the number of annotation types; the index of the layer in which the pixel value is largest along the depth direction of the feature map is the annotated class number.

Correction module: used to compare the feature map with the manual annotation map at the pixel boundary. If a pixel has the same result in the manual annotation map and the feature map, it is left unchanged; otherwise, several pixels around it are selected on the feature map, and the class that occurs most often among them is taken as the class of that pixel, correcting the corresponding pixel of the feature map. The corrected feature map (after the processing of step S104) serves as the network annotation map.
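The correction rule can be sketched as follows, assuming 2D integer class maps, a boolean boundary mask, and a 3*3 neighborhood as the "several pixels around" a point (the neighborhood size is an illustrative assumption; the patent does not fix it):

```python
from collections import Counter
import numpy as np

def correct_at_boundary(pred, manual, boundary_mask):
    """At boundary pixels where the feature-map class disagrees with the
    manual annotation map, replace the class by the majority class among
    the surrounding feature-map pixels (cf. step S104)."""
    h, w = pred.shape
    corrected = pred.copy()
    ys, xs = np.nonzero(boundary_mask & (pred != manual))
    for y, x in zip(ys, xs):
        # Collect the up-to-8 neighbors of (y, x) on the feature map.
        neigh = [pred[ny, nx]
                 for ny in range(max(0, y - 1), min(h, y + 2))
                 for nx in range(max(0, x - 1), min(w, x + 2))
                 if (ny, nx) != (y, x)]
        corrected[y, x] = Counter(neigh).most_common(1)[0][0]
    return corrected
```

Pixels that already agree with the manual annotation map, and all non-boundary pixels, are passed through unchanged.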

Storage module: used to store data.

Further, the server also includes a feedback module, used to select original medical images for which the error between the feature map and the manual annotation map is small as high-quality samples, to use a high-quality sample as the input of the fully convolutional neural network model and the network annotation map of that original medical image as its output, and to perform optimization training on the model.

Referring to FIG. 10, the fully convolutional neural network module provided by this embodiment of the present invention includes:

Feature map extraction unit: used to extract the feature information of the original medical image through multiple convolutions and pooling operations to obtain the initial feature map; specifically, through multiple 3*3 convolution kernels and 2*2 max pooling.

Multi-scale convolution pooling unit: used to downsample the initial feature map to a series of sizes by average pooling and process each with a 1*1 convolution kernel, then restore the feature maps of this series of sizes to the size of the initial feature map by bilinear interpolation followed by same-size convolution, obtaining the restored feature maps.

Splicing unit: used to splice the restored feature maps and the initial feature map to obtain the spliced feature map.

Upsampling unit: used to restore the spliced feature map to the size of the original medical image by bilinear interpolation followed by same-size convolution, then obtain, via the softmax algorithm, a feature map whose size equals the original medical image and whose depth equals the number of annotation types.

Embodiment 4

Referring to FIG. 11, Embodiment 4 discloses a manual labeling terminal, including: Original image display module: used to display the original medical image and support manual annotation; during manual annotation, a thick line type is first used to outline the approximate contour of the target region, and a thin line type is then used to touch up the region edges.

Control module: used to process the annotated original medical image to obtain the manual annotation map, and to control the entire manual labeling terminal, for example outputting display data to the original image display module and the manual annotation map display module.

Manual annotation map display module: used to display the manual annotation map.

Selection module: used to select the brush color and line thickness; different colors represent different tissues and correspond to the respective annotation classes. Specifically, multiple line types (at least two) and multiple colors (defined according to the predetermined strategy) can be selected.

Communication module: used to receive the original medical images and upload the manual annotation maps.

Specifically, the original image display module, the manual annotation map display module, and the selection module can be implemented on the same touch display screen.

Specifically, the original image display module generates a blank canvas with the same size as the original image. The selection module allows brushes of different colors and line types of different thicknesses to be chosen, and can also provide a Confirm button, a Next button, a Previous button, and a Finish button. After the operator completes the rough annotation of the original image and clicks the Confirm button, the annotation is saved to the annotation map cache and the filled annotation map is displayed on the manual annotation map display module. Clicking the Next button jumps to the next unannotated original image; the Previous button allows previously annotated images to be reviewed, and if an image is modified during review, the result is saved to the annotation map cache. When the original image display area is blank, the server currently has no assigned task, and the Finish button is clicked to complete the annotation.

The above are only preferred embodiments of the present invention and are not intended to limit it. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. A method for the classification and labeling of medical images, the method comprising:
S101: acquiring an original medical image and sending it to a manual labeling terminal;
S102: acquiring a manual annotation map returned by the manual labeling terminal and obtaining the pixel boundary of the annotated region through an edge extraction algorithm, wherein the manual annotation map is obtained at the manual labeling terminal by manually classifying and annotating the target region of the original medical image according to a preset strategy;
S103: processing the original medical image with a pre-trained fully convolutional neural network model to obtain a feature map whose size equals the original medical image and whose depth equals the number of annotation types, wherein the index of the layer in which the pixel value is largest along the depth direction of the feature map is the annotated class number;
S104: comparing the feature map with the manual annotation map at the pixel boundary; if the results for a pixel are consistent between the manual annotation map and the feature map, the pixel is not corrected; otherwise, a plurality of pixels around that pixel are selected on the feature map, and the class with the largest count among them is taken as the class of the pixel;
S105: outputting the feature map processed in step S104 as the network annotation map.
2. The method for the classification and labeling of medical images according to claim 1, wherein, at the manual labeling terminal, the edge contour of the target region is drawn manually, the region enclosed by the edge contour is filled by a connected-graph algorithm, the filled region is manually classified and annotated, and the manual annotation map corresponds to the classes of the feature map annotation.
3. The method for the classification and labeling of medical images according to claim 2, wherein the labeling method of the manual labeling terminal is as follows:
S201: manually selecting a brush of a specific color and choosing the thick line type, wherein different colors represent different tissues and correspond to the respective annotation classes;
S202: manually drawing a rough contour of the target region with the brush;
S203: marking the pixels inside the contour with the brush color by a connected-graph algorithm so as to fill the region enclosed by the edge contour;
S204: manually selecting the thin line type and repairing the edge of the region;
S205: if regions remain to be annotated, repeating steps S201-S204; otherwise obtaining the manual annotation map.
4. The method for the classification and labeling of medical images according to claim 1, further comprising: each original medical image has a unique data identification code; in step S101, the original medical image and the corresponding data identification code are obtained and sent together to the manual labeling terminal; in step S102, the manual annotation map and the corresponding data identification code are acquired, and in step S105, the network annotation map and the corresponding data identification code are output;
the data identification code is generated as follows:
S301: a vector Y is obtained from the original medical image X by a matrix operation with the linear projection matrix W (the equation appears only as an image in the filing);
wherein the size of the original medical image X is: width w and height 3h; the size of the linear projection matrix W is 1*w; the size of the vector Y is 1*3h; the linear projection matrix W is calculated according to a formula that likewise appears only as an equation image in the filing;
S302: each element of the vector Y is rounded down to the nearest integer, such that each element of Y lies in [0, 255];
S303: the vector Y is mapped to a specific numeric identification code ID by a hash function; if the specific ID is the same as an existing image ID, a suffix identifier is appended to the specific ID to distinguish it from the existing image ID; the specific numeric identification code ID, with or without a suffix identifier, is the data identification code.
5. The method for the classification and labeling of medical images according to claim 1, wherein step S103 specifically comprises:
S401: extracting the feature information of the original medical image through a plurality of 3*3 convolution kernels and 2*2 max pooling operations to obtain an initial feature map;
S402: downsampling the initial feature map to a series of sizes by average pooling and performing 1*1 convolution kernel processing; restoring the feature maps of the series of sizes to the size of the initial feature map by bilinear interpolation and same-size convolution to obtain restored feature maps;
S403: splicing the restored feature maps and the initial feature map to obtain a spliced feature map;
S404: restoring the spliced feature map to the size of the original medical image by bilinear interpolation and same-size convolution in sequence, and obtaining, through the softmax algorithm, a feature map whose size equals the original medical image and whose depth equals the number of annotation types.
6. The method for the classification and labeling of medical images according to claim 1, further comprising: selecting an original medical image with a small error between the feature map and the manual annotation map as a high-quality sample, using the high-quality sample as the input of the fully convolutional neural network model and the network annotation map corresponding to that original medical image as its output, and performing optimization training on the fully convolutional neural network model.
7. The method for the classification and labeling of medical images according to claim 6, wherein,
in step S104, if the proportion, within the pixel boundary, of pixels whose results are inconsistent between the feature map and the manual annotation map is smaller than a predetermined threshold, the error between the feature map and the manual annotation map is considered small, and the corresponding original medical image is a high-quality sample;
the optimization training method comprises: taking the existing annotated medical images as a validation set and performing k-fold cross-validation on the high-quality samples.
8. A system for the classification and labeling of medical images, the system comprising:
a user: for uploading an original medical image to a server and displaying a network annotation map;
a central database: for acquiring and storing the original medical image uploaded by the user, distributing the original medical image to the server, storing the network annotation map, and sending the network annotation map to the user;
a server: for distributing the original medical image to a manual labeling terminal, processing the original medical image with a pre-trained fully convolutional neural network model to obtain a feature map whose size equals the original image and whose depth equals the number of annotation types, acquiring the manual annotation map and processing it with an edge extraction algorithm to obtain the pixel boundary of the annotated region; correcting the feature map at the pixel boundary by comparison with the manual annotation map to obtain the network annotation map, and uploading the network annotation map to the central database;
a manual labeling terminal: for displaying the original medical image and the manual annotation map, manually annotating the original medical image to obtain the manual annotation map, and uploading the manual annotation map to the server.
9. A server, comprising:
a data transceiver module: for receiving the original medical image sent by the central database, sending the original medical image to the manual annotation terminal, and sending the network annotation map to the central database;
an image processing module: for processing the manual annotation map through an edge extraction algorithm to obtain the pixel boundary of each annotated region;
a fully convolutional neural network module: for processing the original medical image through a pre-trained fully convolutional neural network model to obtain a feature map whose size equals that of the original medical image and whose depth equals the number of annotation classes, wherein, for each pixel, the index of the layer holding the largest value along the depth direction of the feature map is the annotated class number;
a correction module: for comparing the feature map with the manual annotation map at the pixel boundary; if a pixel's result on the manual annotation map is consistent with its result on the feature map, the pixel is not corrected; otherwise, a plurality of pixels surrounding that pixel are selected on the feature map, and the class occurring most often among them is taken as the class of the pixel; the corrected feature map is taken as the network annotation map.
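The argmax-over-depth classification and the majority-vote correction described by the correction module can be sketched as follows (a non-authoritative illustration: the function names, the 3×3 neighbourhood, and the choice to vote over the uncorrected prediction are assumptions not fixed by the claim):

```python
import numpy as np
from collections import Counter

def classify(feature_map):
    # feature_map: (H, W, K). Per pixel, the depth index holding the largest
    # value is taken as the annotated class number, as in the claim.
    return feature_map.argmax(axis=-1)

def correct_at_boundary(pred, manual, boundary_pixels, radius=1):
    # pred:   (H, W) class map derived from the network feature map
    # manual: (H, W) class map from the manual annotation
    # boundary_pixels: iterable of (y, x) positions on the region boundary
    corrected = pred.copy()
    H, W = pred.shape
    for y, x in boundary_pixels:
        if pred[y, x] == manual[y, x]:
            continue                      # consistent: no correction
        y0, y1 = max(0, y - radius), min(H, y + radius + 1)
        x0, x1 = max(0, x - radius), min(W, x + radius + 1)
        votes = Counter()                 # majority vote over surrounding pixels
        for yy in range(y0, y1):
            for xx in range(x0, x1):
                if (yy, xx) != (y, x):
                    votes[int(pred[yy, xx])] += 1
        corrected[y, x] = votes.most_common(1)[0][0]
    return corrected
```

A disagreeing boundary pixel surrounded mostly by another class is thus reassigned to that class, while agreeing pixels pass through untouched.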
10. The server of claim 9, wherein the full convolutional neural network module comprises:
a feature map extraction unit: for extracting feature information of the original medical image through multiple rounds of convolution and pooling to obtain an initial feature map;
a multi-scale convolution pooling unit: for down-sampling the initial feature map to a series of sizes by average pooling, performing convolution processing on each down-sampled feature map, and restoring each of the series of down-sampled feature maps in turn to the size of the initial feature map through bilinear interpolation and equal-size convolution, to obtain restored feature maps;
a splicing unit: for splicing the restored feature maps with the initial feature map to obtain a spliced feature map;
an up-sampling unit: for obtaining, through a softmax algorithm, the feature map whose size equals that of the original medical image and whose depth equals the number of annotation classes.
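The multi-scale convolution pooling unit follows a pool, restore, splice pattern. A minimal numpy sketch of that path is below; the pyramid sizes (1, 2, 4) are illustrative, and the per-branch convolution steps named in the claim are omitted for brevity, so only the average pooling, bilinear restoration, and depth-wise splicing are shown:

```python
import numpy as np

def avg_pool_to(x, s):
    # x: (H, W, C) -> (s, s, C) by averaging over equal blocks (assumes s divides H and W)
    H, W, C = x.shape
    return x.reshape(s, H // s, s, W // s, C).mean(axis=(1, 3))

def bilinear_resize(x, out_h, out_w):
    # Restore x: (h, w, C) to (out_h, out_w, C) by bilinear interpolation.
    h, w, _ = x.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]
    wx = (xs - x0)[None, :, None]
    top = x[y0][:, x0] * (1 - wx) + x[y0][:, x1] * wx
    bot = x[y1][:, x0] * (1 - wx) + x[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def pyramid_pool(feat, sizes=(1, 2, 4)):
    # Down-sample to each pyramid size, restore to the initial size,
    # then splice all branches with the initial feature map along depth.
    H, W, _ = feat.shape
    branches = [feat]
    for s in sizes:
        pooled = avg_pool_to(feat, s)
        branches.append(bilinear_resize(pooled, H, W))
    return np.concatenate(branches, axis=-1)
```

Splicing along the depth axis multiplies the channel count by `1 + len(sizes)`, which is why the spliced feature map is wider than the initial one before the up-sampling unit reduces it back to the number of annotation classes.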
CN202011114703.9A 2020-10-19 2020-10-19 Medical image classification and annotation method, system and server Active CN112132232B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011114703.9A CN112132232B (en) 2020-10-19 2020-10-19 Medical image classification and annotation method, system and server

Publications (2)

Publication Number Publication Date
CN112132232A true CN112132232A (en) 2020-12-25
CN112132232B CN112132232B (en) 2024-12-20

Family

ID=73853160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011114703.9A Active CN112132232B (en) 2020-10-19 2020-10-19 Medical image classification and annotation method, system and server

Country Status (1)

Country Link
CN (1) CN112132232B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989610A (en) * 2021-12-27 2022-01-28 广州思德医疗科技有限公司 Intelligent image labeling method, device and system
CN114550129A (en) * 2022-01-26 2022-05-27 江苏联合职业技术学院苏州工业园区分院 Machine learning model processing method and system based on data set
WO2022218012A1 (en) * 2021-04-13 2022-10-20 北京百度网讯科技有限公司 Feature extraction method and apparatus, device, storage medium, and program product
CN117592517A (en) * 2023-11-02 2024-02-23 新疆新华水电投资股份有限公司 Model training method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108335303A (en) * 2018-01-28 2018-07-27 浙江大学 A kind of multiple dimensioned palm bone segmentation method applied to palm X-ray
CN109378052A (en) * 2018-08-31 2019-02-22 透彻影像(北京)科技有限公司 The preprocess method and system of image labeling
US20190073447A1 (en) * 2017-09-06 2019-03-07 International Business Machines Corporation Iterative semi-automatic annotation for workload reduction in medical image labeling
CN109658422A (en) * 2018-12-04 2019-04-19 大连理工大学 A kind of retinal images blood vessel segmentation method based on multiple dimensioned deep supervision network
CN111445440A (en) * 2020-02-20 2020-07-24 上海联影智能医疗科技有限公司 Medical image analysis method, equipment and storage medium
CN111680753A (en) * 2020-06-10 2020-09-18 创新奇智(上海)科技有限公司 Data labeling method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112132232B (en) 2024-12-20

Similar Documents

Publication Publication Date Title
CN112132232A (en) Method, system and server for classification and labeling of medical images
CN108345890B (en) Image processing method, device and related equipment
CN108230339B (en) Annotation completion method for gastric cancer pathological slices based on pseudo-label iterative annotation
CN109493417B (en) Three-dimensional object reconstruction method, device, equipment and storage medium
WO2021120834A1 (en) Biometrics-based gesture recognition method and apparatus, computer device, and medium
CN111902825A (en) Polygonal object labeling system and method for training object labeling system
CN116310076A (en) Three-dimensional reconstruction method, device, equipment and storage medium based on nerve radiation field
CN108335303A (en) A kind of multiple dimensioned palm bone segmentation method applied to palm X-ray
US12165295B2 (en) Digital image inpainting utilizing a cascaded modulation inpainting neural network
US20230368339A1 (en) Object class inpainting in digital images utilizing class-specific inpainting neural networks
CN113239977B (en) Training method, device and equipment of multi-domain image conversion model and storage medium
CN111583264B (en) Training method for image segmentation network, image segmentation method, and storage medium
Li et al. Multi-view convolutional vision transformer for 3D object recognition
CN110827341A (en) Picture depth estimation method and device and storage medium
CN116051392A (en) Image restoration method and system based on deep learning interactive network
CN110717405A (en) Face feature point positioning method, device, medium and electronic equipment
Zhang et al. End-to-end learning of self-rectification and self-supervised disparity prediction for stereo vision
AU2023210623A1 (en) Panoptically guided inpainting utilizing a panoptic inpainting neural network
AU2023210622A1 (en) Learning parameters for neural networks using a semantic discriminator and an object-level discriminator
US12086965B2 (en) Image reprojection and multi-image inpainting based on geometric depth parameters
CN116486071A (en) Image blocking feature extraction method, device and storage medium
CN116258756A (en) A self-supervised monocular depth estimation method and system
CN114387294A (en) Image processing method and storage medium
CN115840507B (en) Large-screen equipment interaction method based on 3D image control
CN119067839B (en) An image BEV perspective conversion method based on negation probability and depth estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant