CN112132232A - Method, system and server for classification and labeling of medical images - Google Patents
- Publication number
- CN112132232A (Application CN202011114703.9A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a method, system and server for the classification and labeling of medical images, belonging to the technical field of medical images. The method includes: obtaining an original medical image and sending it to a manual labeling terminal; obtaining the manual annotation map returned by the manual labeling terminal and deriving the pixel boundary of the labeled region with an edge-extraction algorithm; processing the original medical image with a pre-trained fully convolutional neural network model to obtain a feature map whose width and height equal those of the original medical image and whose depth equals the number of annotation classes; comparing the feature map with the manual annotation map along the pixel boundary; leaving a pixel unchanged if its class agrees in both maps, and otherwise selecting multiple surrounding pixels on the feature map and assigning the pixel the class held by the largest number of them; and outputting the corrected feature map as the network annotation map.
Description
Technical Field
The invention belongs to the technical field of medical image processing, and in particular relates to a method, system and server for classifying and labeling medical images.
Background Art
Advanced medical imaging technologies such as ultrasound imaging and magnetic resonance imaging generate large numbers of two-dimensional and three-dimensional medical images for diagnosis. These images contain information on pathological tissues, organs and other structures, and medical staff must draw on professional knowledge to interpret them. Compared with conventional images, however, medical images suffer from poor contrast and high noise, which makes diagnosis time-consuming and error-prone. Segmenting and labeling medical images in advance, so that the target region is quantitatively separated from the background, not only reduces the workload and cost borne by medical experts during diagnosis but also lowers the error rate of the manual process, thereby improving diagnostic efficiency and accuracy.
At present, segmentation of medical images mostly relies on traditional methods based on thresholding, edge detection and the like. These methods are easily affected by noise and can hardly guarantee continuous, closed segmentation edges. Artificial-intelligence algorithms represented by neural networks have matured in image-processing fields such as autonomous driving and have become an emerging technology for medical image processing.
Summary of the Invention
To solve the above problems, embodiments of the present invention provide a method, system and server for classifying and labeling medical images. The technical solution is as follows:
In one aspect, an embodiment of the present invention provides a method for classifying and labeling medical images, the method comprising:
S101: obtaining an original medical image and sending it to a manual labeling terminal;
S102: obtaining the manual annotation map returned by the manual labeling terminal and deriving the pixel boundary of the labeled region with an edge-extraction algorithm, the manual annotation map being produced at the manual labeling terminal by manually classifying and labeling the target region on the original medical image according to a predetermined strategy;
S103: processing the original medical image with a pre-trained fully convolutional neural network model to obtain a feature map whose width and height equal those of the original medical image and whose depth equals the number of annotation classes, the index of the layer holding the largest pixel value along the depth direction being the labeled class number;
S104: comparing the feature map with the manual annotation map along the pixel boundary; if a pixel has the same class in both maps, leaving it unchanged; otherwise, selecting multiple pixels around it on the feature map and assigning it the class held by the largest number of those pixels;
S105: outputting the feature map processed in step S104 as the network annotation map.
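The pixel boundary used in S102 and S104 can be obtained from a class-label map with a very simple rule; since the text does not name a specific edge-extraction algorithm, the 4-neighbour rule below is an assumption for illustration only. It marks every pixel that touches a differently labeled pixel:

```python
import numpy as np

def label_boundary_mask(label_map):
    """Mark every pixel that has a 4-neighbour with a different class label."""
    m = np.zeros(label_map.shape, dtype=bool)
    m[:-1, :] |= label_map[:-1, :] != label_map[1:, :]   # differs from the pixel below
    m[1:, :]  |= label_map[1:, :]  != label_map[:-1, :]  # differs from the pixel above
    m[:, :-1] |= label_map[:, :-1] != label_map[:, 1:]   # differs from the pixel to the right
    m[:, 1:]  |= label_map[:, 1:]  != label_map[:, :-1]  # differs from the pixel to the left
    return m
```

The resulting boolean mask is exactly the "neighbourhood in which the labeling result may be wrong" that S104 restricts its comparison to.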
At the manual labeling terminal, the edge contour of the target region is drawn by hand, the region enclosed by the contour is filled using a connected-graph algorithm, and the filled region is manually classified and labeled; the classes of the manual annotation map correspond to those of the feature map.
Specifically, the labeling method of the manual labeling terminal in the embodiment of the present invention is:
S201: manually selecting a brush of a specific color and a thick line type, different colors representing different tissues and corresponding to the respective annotation classes;
S202: manually outlining the approximate contour of the target region with the brush;
S203: marking the pixels inside the contour with the brush color by means of a connected-graph algorithm, thereby filling the region enclosed by the contour;
S204: manually selecting a thin line type and touching up the region edge;
S205: repeating steps S201-S204 if further regions remain to be labeled; otherwise the manual annotation map is obtained.
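The connected-graph fill of S203 can be sketched as a breadth-first flood fill from a seed point inside the hand-drawn contour. This is one possible implementation, not necessarily the one used by the terminal; it assumes the contour itself is already painted in the brush color:

```python
import numpy as np
from collections import deque

def fill_contour(canvas, seed, brush_color):
    """Flood-fill from an interior seed: every pixel 4-connected to the seed
    that does not yet carry the brush color is painted with it, so the region
    enclosed by the hand-drawn contour (already in brush color) is filled."""
    h, w = canvas.shape
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if not (0 <= y < h and 0 <= x < w) or canvas[y, x] == brush_color:
            continue
        canvas[y, x] = brush_color
        queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return canvas
```

Because the fill stops only at brush-colored pixels, it reaches exactly the interior of a closed contour, which is why S204's edge touch-up matters: an unclosed contour would let the fill leak out.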
Further, the classification and labeling method provided by the present invention also includes: each original medical image carries a unique data identification code; in step S101 the original medical image and the corresponding data identification code are obtained and sent together to the manual labeling terminal; in step S102 the manual annotation map and the corresponding data identification code are obtained; and in step S105 the network annotation map and the corresponding data identification code are output. The data identification code is generated as follows:
S301: obtaining a vector Y from the original medical image X by a matrix operation with a linear projection matrix W;
wherein the original medical image X has width w and height 3h, the linear projection matrix W has size 1*w, the vector Y has size 1*3h, and the linear projection matrix W is calculated according to the following formula:
S302: rounding each element of the vector Y down to the nearest integer so that every element of Y lies in [0, 255];
S303: mapping the vector Y to a specific digital identification code ID through a hash function; if this ID is identical to an existing image ID, appending a suffix identifier to distinguish it from the existing image ID; the specific digital identification code ID, with or without the suffix identifier, is the data identification code.
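A sketch of the S301-S303 pipeline. The patent's formula for W is not reproduced above, so a simple per-row mean stands in for the 1*w projection; SHA-256, the 16-character digest length, and the `_v1`, `_v2`, ... suffix scheme are likewise assumptions:

```python
import hashlib
import numpy as np

def data_id(image, existing_ids):
    """Generate a data identification code for an image of shape (3h, w).
    The per-row mean is a hypothetical stand-in for the patent's 1*w linear
    projection W; SHA-256 stands in for the unspecified hash function."""
    y = image.mean(axis=1)                                 # S301: project X -> Y (length 3h)
    y = np.clip(np.floor(y), 0, 255).astype(np.uint8)      # S302: floor into [0, 255]
    new_id = hashlib.sha256(y.tobytes()).hexdigest()[:16]  # S303: hash Y to an ID
    if new_id in existing_ids:                             # collision: append a suffix
        n = 1
        while f"{new_id}_v{n}" in existing_ids:
            n += 1
        new_id = f"{new_id}_v{n}"
    return new_id
```

The code is deterministic for a given image, so the same image always maps to the same ID, while the suffix loop keeps IDs unique across a collection.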
Step S103 specifically includes:
S401: extracting feature information from the original medical image through multiple convolution kernels and max pooling to obtain an initial feature map;
S402: down-sampling the initial feature map to a series of sizes by average pooling and convolving it with a convolution kernel; then restoring the series of feature maps to the initial feature-map size by bilinear interpolation followed by size-preserving convolution, obtaining restored feature maps;
S403: concatenating the restored feature maps with the initial feature map to obtain a concatenated feature map;
S404: restoring the concatenated feature map to the original medical image size by bilinear interpolation followed by size-preserving convolution, and then applying the softmax algorithm to obtain a feature map whose width and height equal those of the original medical image and whose depth equals the number of annotation classes.
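The multi-scale pooling of S402-S403 can be sketched with plain NumPy. The per-branch convolution of S402 is omitted here (its kernel size is not given above), and the bin sizes are assumptions; the sketch shows only the average pooling, bilinear restoration, and depth-wise concatenation:

```python
import numpy as np

def bilinear_resize(fmap, out_h, out_w):
    """Resize an (H, W, C) feature map with bilinear interpolation."""
    h, w, _ = fmap.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]
    wx = (xs - x0)[None, :, None]
    top = fmap[y0][:, x0] * (1 - wx) + fmap[y0][:, x1] * wx
    bot = fmap[y1][:, x0] * (1 - wx) + fmap[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def avg_pool(fmap, out_h, out_w):
    """Adaptive average pooling of an (H, W, C) map to (out_h, out_w, C)."""
    h, w, c = fmap.shape
    out = np.empty((out_h, out_w, c))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = fmap[i * h // out_h:(i + 1) * h // out_h,
                             j * w // out_w:(j + 1) * w // out_w].mean(axis=(0, 1))
    return out

def pyramid_module(fmap, bin_sizes=(1, 2, 4)):
    """S402-S403 sketch: pool to a series of sizes, restore each branch by
    bilinear interpolation, and concatenate with the initial map along depth."""
    h, w, _ = fmap.shape
    branches = [fmap]
    for s in bin_sizes:
        pooled = avg_pool(fmap, s, s)                   # down-sample by average pooling
        branches.append(bilinear_resize(pooled, h, w))  # restore to initial size
    return np.concatenate(branches, axis=-1)
```

Each pooled branch summarizes the image at a coarser scale, so the concatenated map carries both local detail (the initial map) and global context (the restored branches) before the final up-sampling and softmax of S404.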
Preferably, the classification and labeling method provided by the present invention also includes: selecting original medical images for which the error between the feature map and the manual annotation map is small as high-quality samples, using the high-quality samples as input to the fully convolutional neural network model and the corresponding network annotation maps as its output, and performing optimization training of the model.
Specifically, in step S104, if the proportion of boundary pixels on which the feature map and the manual annotation map disagree is below a predetermined threshold, the error between the two maps is considered small and the corresponding original medical image is taken as a high-quality sample; the optimization training method includes: using the existing labeled medical images as a verification set and performing k-fold cross-validation on the high-quality samples.
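A sketch of the high-quality-sample filter and the k-fold split used for optimization training; the disagreement threshold is not specified above, so the value used here is an assumption:

```python
import numpy as np

def is_high_quality(pred, manual, boundary_mask, threshold=0.1):
    """A sample is 'high quality' when the share of boundary pixels on which
    the network map (pred) and the manual map disagree is below the threshold
    (the value 0.1 is an assumption; the patent leaves it unspecified)."""
    b = boundary_mask.astype(bool)
    if not b.any():
        return True
    return (pred[b] != manual[b]).mean() < threshold

def k_fold_indices(n, k=5, seed=0):
    """Plain k-fold split over n high-quality samples: k (train, validation)
    index pairs for the optimization training."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), k)
    return [(np.concatenate(folds[:i] + folds[i + 1:]), folds[i]) for i in range(k)]
```

Filtering on boundary disagreement keeps only samples where network and human already broadly agree, so the retraining loop reinforces consistent labels rather than noise.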
Specifically, in step S104, the number of selected surrounding pixels is 15-25.
In another aspect, an embodiment of the present invention also provides a system for classifying and labeling medical images, the system comprising:
User terminal: used to upload original medical images to the server and to display the network annotation maps;
Central database: used to acquire and store the original medical images uploaded by users, distribute the original medical images to the server, and store the network annotation maps and send them to the users;
Server: used to distribute the original medical images to the manual labeling terminal; to process the original medical image with the pre-trained fully convolutional neural network model to obtain a feature map whose width and height equal those of the original image and whose depth equals the number of annotation classes; to obtain the manual annotation map and process it with an edge-extraction algorithm to obtain the pixel boundary of the labeled region; to correct the feature map along the pixel boundary by comparison with the manual annotation map, thereby obtaining the network annotation map; and to upload the network annotation map to the central database;
Manual labeling terminal: used to display the original medical image and the manual annotation map, to manually label the original medical image to obtain the manual annotation map, and to upload the manual annotation map to the server.
In yet another aspect, an embodiment of the present invention also provides a server, comprising:
Data transceiver module: used to receive original medical images sent by the central database, send original medical images to the manual labeling terminal, and send network annotation maps to the central database;
Image processing module: used to process the manual annotation map with an edge-extraction algorithm to obtain the pixel boundary of the labeled region;
Fully convolutional neural network module: used to process the original medical image with the pre-trained fully convolutional neural network model to obtain a feature map whose width and height equal those of the original medical image and whose depth equals the number of annotation classes, the index of the layer holding the largest pixel value along the depth direction being the labeled class number;
Correction module: used to compare the feature map with the manual annotation map along the pixel boundary; if a pixel has the same class in both maps, it is left unchanged; otherwise, multiple pixels around it on the feature map are selected and the class held by the largest number of those pixels is assigned to it; the corrected feature map serves as the network annotation map.
The fully convolutional neural network module provided by the embodiment of the present invention comprises:
Feature-map extraction unit: used to extract feature information from the original medical image through multiple rounds of convolution and pooling to obtain an initial feature map;
Multi-scale convolution pooling unit: used to down-sample the initial feature map to a series of sizes by average pooling, convolve it with a convolution kernel, and then restore the series of feature maps to the initial feature-map size by bilinear interpolation followed by size-preserving convolution, obtaining restored feature maps;
Concatenation unit: used to concatenate the restored feature maps with the initial feature map to obtain a concatenated feature map;
Up-sampling unit: used to restore the concatenated feature map to the original medical image size by bilinear interpolation followed by size-preserving convolution, and then apply the softmax algorithm to obtain a feature map whose width and height equal those of the original medical image and whose depth equals the number of annotation classes.
In yet another aspect, an embodiment of the present invention also provides a manual labeling terminal, comprising:
Original-image display module: used to display the original medical image and support manual labeling; during manual labeling, the approximate contour of the target region is first outlined with a thick line type, and the region edge is then touched up with a thin line type;
Control module: used to process the labeled original medical image to obtain the manual annotation map;
Manual-annotation display module: used to display the manual annotation map;
Selection module: used to select the brush color and line thickness, different colors representing different tissues and corresponding to the respective annotation classes;
Communication module: used to receive original medical images and upload manual annotation maps.
The technical solutions provided by the embodiments of the present invention bring the following beneficial effects: the present invention builds a medical image segmentation and labeling method and system on a fully convolutional neural network, realizing an end-to-end mapping from the pixels of the original medical image to the pixels of the network-labeled image, with tensorized operations greatly increasing processing speed. Compared with traditional image-processing algorithms, the neural-network labeling method used in the present invention requires no hand-designed image-processing operators and, given a good training data set, can label medical images of different types and sizes. By combining the labeling output of the convolutional neural network with the manual labeling results, the network weights are optimized, which improves the segmentation and labeling results and creates positive feedback in the image-processing pipeline, continuously raising the efficiency and accuracy of medical imaging diagnosis.
Brief Description of the Drawings
FIG. 1 is a flowchart of the method for classifying and labeling medical images provided by an embodiment of the present invention;
FIG. 2 is a flowchart of the labeling method of the manual labeling terminal provided by an embodiment of the present invention;
FIG. 3 is a diagram of the operation interface of the manual labeling terminal provided by an embodiment of the present invention;
FIG. 4 is an operation flowchart of the manual labeling terminal provided by an embodiment of the present invention;
FIG. 5 is a flowchart of the method for generating the data identification code provided by an embodiment of the present invention;
FIG. 6 is a detailed flowchart of step S103;
FIG. 7 shows the specific processing of step S103;
FIG. 8 is a schematic block diagram of the system for classifying and labeling medical images provided by an embodiment of the present invention;
FIG. 9 is a schematic block diagram of the server provided by an embodiment of the present invention;
FIG. 10 is a schematic block diagram of the fully convolutional neural network module provided by an embodiment of the present invention;
FIG. 11 is a schematic block diagram of the manual labeling terminal provided by an embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings.
Embodiment 1
Referring to FIG. 1, Embodiment 1 discloses a method for classifying and labeling medical images, the method comprising:
S101: obtaining the original medical image and sending it to the manual labeling terminal; the data identification code may also be obtained at the same time.
S102: obtaining the manual annotation map (and the corresponding data identification code) returned by the manual labeling terminal, and deriving the pixel boundary of the labeled region with an edge-extraction algorithm. The manual annotation map is produced at the manual labeling terminal by manually classifying and labeling the target region on the original medical image according to a predetermined strategy. In this embodiment only the outline of the target region needs to be marked (i.e. rough labeling), so the work intensity is low and the labeling speed is very high. The pixel boundary is the neighbourhood in which the labeling result may be wrong.
S103: processing the original medical image with the pre-trained fully convolutional neural network model to obtain a feature map whose width and height equal those of the original medical image and whose depth equals the number of annotation classes. The index of the layer (corresponding to N in FIG. 7) holding the largest pixel value along the depth direction is the labeled class number.
S104: comparing the feature map with the manual annotation map pixel by pixel along the pixel boundary. If a pixel has the same class in both maps, it is left unchanged. Otherwise, multiple pixels (e.g. 20) around it on the feature map are selected, and the class held by the largest number of those pixels is assigned to it, thereby correcting the corresponding pixel of the feature map. This step corrects the feature map, which not only improves the accuracy of the network annotation map but also supports feedback optimization training. For example, suppose that among the 20 pixels around a disputed pixel, 8 belong to class 1, 5 to class 2, 3 to class 3, 2 to class 4 and 2 to class 5; class 1 then has the largest count, so the disputed pixel is assigned class 1. Specifically, in step S104 the number of selected pixels is 15-25, for example 20; it is chosen according to the complexity of the image, with more pixels for more complex images.
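The boundary comparison and majority-vote correction of S104 can be sketched as follows. The window radius standing in for "15-25 surrounding pixels" is an assumption; radius 2 gives up to 24 neighbours:

```python
import numpy as np
from collections import Counter

def correct_boundary(pred, manual, boundary_mask, radius=2):
    """S104 sketch: along the pixel boundary, a pixel whose class agrees in
    the network map (pred) and the manual map is kept; otherwise it is
    re-labelled with the majority class among the surrounding pixels of pred.
    radius=2 (a 5x5 window, up to 24 neighbours) is an assumed stand-in for
    the patent's 15-25 surrounding pixels."""
    out = pred.copy()
    h, w = pred.shape
    for y, x in zip(*np.nonzero(boundary_mask)):
        if pred[y, x] == manual[y, x]:
            continue                                   # maps agree: no correction
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        neigh = pred[y0:y1, x0:x1].ravel().tolist()
        neigh.remove(pred[y, x])                       # exclude the disputed pixel itself
        out[y, x] = Counter(neigh).most_common(1)[0][0]  # majority class wins
    return out
```

Working on a copy keeps the vote independent of corrections already applied to other boundary pixels, matching the per-pixel description above.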
S105: outputting the corrected feature map (i.e. the one processed in step S104) and the corresponding data identification code as the network annotation map; the manual annotation map may of course also be output together with it.
At the manual labeling terminal, the edge contour of the target region is drawn by hand and the region enclosed by the contour is filled using a connected-graph algorithm (for example in the same color as the contour); the filled region is then manually classified and labeled (the color itself may directly encode the class, or the class may be defined after filling). The classes of the manual annotation map correspond to those of the feature map, and manual labeling must follow a fixed strategy (e.g. one color for cancerous tissue, one for diseased tissue, one for normal tissue, defined according to the actual situation) to guarantee this correspondence.
Specifically, referring to FIGS. 2-4, the labeling method of the manual labeling terminal in this embodiment of the present invention is:
S201: Manually select a brush of a specific color and a thick line type; different colors represent different tissues and correspond to different annotation classes.
S202: Manually trace the rough outline of the target region with the brush (e.g., with a thick dashed line) and click Confirm.
S203: Using a connected-graph algorithm, mark the pixels inside the outline with the brush color, thereby filling the region enclosed by the edge contour.
S204: Manually switch to a thin line type and touch up the region edges so that the annotated region is as accurate as possible.
S205: If further regions need to be annotated, repeat steps S201-S204 (using the color assigned to each tissue); otherwise the manual annotation map is complete, and the operator clicks Next to annotate the next image.
S206: If an image needs to be modified, click Previous to review it.
S207: After all images have been annotated, package them and send them to the server.
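The connected-graph fill of step S203 can be sketched as a flood fill. The patent does not name a specific algorithm, so the 4-connected breadth-first variant, the `fill_outline` name, and the interior seed-point interface below are assumptions:

```python
from collections import deque

def fill_outline(canvas, seed, brush):
    """Fill the region enclosed by a drawn outline (step S203).

    canvas: mutable 2-D list of colour values; outline pixels already
    carry the brush colour.  seed: a (y, x) point known to lie inside
    the outline.  A 4-connected flood fill is one common
    connected-graph formulation.
    """
    h, w = len(canvas), len(canvas[0])
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        # Paint any reachable pixel not yet in the brush colour; the
        # outline itself stops the propagation.
        if 0 <= y < h and 0 <= x < w and canvas[y][x] != brush:
            canvas[y][x] = brush
            queue.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return canvas
```

Because the filled pixels take the brush colour, the result directly matches the description that the filled area uses the same colour as the edge contour.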
The annotation process of the present invention can be performed on a conventional PC, but is preferably performed on a touch-screen device to make outlining easier.
Specifically, the operator logs in with an account assigned by the server and obtains the original medical images. The manual labeling terminal provides display, switching, and storage of medical images. After one medical image has been annotated, it is temporarily stored on the terminal and the next image is annotated. Once a batch of, for example, 20 images has been roughly annotated, the terminal returns the annotation data to the server for further processing. Alternatively, the terminal may work online: each time the user finishes the rough annotation of one medical image, the terminal returns it to the server and downloads a new medical image for annotation.
Further, the classification and labeling method provided by the present invention also includes: each original medical image has a unique data identification code; in step S101, the original medical image and its data identification code are obtained and sent together to the manual labeling terminal; in step S102, the manual annotation map and the corresponding data identification code are obtained; and in step S105, the network annotation map and the corresponding data identification code are output.
Specifically, after the central database generates the data identification code, it sends the original medical image together with the code to a server, which randomly distributes the medical images to be annotated to the manual labeling terminals. The operator roughly annotates the received images and returns the rough manual annotation maps to the server. The server processes each rough annotation map to extract accurate pixel edges of the annotated features and, combined with the fully convolutional neural network model, achieves pixel-level segmentation and multi-class classification of the target tissue or organ. When annotation is complete, each server returns the annotated image (network annotation map) to the central database, which returns it to the user. The central database generates a unique data identification code for every original medical image uploaded by a user, for storage and indexing; the code travels with the corresponding image to the server. After the server finishes processing a medical image, it sends the image back to the central database, which uses the code to establish and store a one-to-one correspondence between the annotated and original medical images.
Users can retrieve original medical images, manual annotation maps, and network annotation maps from the central database as needed.
Specifically, referring to FIG. 5, the data identification code in this embodiment of the present invention is generated as follows:
S301: The original medical image X is multiplied by a linear projection matrix W to obtain a vector Y.
Here, the original medical image X has width w and height 3h, the linear projection matrix W has size 1*w, and the vector Y has size 1*3h; the linear projection matrix W is calculated according to the following formula:
[formula not reproduced in this text]
S302: Each element of the vector Y is rounded down to the nearest integer, so that every element of Y lies in [0, 255].
S303: The vector Y is mapped to a specific numeric identification code (ID) by a hash function. If this ID is identical to an existing image ID, a suffix identifier is appended to distinguish it from the existing image ID. The specific numeric ID, with the suffix identifier (when it coincides with an existing image ID) or without it (when it does not), is the data identification code.
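Steps S302-S303 can be sketched as follows. Since the projection formula for W is not reproduced above, the sketch starts from an already-computed vector Y; the choice of SHA-256, the 16-character truncation, and the "-n" suffix format are illustrative assumptions, not taken from the patent:

```python
import hashlib
import math

def make_data_id(y_vector, existing_ids):
    """Generate a data identification code from vector Y (S302-S303).

    Each element is floored and clamped into [0, 255] (S302), then the
    byte string is hashed to an ID (S303).  The hash function and
    suffix scheme are illustrative choices.
    """
    clamped = bytes(min(255, max(0, math.floor(v))) for v in y_vector)
    data_id = hashlib.sha256(clamped).hexdigest()[:16]
    # Append a suffix identifier only when the ID collides with an
    # existing image ID, as described in step S303.
    suffix = 0
    while data_id in existing_ids:
        suffix += 1
        data_id = f"{data_id.split('-')[0]}-{suffix}"
    return data_id
```

The suffix loop keeps incrementing until a free ID is found, so repeated uploads of identical images still receive distinct codes.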
Referring to FIGS. 6 and 7, step S103 specifically includes:
S401: Extract the feature information of the original medical image (specifically 512*512*3) through multiple convolution and pooling operations; as the spatial size shrinks, the depth of the feature map grows, so that the deep feature information contained in the original image is extracted and the initial feature map is obtained.
S402: Downsample the initial feature map by average pooling to a series of sizes (e.g., 8*8*256, 4*4*256, 2*2*256, and 1*1*256) and process each with a convolution kernel (the results again being 8*8*256, 4*4*256, 2*2*256, and 1*1*256). Then restore these feature maps to the size of the initial feature map by bilinear interpolation followed by size-preserving convolution, obtaining the restored feature maps.
S403: Concatenate the restored feature maps with the initial feature map to obtain the concatenated feature map, consistent with conventional techniques.
S404: Restore the concatenated feature map to the size of the original medical image by bilinear interpolation followed by size-preserving convolution, and then apply the softmax algorithm to obtain a feature map whose spatial size equals that of the original medical image and whose depth equals the number of annotation types (512*512*N, where N is the number of annotation types).
Specifically, step S401 extracts the feature information of the original medical image through a sequence of convolution kernels and max pooling operations to obtain the initial feature map; one concrete sequence is convolution 3*3*64, max pooling 2*2, convolution 3*3*256, max pooling 2*2, convolution 3*3*512.
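The per-pixel class assignment applied to the S404 output (the layer index of the largest value along the depth direction is the class number, as stated for the fully convolutional neural network module) can be sketched as follows; the function name and the nested-list representation of the H x W x N map are assumptions:

```python
import math

def class_map_from_scores(score_map):
    """Per-pixel class numbers from an H x W x N score map (step S404).

    score_map[y][x] holds the N per-class scores of one pixel (the
    depth direction of the feature map).  After softmax, the index of
    the largest value along the depth is the annotated class number;
    softmax is monotone, so it does not change which index wins.
    """
    def softmax(scores):
        m = max(scores)                        # shift for numerical stability
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    def argmax(values):
        return max(range(len(values)), key=values.__getitem__)

    return [[argmax(softmax(px)) for px in row] for row in score_map]
```

Applied to the 512*512*N map of step S404, this yields the 512*512 class-number image that is then compared against the manual annotation map in step S104.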
Preferably, the classification and labeling method provided by the present invention further includes: selecting original medical images for which the error between the feature map and the manual annotation map is small as high-quality samples; using each high-quality sample as the input of the fully convolutional neural network model and its network annotation map as the output; and performing optimization training on the model. Such optimization training is conventional in the field, so a detailed description is omitted in this embodiment.
Specifically, in step S104, if the proportion of pixels on the pixel boundary (the pixel set) whose results differ between the feature map and the manual annotation map is smaller than a predetermined threshold (set according to actual needs, e.g., 1%), the error between the feature map and the manual annotation map is considered small and the corresponding original medical image is a high-quality sample. The optimization training method includes: using the existing annotated medical images (stored in the central database for training) as the validation set and performing k-fold cross-validation on the high-quality samples, which improves the mean intersection-over-union between the model's output and the actual annotation map of the image.
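The high-quality-sample test of this paragraph can be sketched as follows, assuming the boundary pixel set from the edge extraction step is available as a list of coordinates; the function name and interface are illustrative:

```python
def is_high_quality(feature_classes, manual_classes, boundary_pixels,
                    threshold=0.01):
    """Decide whether an image is a high-quality training sample.

    Among the pixels on the annotated region's boundary, count those
    whose class differs between the feature map and the manual
    annotation map; if the disagreeing fraction is below the threshold
    (1% in the example), the sample qualifies for feedback training.
    """
    disagreements = sum(
        1 for (y, x) in boundary_pixels
        if feature_classes[y][x] != manual_classes[y][x]
    )
    return disagreements / len(boundary_pixels) < threshold
```

Only images that pass this check would be fed back into the optimization training described above.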
Embodiment 2
Referring to FIG. 8, Embodiment 2 discloses a system for classifying and labeling medical images, comprising:
User: uploads the original medical images to the server and can display the network annotation map.
Central database: acquires and stores the original medical images uploaded by users, distributes them to the servers, stores the network annotation maps, and sends the network annotation maps to the users.
Server: distributes the original medical images to the manual labeling terminals; processes each original medical image through a pre-trained fully convolutional neural network model to obtain a feature map whose spatial size equals that of the original image and whose depth equals the number of annotation types; obtains the manual annotation map and processes it with an edge extraction algorithm to obtain the pixel boundary of the annotated region; at the pixel boundary, corrects the feature map by comparison with the manual annotation map (see step S104 for details) to obtain the network annotation map; and uploads the network annotation map to the central database.
Manual labeling terminal: displays the original medical image and the manual annotation map, produces the manual annotation map by manually annotating the original medical image, and uploads it to the server.
Users such as hospitals, medical research institutes, and medical companies upload their medical images to the central database after acquiring them.
The central database may be connected to one or more servers depending on the actual number of medical images, automatically distributing the original images among them for processing. When the number of images to process is small, the server and the central database may be merged into one. After receiving the original medical images, a server automatically distributes them to one or more manual labeling terminals for rough annotation. The central database can be hosted on the user's local area network, in cloud storage, etc., and users can upload data over wireless networks, Ethernet, USB, and other communication channels. The central database assigns each user a dedicated account and password with read, write, and modify permissions to ensure data security. After receiving the medical images uploaded by a user, it generates a unique digital identification code for each image, binds it to that user, and distributes the original image and its code to a server.
The server may run on various computing platforms such as a physical server, a cloud server, or a personal computer. It assigns an account and password to each manual labeling terminal and, after receiving the original medical images and their digital identification codes, distributes them to the terminals, where the manual labeling units perform the rough annotation.
The manual labeling terminal can run on Windows, Android, iOS, and other operating systems and supports cross-platform data synchronization. When the operator has finished the rough annotation of an original image, the manual annotation map can be returned to the server.
The manual labeling terminals return the manual annotation maps to the server, and the server transmits the resulting network annotation maps back to the central database for users to download. K, M, N, etc. in FIG. 8 denote unspecified quantities. In practice, servers can be added or removed according to the actual number of images, with every server running the same instructions and the same fully convolutional neural network model. The central database and server here denote functional units; in practice they may also be integrated into one. The manual labeling terminals may be desktop computers, laptops, tablets, mobile phones, etc., running Windows, Android, iOS, and other operating systems, and their number can be scaled up or down as required.
Embodiment 3
Referring to FIG. 9, Embodiment 3 discloses a server comprising:
Data transceiver module: receives and stores the original medical images sent by the central database, sends original medical images to the manual labeling terminals, and sends network annotation maps to the central database; this is a common structure.
Image processing module: processes the manual annotation map with an edge extraction algorithm to obtain the pixel boundary of the annotated region.
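The patent does not name the edge extraction algorithm used by the image processing module; one minimal sketch marks a pixel as a boundary pixel when any 4-connected neighbour carries a different label (the function name and label-grid representation are assumptions):

```python
def boundary_pixels(label_map):
    """Extract the pixel boundary of annotated regions.

    label_map: 2-D list of per-pixel class labels from the manual
    annotation map.  A pixel belongs to the boundary if at least one
    4-connected neighbour has a different label.
    """
    h, w = len(label_map), len(label_map[0])
    edges = set()
    for y in range(h):
        for x in range(w):
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and \
                        label_map[ny][nx] != label_map[y][x]:
                    edges.add((y, x))   # neighbouring label differs
                    break
    return edges
```

The resulting coordinate set is exactly the pixel boundary on which the correction module then compares the feature map against the manual annotation map.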
Fully convolutional neural network module: processes the original medical image through the pre-trained fully convolutional neural network model to obtain a feature map whose spatial size equals that of the original medical image and whose depth equals the number of annotation types; for each pixel, the index of the layer holding the largest value along the depth direction is the annotated category number.
Correction module: compares the feature map with the manual annotation map at the pixel boundary; if a pixel has the same result in both maps, it is left unchanged; otherwise, several pixels surrounding it on the feature map are selected and the most frequent category among them is assigned to the pixel, correcting the corresponding pixel in the feature map; the corrected feature map (after the processing of step S104) serves as the network annotation map.
Storage module: stores data.
Further, the server also includes a feedback module that selects original medical images with a small error between the feature map and the manual annotation map as high-quality samples and performs optimization training on the fully convolutional neural network model, using each high-quality sample as the model's input and the corresponding network annotation map as its output.
Referring to FIG. 10, the fully convolutional neural network module provided by this embodiment of the present invention comprises:
Feature map extraction unit: extracts the feature information of the original medical image through multiple convolution and pooling operations to obtain the initial feature map; specifically, through a sequence of convolution kernels and max pooling.
Multi-scale convolution and pooling unit: downsamples the initial feature map to a series of sizes by average pooling, processes each with a convolution kernel, and then restores each to the size of the initial feature map by bilinear interpolation followed by size-preserving convolution, obtaining the restored feature maps.
Concatenation unit: concatenates the restored feature maps with the initial feature map to obtain the concatenated feature map.
Upsampling unit: restores the concatenated feature map to the size of the original medical image by bilinear interpolation followed by size-preserving convolution, and then applies the softmax algorithm to obtain a feature map whose spatial size equals that of the original medical image and whose depth equals the number of annotation types.
Embodiment 4
Referring to FIG. 11, Embodiment 4 discloses a manual labeling terminal comprising: an original image display module, which displays the original medical image and supports manual annotation; when annotating, a thick line type is first used to trace the rough outline of the target region, and a thin line type is then used to touch up the region edges.
Control module: processes the annotated original medical image to obtain the manual annotation map, and controls the whole manual labeling terminal, e.g., by outputting display data to the original image display module and the manual annotation map display module.
Manual annotation map display module: displays the manual annotation map.
Selection module: selects the brush color and line thickness; different colors represent different tissues and correspond to different annotation classes. Specifically, multiple line types (at least two) and multiple colors (defined according to a predetermined strategy) can be selected.
Communication module: receives original medical images and uploads manual annotation maps.
Specifically, the original image display module, the manual annotation map display module, and the selection module may be implemented on a single touch display screen.
Specifically, the original image display module generates a blank canvas of the same size as the original image. The selection module offers brushes of different colors and lines of different thicknesses, and may further include a Confirm button, a Next button, a Previous button, and a Finish button. After the operator completes the rough annotation of an original image and clicks Confirm, the annotation is saved to the annotation map buffer and the filled annotation map is displayed on the manual annotation map display module. Clicking Next jumps to the next unannotated original image; clicking Previous brings back an image that has already been annotated, and if it is modified during review, the changes are saved to the annotation map buffer. When the original image display area is blank, the server currently has no task assigned, and clicking Finish completes the annotation.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within its scope of protection.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011114703.9A CN112132232B (en) | 2020-10-19 | 2020-10-19 | Medical image classification and annotation method, system and server |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112132232A true CN112132232A (en) | 2020-12-25 |
CN112132232B CN112132232B (en) | 2024-12-20 |