
CN112070726B - Deep learning-based hydatidiform mole slice image processing method and device - Google Patents

Deep learning-based hydatidiform mole slice image processing method and device

Info

Publication number
CN112070726B
Authority
CN
China
Prior art keywords
slice
edema
image
label
villus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010828700.5A
Other languages
Chinese (zh)
Other versions
CN112070726A (en)
Inventor
师丽
朱承泽
王松伟
王治忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202010828700.5A
Publication of CN112070726A
Application granted
Publication of CN112070726B
Active (current legal status)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • G06T2207/10061Microscopic image from scanning electron microscope
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30044Fetus; Embryo

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep-learning-based method and device for processing hydatidiform mole slice images. It belongs to the field of medical image detection of hydatidiform mole within medical imaging technology, and addresses the low efficiency of clinical diagnostic detection of hydatidiform mole in the prior art. The invention acquires a scanned image of a hydatidiform mole slice under a microscope, inputs the scan into a villus network and an edema network to obtain a slice villus label map and a slice edema label map, and finally derives a slice edema distribution map. Through the villus network and the edema network, the two pathological features of villi and edema are processed to produce a slice edema distribution map, which is displayed to the clinician so that the distribution of edema regions in the slice can be grasped at a glance.

Description

A method and device for processing hydatidiform mole slice images based on deep learning

Technical Field

The present invention belongs to the field of medical imaging technology and relates to medical imaging detection of hydatidiform mole, in particular to a method for processing hydatidiform mole slice images using deep learning.

Background Art

Hydatidiform mole (HM) refers to a grape-like cluster of vesicles formed from the placenta after pregnancy. Pregnancies with hydatidiform mole mostly end in fetal death or malformation, and full-term births are extremely rare. In general, 10% to 20% of hydatidiform moles develop into invasive (malignant) mole or choriocarcinoma; these cancers can metastasize through the bloodstream and, if not treated in time, threaten the patient's life. Early pathological diagnosis of hydatidiform mole is therefore of great significance for every affected pregnant woman.

In the prior art, there are two main ways to detect and screen for hydatidiform mole: the first is manual observation of slices under a microscope, and the second is detection of genes related to hydatidiform mole.

In the first approach, pathologists generally observe multiple slices from a patient under a microscope at 5×10 and 10×10 magnification, then make a comprehensive diagnosis based on experience and the morphology of the tissue cells in the slices. Hydatidiform mole is diagnosed mainly by observing the characteristics of the villi in the slices; the principal pathological features are villous trophoblastic hyperplasia and stromal edema inside the villi.

Pathologists in gynecological hospitals spend a large amount of time every day diagnosing conditions such as hydatidiform mole, which carry a lower risk than tumors; most of these patients turn out not to have the disease, yet the work consumes much of the pathologists' time.

However, there are currently only about 15,000 pathologists in China, leaving a large talent gap and low detection efficiency. In addition, because clinical diagnosis of hydatidiform mole relies mainly on manual screening of slices by physicians, accuracy is hard to guarantee, especially for hydatidiform moles before 12 weeks: the mole has not yet matured, the lesions are not fully developed, and the tissue morphology closely resembles that of normal slices and is hard to distinguish, so the clinical diagnostic accuracy is extremely low, below 50%.

In the second approach, the invention patent with application number 201310027715.1, titled Gene chip, detection reagent and kit for detecting the NLRP7 gene, discloses that detecting NLRP7 gene SNPs related to hydatidiform mole is of great significance for the clinical diagnosis of hydatidiform mole and for early screening and early preventive intervention in high-risk populations, and can be widely used for efficient clinical screening of high-risk populations. That patent builds a gene chip detection system for screening populations at high risk for NLRP7 gene polymorphisms related to hydatidiform mole: the gene chip comprises a solid-phase carrier and oligonucleotide probes synthesized on the carrier; the detection reagent comprises the gene chip and 18 pairs of PCR primers for amplifying the SNPs in a sample; and the kit comprises the detection reagent, a negative control sample and a positive control sample. The patent can quickly and accurately detect the relevant NLRP7 SNP sites in clinical samples, which is of great significance for the clinical diagnosis of hydatidiform mole and for early screening and preventive intervention in high-risk populations.

Although gene-based screening of hydatidiform mole has its place, screening by detecting the NLRP7 gene adds kit-based testing steps and lengthens the overall testing cycle, and it also involves the production of chips, reagents and kits, which greatly raises the screening cost; its scope of application in the clinical diagnosis of hydatidiform mole is therefore very limited, and it is not easy to promote or apply.

Given the small number of pathologists, the low efficiency and low accuracy of manual slice screening, and the high cost and long cycle of gene-based screening, it is necessary to develop a complete set of methods and devices covering everything from automatic image acquisition under the microscope to the generation of distribution maps of pathological features such as edema and hyperplasia, so as to help clinicians screen cases more efficiently.

Clinical pathologists judge whether a case is a hydatidiform mole mainly from pathological features such as villous stromal edema and diffuse trophoblastic hyperplasia at the villous margins, combined with information such as the patient's duration of amenorrhea and pregnancy history; among these, villous stromal edema is the more critical diagnostic basis.

Summary of the Invention

The purpose of the present invention is to provide a deep-learning-based method and device for processing hydatidiform mole slice images which, by obtaining the distribution of villi and edema in the hydatidiform mole, solves the prior-art problem of inefficient clinical diagnostic detection of hydatidiform mole caused by manual observation of the pathological features of sliced tissue.

The technical solution adopted by the present invention is as follows.

A deep-learning-based method for processing hydatidiform mole slice images comprises the following steps:

S1: place the HE-stained hydatidiform mole slice on the microscope stage, focus the microscope, and acquire a scanned image of the hydatidiform mole slice under the microscope;

S2: cut the scanned slice image into blocks to obtain scan blocks one, input scan blocks one into the villus network a-net to obtain block villus label maps of the hydatidiform mole slice, and fuse all block villus label maps into a slice villus label map;

cut the scanned slice image into blocks to obtain scan blocks two, input scan blocks two into the edema network b-net to obtain block edema label maps of the hydatidiform mole slice, and fuse all block edema label maps into a slice edema label map;

S3: obtain the slice edema distribution map from the slice edema label map.

Optionally, in step S2, the villus network a-net and the edema network b-net are both UNet image segmentation networks comprising convolutional layers, pooling layers, up-convolution layers and concatenation layers. Scan blocks one pass through the villus network a-net to yield block villus label maps, and scan blocks two pass through the edema network b-net to yield block edema label maps.

Optionally, in step S2, cutting the slice image into blocks means cutting the slice at an interval s into blocks of edge length size, so that two adjacent blocks above, below, to the left or to the right overlap over an area of size×(size-s); the edge length size1 of scan blocks one is 3000×3000 to 18000×18000 pixels, and the edge length size2 of scan blocks two is 1500×1500 to 9000×9000 pixels.

Optionally, in step S2, fusing the block edema label maps into the slice edema label map means stitching all block edema label maps back to the positions their blocks occupied in the slice, with the overlapping parts of adjacent blocks fitted as the mean pixel value of the overlapping portions of the multiple block images.

Optionally, in step S3, the villus region is obtained from the slice villus label map, and edema regions lying outside the villus region are removed from the slice edema label map to obtain the slice edema distribution map.

The villus and non-villus regions are the regions of the slice villus label map with pixel values 0 and 255 respectively, and the edema and non-edema regions are the regions of the slice edema label map with pixel values 0 and 255 respectively; the pixel values of edema regions outside the corresponding villus regions in the slice edema label map are set to 255, yielding the slice edema distribution map.

Optionally, in step S3, the resulting distribution map can be output as a visualization. From the hydatidiform mole slice image and the slice edema distribution map, an edema visualization slice image, a hyperplasia visualization slice image and edema region screenshots are obtained, and the edema visualization slice image, the edema region screenshots and the hydatidiform mole slice classification result obtained in S5 are displayed and output.

The edema visualization slice image is obtained by marking the edema regions of the hydatidiform mole slice image in a first color.

The edema region screenshots are obtained by cutting the slice edema distribution map into blocks of edge length size; the block images corresponding to the 1 to 10 blocks with the smallest mean pixel value are taken as the edema region screenshots.
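As a concrete illustration of how the screenshots could be selected, the sketch below re-tiles the slice edema distribution map and returns the k blocks with the smallest mean pixel value, i.e. those covering the most edema under the 0/255 convention. This is a minimal sketch and not part of the original disclosure: the function name, the NumPy representation and the non-overlapping grid are assumptions made for the example.

```python
import numpy as np

def edema_screenshots(dist_map: np.ndarray, size: int, k: int = 5):
    """Return the k blocks of the edema distribution map with the smallest
    mean pixel value, i.e. the blocks covering the most edema (pixel value 0).

    dist_map: 2-D slice edema distribution map (0 = edema, 255 = background).
    size: block edge length in pixels; blocks are taken on a non-overlapping
    grid here for simplicity, whereas the text above allows any interval.
    """
    h, w = dist_map.shape
    blocks = []
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            block = dist_map[r:r + size, c:c + size]
            blocks.append((block.mean(), r, c, block))
    blocks.sort(key=lambda b: b[0])  # smallest mean first = most edema first
    return blocks[:k]
```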

A deep-learning-based hydatidiform mole slice image processing device comprises:

a microscope for magnifying the fine structures of the hydatidiform mole slice;

a slice scanning module for acquiring a scanned image of the hydatidiform mole slice under the microscope;

a slice villus label map generation module for cutting the scanned slice image into blocks to obtain scan blocks one, inputting scan blocks one into the villus network a-net to obtain block villus label maps of the hydatidiform mole slice, and fusing all block villus label maps into a slice villus label map;

a slice edema label map generation module for cutting the scanned slice image into blocks to obtain scan blocks two, inputting scan blocks two into the edema network b-net to obtain block edema label maps of the hydatidiform mole slice, and fusing all block edema label maps into a slice edema label map;

a distribution map generation module for obtaining the slice edema distribution map from the slice villus label map and the slice edema label map.

By adopting the above technical solution, the present invention provides the following beneficial effects.

The present invention combines deep learning with pathological recognition of hydatidiform mole slices: the scanned hydatidiform mole slice image is fed into the villus network a-net and the edema network b-net to obtain the slice edema distribution map, which can be output directly to the clinician as an aid. From the slice edema distribution map the doctor can grasp the edema distribution at a glance, reducing excessive reliance on the pathologist's subjective analysis during detection.

In addition, the villus network a-net and the edema network b-net undergo deep training; a more mature network is obtained through a specific training procedure, which improves the accuracy of the slice edema distribution map.

Brief Description of the Drawings

FIG. 1 is an example of villus annotation on a hydatidiform mole slice scan used by the present invention;

FIG. 2 is an example of edema annotation on a hydatidiform mole slice scan used by the present invention;

FIG. 3 shows a hydatidiform mole slice scan input to the edema network disclosed in the present invention, the manually annotated slice edema label map, and the slice edema label map predicted by the network;

FIG. 4 is a flowchart of obtaining the annotated training data set of hydatidiform mole slice scans disclosed in the present invention;

FIG. 5 is the overall flowchart of hydatidiform mole slice image processing disclosed in the present invention;

FIG. 6 is the network structure diagram of the villus network a-net; the edema network b-net has the same structure as the villus network a-net;

FIG. 7 shows the digital microscope slice scanning device.

Detailed Description

All features disclosed in this specification, and the steps of all methods or processes disclosed, may be combined in any manner, except for mutually exclusive features and/or steps.

Embodiment 1

A deep-learning-based method for processing hydatidiform mole slice images comprises the following steps.

S1: place the hydatidiform mole slice on the microscope stage, focus the microscope, and acquire a scanned image of the hydatidiform mole slice under the microscope.

In this step, the hydatidiform mole slice is first HE-stained and then placed on the stage of a digital microscope (a digital microscope is used here); next, the autofocus module focuses automatically so that the slice appears sharply in the microscope's field of view; finally, the slice scanning module acquires the scanned image of the hydatidiform mole slice under the microscope.

S2: because the acquired scanned slice image is large while existing networks impose requirements on the size of their input images, the images fed to the networks must be partitioned, so the scanned hydatidiform mole slice image is cut into several groups of blocks suited to the different networks.

Cut the scanned slice image into blocks to obtain scan blocks one, input scan blocks one into the villus network a-net to obtain block villus label maps of the hydatidiform mole slice, and fuse all block villus label maps into a slice villus label map.

Cut the scanned slice image into blocks to obtain scan blocks two, input scan blocks two into the edema network b-net to obtain block edema label maps of the hydatidiform mole slice, and fuse all block edema label maps into a slice edema label map.

Cutting the hydatidiform mole slice image into blocks means cutting the slice at an interval s into blocks of edge length size, so that two adjacent blocks above, below, to the left or to the right overlap over an area of size×(size-s); the edge length size1 of scan blocks one is 3000×3000 to 18000×18000 pixels, and the edge length size2 of scan blocks two is 1500×1500 to 9000×9000 pixels.
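As an illustration of this tiling step, the following sketch cuts a slide scan held as a NumPy array into overlapping square blocks with stride s. It is a minimal sketch rather than the disclosed implementation: the function name tile_slide, the array-based slide representation and the skipping of edge remainders are assumptions made for the example.

```python
import numpy as np

def tile_slide(slide: np.ndarray, size: int, stride: int):
    """Cut an H x W x 3 slide scan into overlapping size x size blocks.

    Adjacent blocks overlap by (size - stride) pixels along each axis,
    mirroring the size x (size - s) overlap described above; remainders at
    the right and bottom edges smaller than one block are skipped here.
    Returns a list of (row, col, block) tuples, where (row, col) is the
    block's top-left corner in the original slide.
    """
    h, w = slide.shape[:2]
    blocks = []
    for r in range(0, max(h - size, 0) + 1, stride):
        for c in range(0, max(w - size, 0) + 1, stride):
            blocks.append((r, c, slide[r:r + size, c:c + size]))
    return blocks
```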

Fusing the block label maps into the slice label map means stitching all block label maps back to the positions their blocks occupied in the slice, with the overlapping parts of adjacent blocks fitted as the mean pixel value of the overlapping portions of the multiple block images.
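A minimal sketch of this fusion step, assuming single-channel block label maps and the (row, col) coordinates produced by the tiling sketch above; overlapping pixels are averaged by accumulating a running sum and a coverage count.

```python
import numpy as np

def fuse_tiles(label_blocks, slide_shape):
    """Stitch per-block label maps back into a slide-sized label map.

    label_blocks: list of (row, col, block_label) with block_label a 2-D array.
    slide_shape: (H, W) of the original slide scan.
    Pixels covered by several blocks are set to the mean value of all blocks
    covering them, as described for the label-map fusion above.
    """
    acc = np.zeros(slide_shape, dtype=np.float64)
    cnt = np.zeros(slide_shape, dtype=np.float64)
    for r, c, block in label_blocks:
        h, w = block.shape
        acc[r:r + h, c:c + w] += block
        cnt[r:r + h, c:c + w] += 1
    covered = cnt > 0
    cnt[~covered] = 1                    # avoid division by zero
    fused = (acc / cnt).astype(np.uint8)
    fused[~covered] = 255                # uncovered pixels treated as background
    return fused
```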

The villus network a-net and the edema network b-net described above have the same network structure, each comprising convolutional layers, pooling layers, up-convolution layers and concatenation layers.

The first (upper) part of the network is similar to a conventional convolutional network and consists of convolutional and pooling layers; each pooling layer introduces one more scale, giving five scales in total including the original image scale, and this part mainly extracts image features. The second part, shown in the lower half of FIG. 6, is the upsampling part: after each upsampling step the feature map is concatenated with the encoder feature map of the same scale and channel count, after that map has been cropped to the same size. This makes full use of multi-scale image feature information and gives good recognition of image features at multiple scales.
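The structure described here matches the standard U-Net layout. The PyTorch sketch below is an illustrative reconstruction under that assumption and is not a literal reproduction of the disclosed network: it uses padded convolutions instead of cropping before concatenation, and the channel widths and class count are assumed values.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # two 3x3 convolutions with ReLU, as in a standard U-Net stage
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    """Five-scale U-Net: four pooling steps down, four up-convolutions back up,
    with skip connections concatenating encoder features of the same scale."""

    def __init__(self, n_classes=2, widths=(64, 128, 256, 512, 1024)):
        super().__init__()
        self.downs = nn.ModuleList()
        c_prev = 3
        for w in widths:
            self.downs.append(conv_block(c_prev, w))
            c_prev = w
        self.pool = nn.MaxPool2d(2)
        self.ups, self.dec = nn.ModuleList(), nn.ModuleList()
        for w_hi, w_lo in zip(widths[::-1][:-1], widths[::-1][1:]):
            self.ups.append(nn.ConvTranspose2d(w_hi, w_lo, 2, stride=2))
            self.dec.append(conv_block(w_hi, w_lo))  # skip (w_lo) + upsampled (w_lo)
        self.head = nn.Conv2d(widths[0], n_classes, 1)

    def forward(self, x):
        skips = []
        for i, block in enumerate(self.downs):
            x = block(x)
            if i < len(self.downs) - 1:  # keep a skip at every scale but the deepest
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(self.ups, self.dec, reversed(skips)):
            x = dec(torch.cat([skip, up(x)], dim=1))
        return self.head(x)  # per-pixel class scores
```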

S3: obtain the slice edema distribution map from the slice edema label map.

The slice edema distribution map can be generated in several ways; the edema distribution can be marked directly from the edema regions of the slice edema label map to give the slice edema distribution map.

Because edema usually appears together with villi, to improve detection accuracy this embodiment also provides another way of generating the map:

The villus region of the slice is obtained from the slice villus label map. The villus and non-villus regions are the regions of the slice villus label map with pixel values 0 and 255 respectively, and the edema and non-edema regions are the regions of the slice edema label map with pixel values 0 and 255 respectively. The pixel values of edema regions outside the corresponding villus regions in the slice edema label map are set to 255, yielding the slice edema distribution map; that is, edema regions falling outside the corresponding villus regions are removed from the slice edema label map.
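A sketch of this masking step under the 0/255 convention just described (0 marks the region of interest, 255 the background); the function and array names are illustrative.

```python
import numpy as np

def edema_distribution(villus_label: np.ndarray, edema_label: np.ndarray) -> np.ndarray:
    """Keep only edema that falls inside villus regions.

    Both inputs follow the 0/255 convention: 0 marks the villus (or edema)
    region, 255 marks everything else. Edema pixels lying outside any villus
    region are reset to 255 (non-edema), giving the slice edema distribution map.
    """
    dist = edema_label.copy()
    dist[(villus_label == 255) & (edema_label == 0)] = 255
    return dist
```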

Both the villus network a-net and the edema network b-net described above need to be trained, and this embodiment also provides a network training method.

Training requires training images that can be fed into the networks.

This embodiment draws on the morphological features of hydatidiform mole actually used in diagnosis. After nearly a year of annotation training, slice annotation and annotation review, annotation results for 157 typical hydatidiform mole slice scans were obtained. Each slice requires two kinds of annotation: villus annotation outlines the villus regions of the slice, and edema annotation outlines the edematous parts within those villus regions; annotation examples are shown in FIGS. 1 and 2.

The current bottleneck in medical imaging is the severe shortage of high-quality annotated data sets, and there is as yet no good annotated data set for hydatidiform mole anywhere in the world. The recognition rate of a deep network rests on a good data set, so standardized, reasonable and strict annotation is needed first. The training data set acquisition process is shown in FIG. 4 and comprises the following steps.

Step 101: establish a reliable annotation scheme.

Several proposed annotation schemes were reviewed and confirmed by multiple professional clinicians, and one scheme was finally adopted, namely the villus-then-edema annotation mentioned above; annotation examples are shown in FIGS. 1 and 2. Villus annotation is the first screening step, a preliminary extraction; edema annotation extracts the edema regions, i.e. it outlines villi with obvious edema and cistern-like stroma.

Step 102: train the annotators.

Several annotators with relevant medical knowledge were trained for several weeks, and the most suitable annotators were selected for subsequent annotation.

Step 103: preliminary annotation by the annotators.

Following the annotation rules, each annotator was responsible for about 80 scanned slices and produced the edema and villus annotations used for network training.

Step 104: review by pathologists.

The preliminary annotations from step 103 were strictly reviewed by clinicians with long-term clinical experience, and the results were fed back to the annotators.

Step 105: detailed annotation by the annotators.

The review results were analyzed, the annotations revised and the annotation standards unified among the annotators, yielding the final annotated data set.

Step 106: automatic annotation by the network.

The network trained on this well-annotated data set performs semantic segmentation on newly added slice scans to produce new annotations; that is, the network automatically annotates new slice scans, expanding the data set so that a more robust network can then be trained.
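Step 106 amounts to pseudo-labeling. The sketch below is one way it could look, assuming a trained segmentation model with a PyTorch-style interface running on CPU, the tile_slide and fuse_tiles helpers sketched above, and the convention that class index 1 marks the annotated (villus or edema) region; none of these details are taken from the original disclosure.

```python
import numpy as np
import torch

@torch.no_grad()
def auto_annotate(slide: np.ndarray, model, size: int, stride: int) -> np.ndarray:
    """Generate a pseudo-label map for a new slide scan with a trained network.

    The resulting 0/255 label map can be reviewed by a pathologist and added
    to the training set, which is the data-expansion idea of step 106.
    """
    model.eval()
    labelled_blocks = []
    for r, c, block in tile_slide(slide, size, stride):
        x = torch.from_numpy(block).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        pred = model(x).argmax(dim=1)[0].numpy()          # 1 = region of interest (assumed)
        labelled_blocks.append((r, c, (1 - pred) * 255))  # class 1 -> 0, class 0 -> 255
    return fuse_tiles(labelled_blocks, slide.shape[:2])
```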

The villus network a-net is trained as follows:

the training images are cut into training blocks of edge length size1 and fed into the villus network a-net; the target segmentation output for each training block is derived from the villus annotation file, and training finally yields the villus network a-net.

The edema network b-net is trained as follows:

the training images are cut into training blocks of edge length size2, and the blocks containing villus regions are fed into the edema network b-net; the block edema segmentation maps derived from the blocks' edema annotation files serve as the target output, the slice edema distribution map is obtained, and training finally yields the edema network b-net.
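A minimal training-loop sketch that would fit either network, assuming a PyTorch DataLoader yielding (block, mask) pairs built from the annotation files, with masks using 1 for the annotated region and 0 elsewhere; the hyperparameters are illustrative, not values from the original disclosure.

```python
import torch
import torch.nn as nn

def train_segmentation_net(model, loader, epochs=50, lr=1e-4, device="cuda"):
    """Train a-net or b-net on (block, mask) pairs.

    loader yields image blocks of shape (B, 3, H, W) and integer masks of
    shape (B, H, W); these shapes and the loss choice are assumptions about
    the data pipeline rather than details taken from the patent text.
    """
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        model.train()
        running = 0.0
        for blocks, masks in loader:
            blocks, masks = blocks.to(device), masks.to(device)
            opt.zero_grad()
            loss = loss_fn(model(blocks), masks)
            loss.backward()
            opt.step()
            running += loss.item()
        print(f"epoch {epoch + 1}: mean loss {running / max(len(loader), 1):.4f}")
    return model
```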

Embodiment 2

This embodiment also provides a deep-learning-based hydatidiform mole slice image processing device, comprising:

a microscope for magnifying the fine structures of the hydatidiform mole slice;

a slice scanning module for acquiring a scanned image of the hydatidiform mole slice under the microscope;

a slice edema label map generation module for cutting the scanned slice image into blocks to obtain scan blocks two, inputting scan blocks two into the edema network b-net to obtain block edema label maps of the hydatidiform mole slice, and fusing all block edema label maps into a slice edema label map;

a distribution map generation module for obtaining the slice edema distribution map from the slice edema label map.

In addition, the device may further comprise:

a slice villus label map generation module for cutting the scanned slice image into blocks to obtain scan blocks one, inputting scan blocks one into the villus network a-net to obtain block villus label maps of the hydatidiform mole slice, and fusing all block villus label maps into a slice villus label map.

Claims (10)

1. A deep-learning-based hydatidiform mole slice image processing method, characterized by comprising the following steps:
S1, cutting a hydatidiform mole slice image into blocks to obtain blocks one, inputting blocks one into image segmentation network 1 to obtain block villus label maps of the hydatidiform mole slice, and fusing all the block villus label maps to obtain a slice villus label map;
S2, cutting the hydatidiform mole slice image into blocks to obtain blocks two, inputting blocks two into image segmentation network 2 to obtain block edema label maps of the hydatidiform mole slice, and fusing all the block edema label maps to obtain a slice edema label map;
S3, obtaining a villus region according to the slice villus label map, removing edema regions outside the villus region from the slice edema label map, and finally obtaining a slice edema distribution map;
in step S1, fusing the block villus label maps to obtain the slice villus label map means splicing all the block villus label maps at the positions the original blocks occupied in the slice, with the overlapping parts of adjacent blocks above, below, to the left and to the right fitted as the mean pixel value of the overlapping parts of the multiple block images;
in step S2, fusing the block edema label maps to obtain the slice edema label map means splicing all the block edema label maps at the positions the original blocks occupied in the slice, with the overlapping parts of adjacent blocks above, below, to the left and to the right fitted as the mean pixel value of the overlapping parts of the multiple block images;
in step S3, the villus region and the non-villus region are the regions with pixel values 0 and 255 in the slice villus label map, the edema region and the non-edema region are the regions with pixel values 0 and 255 in the slice edema label map, and the pixel values of the edema regions outside the corresponding villus region in the slice edema label map are changed to 255, so as to obtain the slice edema distribution map.
2. The deep-learning-based hydatidiform mole slice image processing method of claim 1, wherein the hydatidiform mole slice image is obtained by scanning with a scanner.
3. The deep-learning-based hydatidiform mole slice image processing method of claim 1, wherein image segmentation network 1 is a network for villus region segmentation obtained by training on a data set of manually annotated hydatidiform mole villus block label images, and image segmentation network 2 is a network for edema region segmentation obtained by training on a data set of manually annotated hydatidiform mole edema block label images.
4. The deep-learning-based hydatidiform mole slice image processing method of claim 1, wherein:
in step S1, cutting the hydatidiform mole slice image into blocks means dividing the slice at an interval s1 into a plurality of blocks of edge length size1, two adjacent blocks above, below, to the left and to the right overlapping over an area of size1×(size1-s1), size1 being a pixel size of 3000×3000 to 18000×18000;
in step S2, cutting the hydatidiform mole slice image into blocks means dividing the slice at an interval s2 into a plurality of blocks of edge length size2, two adjacent blocks above, below, to the left and to the right overlapping over an area of size2×(size2-s2), size2 being a pixel size of 1500×1500 to 9000×9000.
5. The deep-learning-based hydatidiform mole slice image processing method of claim 1, wherein in steps S1 and S2 both image segmentation network 1 and image segmentation network 2 are convolutional-network-based image segmentation networks.
6. A deep-learning-based hydatidiform mole slice image processing device, characterized by comprising:
a slice villus label map generation module for cutting a hydatidiform mole slice image into blocks to obtain blocks one, inputting blocks one into image segmentation network 1 to obtain block villus label maps of the hydatidiform mole slice, and fusing all the block villus label maps to obtain a slice villus label map;
a slice edema label map generation module for cutting the hydatidiform mole slice image into blocks to obtain blocks two, inputting blocks two into image segmentation network 2 to obtain block edema label maps of the hydatidiform mole slice, and fusing all the block edema label maps to obtain a slice edema label map;
an edema distribution map generation module for obtaining a slice edema distribution map according to the slice edema label map and the slice villus label map;
in the slice villus label map generation module, fusing the block villus label maps to obtain the slice villus label map means splicing all the block villus label maps at the positions the original blocks occupied in the slice, with the overlapping parts of adjacent blocks above, below, to the left and to the right fitted as the mean pixel value of the overlapping parts of the multiple block images;
in the slice edema label map generation module, fusing the block edema label maps to obtain the slice edema label map means splicing all the block edema label maps at the positions the original blocks occupied in the slice, with the overlapping parts of adjacent blocks fitted as the mean pixel value of the overlapping parts of the multiple block images;
in the edema distribution map generation module, the villus region and the non-villus region are respectively the regions with pixel values 0 and 255 in the slice villus label map, the edema region and the non-edema region are respectively the regions with pixel values 0 and 255 in the slice edema label map, and the pixel values of the edema regions outside the corresponding villus region in the slice edema label map are changed to 255, so as to obtain the slice edema distribution map.
7. The deep-learning-based hydatidiform mole slice image processing device of claim 6, wherein the hydatidiform mole slice image is obtained by scanning with a scanner.
8. The deep-learning-based hydatidiform mole slice image processing device of claim 6, wherein image segmentation network 1 is a network for villus region segmentation obtained by training on a data set of manually annotated hydatidiform mole villus block label images, and image segmentation network 2 is a network for edema region segmentation obtained by training on a data set of manually annotated hydatidiform mole edema block label images.
9. The deep-learning-based hydatidiform mole slice image processing device of claim 6, wherein:
in the slice villus label map generation module, the hydatidiform mole slice image is divided at an interval s1 into a plurality of blocks of edge length size1, two adjacent blocks above, below, to the left and to the right overlapping over an area of size1×(size1-s1), size1 being a pixel size of 3000×3000 to 18000×18000;
in the slice edema label map generation module, the hydatidiform mole slice image is divided at an interval s2 into a plurality of blocks of edge length size2, two adjacent blocks above, below, to the left and to the right overlapping over an area of size2×(size2-s2), size2 being a pixel size of 1500×1500 to 9000×9000.
10. The deep-learning-based hydatidiform mole slice image processing device of claim 6, wherein image segmentation network 1 in the slice villus label map generation module and image segmentation network 2 in the slice edema label map generation module are both convolutional-network-based image segmentation networks.
CN202010828700.5A 2020-08-17 2020-08-17 Deep learning-based hydatidiform mole slice image processing method and device Active CN112070726B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010828700.5A CN112070726B (en) 2020-08-17 2020-08-17 Deep learning-based hydatidiform mole slice image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010828700.5A CN112070726B (en) 2020-08-17 2020-08-17 Deep learning-based hydatidiform mole slice image processing method and device

Publications (2)

Publication Number Publication Date
CN112070726A CN112070726A (en) 2020-12-11
CN112070726B (en) 2024-09-17

Family

ID=73661864

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010828700.5A Active 2020-08-17 2020-08-17 Deep learning-based hydatidiform mole slice image processing method and device

Country Status (1)

Country Link
CN (1) CN112070726B (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109298170A (en) * 2011-04-29 2019-02-01 塞尔雷斯蒂斯有限公司 Cell-mediated immune response power test
US9684967B2 (en) * 2015-10-23 2017-06-20 International Business Machines Corporation Imaging segmentation using multi-scale machine learning approach
CN108305253B (en) * 2018-03-08 2021-04-06 麦克奥迪(厦门)医疗诊断系统有限公司 Pathological image classification method based on multiple-time rate deep learning
CN109741335B (en) * 2018-11-28 2021-05-14 北京理工大学 Method and device for segmentation of blood vessel wall and blood flow region in blood vessel OCT image
CN111062947B (en) * 2019-08-14 2023-04-25 深圳市智影医疗科技有限公司 X-ray chest radiography focus positioning method and system based on deep learning
CN110705425B (en) * 2019-09-25 2022-06-28 广州西思数字科技有限公司 Tongue picture multi-label classification method based on graph convolution network
CN111026799B (en) * 2019-12-06 2023-07-18 安翰科技(武汉)股份有限公司 Method, equipment and medium for structuring text of capsule endoscopy report

Also Published As

Publication number Publication date
CN112070726A (en) 2020-12-11

Similar Documents

Publication Publication Date Title
US8391575B2 (en) Automatic image analysis and quantification for fluorescence in situ hybridization
JP2022530280A (en) Systems and methods for processing images to prepare slides for processed images for digital pathology
CN107330263A Computer-aided method for histological grading of invasive ductal carcinoma of the breast
CN108257129A (en) The recognition methods of cervical biopsy region aids and device based on multi-modal detection network
CN108388841A (en) Cervical biopsy area recognizing method and device based on multiple features deep neural network
US8542899B2 (en) Automatic image analysis and quantification for fluorescence in situ hybridization
WO2012041333A1 (en) Automated imaging, detection and grading of objects in cytological samples
CN115036011A (en) System for solid tumor prognosis evaluation based on digital pathological image
CN115205250A (en) Pathological image lesion segmentation method and system based on deep learning
CN111481233A Measurement of fetal nuchal translucency thickness
CN112070726B Deep learning-based hydatidiform mole slice image processing method and device
CN112070725B Deep learning-based hydatidiform mole slice image processing method and device
CN102203657A Procedure for preparing a processed virtual analysis image
CN113313685B (en) Renal tubular atrophy region identification method and system based on deep learning
CN113408533B (en) Construction method of chromosome abnormality prediction model based on fetal ultrasound image characteristic omics and diagnosis equipment
CN112102245B Deep learning-based hydatidiform mole slice image processing method and device
CN114764855A (en) Intelligent cystoscope tumor segmentation method, device and equipment based on deep learning
CN116364229B (en) Intelligent visual pathological report system for cervical cancer anterior lesion coning specimen
CN113989297B (en) Tumor region segmentation method based on multimodal eyelid tumor data fusion
CN113658209A (en) Morphology-based method for judging excellent mounting position of sliced tissue
CN117153343B (en) Placenta multiscale analysis system
CN112184618B Deep learning-based hydatidiform mole slice image processing method and device
CN111710394A (en) AI-assisted early gastric cancer screening system
CN120107963A Hydatidiform mole edema lesion segmentation method and device based on a large image model
CN120107962A Hydatidiform mole hyperplasia lesion segmentation method and device based on a large image model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant